This is the start of the stable review cycle for the 5.9.4 release. There are 391 patches in this series, all of which will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu, 05 Nov 2020 20:29:58 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.9.4-rc1.g...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.9.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 5.9.4-rc1
Stephen Boyd swboyd@chromium.org KVM: arm64: ARM_SMCCC_ARCH_WORKAROUND_1 doesn't return SMCCC_RET_NOT_REQUIRED
Quanyang Wang quanyang.wang@windriver.com time/sched_clock: Mark sched_clock_read_begin/retry() as notrace
Zeng Tao prime.zeng@hisilicon.com time: Prevent undefined behaviour in timespec64_to_ns()
Jing Xiangfeng jingxiangfeng@huawei.com vdpa/mlx5: Fix error return in map_direct_mr()
Dan Carpenter dan.carpenter@oracle.com vhost_vdpa: Return -EFAULT if copy_from_user() fails
Rafael J. Wysocki rafael.j.wysocki@intel.com cpufreq: schedutil: Always call driver if CPUFREQ_NEED_UPDATE_LIMITS is set
Rafael J. Wysocki rafael.j.wysocki@intel.com cpufreq: Introduce cpufreq_driver_test_flags()
Alexander Sverdlin alexander.sverdlin@nokia.com staging: octeon: Drop on uncorrectable alignment or FCS error
Alexander Sverdlin alexander.sverdlin@nokia.com staging: octeon: repair "fixed-link" support
Ian Abbott abbotti@mev.co.uk staging: comedi: cb_pcidas: Allow 2-channel commands for AO subdevice
Jing Xiangfeng jingxiangfeng@huawei.com staging: fieldbus: anybuss: jump to correct label in an error path
Zong Li zong.li@sifive.com stop_machine, rcu: Mark functions as notrace
Marc Zyngier maz@kernel.org KVM: arm64: Fix AArch32 handling of DBGD{CCINT,SCRext} and DBGVCR
Takashi Iwai tiwai@suse.de KVM: x86: Fix NULL dereference at kvm_msr_ignored_check()
Andy Shevchenko andriy.shevchenko@linux.intel.com device property: Don't clear secondary pointer for shared primary firmware node
Andy Shevchenko andriy.shevchenko@linux.intel.com device property: Keep secondary firmware node secondary by type
Suzuki K Poulose suzuki.poulose@arm.com coresight: cti: Initialize dynamic sysfs attributes
Kanchan Joshi joshi.k@samsung.com null_blk: synchronization fix for zoned device
Pali Rohár pali@kernel.org arm64: dts: marvell: espressobin: Add ethernet switch aliases
Fangrui Song maskray@google.com arm64: Change .weak to SYM_FUNC_START_WEAK_PI for arch/arm64/lib/mem*.S
Krzysztof Kozlowski krzk@kernel.org ARM: s3c24xx: fix missing system reset
Krzysztof Kozlowski krzk@kernel.org ARM: samsung: fix PM debug build with DEBUG_LL but !MMU
Joel Stanley joel@jms.id.au ARM: config: aspeed: Fix selection of media drivers
Krzysztof Kozlowski krzk@kernel.org ARM: dts: s5pv210: fix pinctrl property of "vibrator-en" regulator in Aries
Joel Stanley joel@jms.id.au ARM: aspeed: g5: Do not set sirq polarity
Frank Wunderlich frank-w@public-files.de arm: dts: mt7623: add missing pause for switchport
Helge Deller deller@gmx.de hil/parisc: Disable HIL driver when it gets stuck
Matthew Wilcox (Oracle) willy@infradead.org cachefiles: Handle readpage error correctly
Jisheng Zhang Jisheng.Zhang@synaptics.com arm64: berlin: Select DW_APB_TIMER_OF
Linus Torvalds torvalds@linux-foundation.org tty: make FONTX ioctl use the tty pointer they were actually passed
Likun Gao Likun.Gao@amd.com drm/amdgpu: correct the cu and rb info for sienna cichlid
Andrey Grodzovsky andrey.grodzovsky@amd.com drm/amd/psp: Fix sysfs: cannot create duplicate filename
Kevin Wang kevin1.wang@amd.com drm/amd/swsmu: add missing feature map for sienna_cichlid
Kenneth Feng kenneth.feng@amd.com drm/amd/pm: fix pp_dpm_fclk
Evan Quan evan.quan@amd.com drm/amd/pm: increase mclk switch threshold to 200 us
Alex Deucher alexander.deucher@amd.com drm/amdgpu/swsmu: drop smu i2c bus on navi1x
Andrei Vagin avagin@gmail.com futex: Adjust absolute futex timeouts with per time namespace offset
Alex Dewar alex.dewar90@gmail.com memory: brcmstb_dpfe: Fix memory leak
Thierry Reding treding@nvidia.com memory: tegra: Remove GPU from DRM IOMMU group
Jisheng Zhang Jisheng.Zhang@synaptics.com mmc: sdhci: Use Auto CMD Auto Select only when v4_mode is true
Michael Walle michael@walle.cc mmc: sdhci-of-esdhc: set timeout to max before tuning
Yangbo Lu yangbo.lu@nxp.com mmc: sdhci-of-esdhc: make sure delay chain locked for HS400
Dave Airlie airlied@redhat.com drm/ttm: fix eviction valuable range check.
yangerkun yangerkun@huawei.com ext4: do not use extent after put_bh
Ritesh Harjani riteshh@linux.ibm.com ext4: fix bs < ps issue reported with dioread_nolock mount opt
Zhang Xiaoxu zhangxiaoxu5@huawei.com ext4: fix bdev write error check failed when mount fs with ro
zhangyi (F) yi.zhang@huawei.com ext4: clear buffer verified flag if read meta block from disk
Constantine Sapuntzakis costa@purestorage.com ext4: fix superblock checksum calculation race
Luo Meng luomeng12@huawei.com ext4: fix invalid inode checksum
Ritesh Harjani riteshh@linux.ibm.com ext4: implement swap_activate aops using iomap
Dinghao Liu dinghao.liu@zju.edu.cn ext4: fix error handling code in add_new_gdb
Eric Biggers ebiggers@google.com ext4: fix leaking sysfs kobject after failed mount
Stefano Garzarella sgarzare@redhat.com vringh: fix __vringh_iov() when riov and wiov are different
Rafael J. Wysocki rafael.j.wysocki@intel.com cpufreq: intel_pstate: Avoid missing HWP max updates in passive mode
Rafael J. Wysocki rafael.j.wysocki@intel.com cpufreq: Introduce CPUFREQ_NEED_UPDATE_LIMITS driver flag
Rafael J. Wysocki rafael.j.wysocki@intel.com cpufreq: Avoid configuring old governors as default with intel_pstate
Chen Yu yu.c.chen@intel.com intel_idle: Fix max_cstate for processor models without C-state tables
Mel Gorman mgorman@techsingularity.net intel_idle: Ignore _CST if control cannot be taken from the platform
Qiujun Huang hqjagain@gmail.com ring-buffer: Return 0 on success from ring_buffer_resize()
Ansuel Smith ansuelsmth@gmail.com PCI: qcom: Make sure PCIe is reset before init for rev 2.1.0
Artur Molchanov arturmolchanov@gmail.com net/sunrpc: Fix return value for sysctl sunrpc.transports
Matthew Wilcox (Oracle) willy@infradead.org 9P: Cast to loff_t before multiplying
Ilya Dryomov idryomov@gmail.com libceph: clear con->out_msg on Policy::stateful_server faults
Matthew Wilcox (Oracle) willy@infradead.org ceph: promote to unsigned long long before shifting
Takashi Iwai tiwai@suse.de drm/amd/display: Fix kernel panic by dal_gpio_open() error
Takashi Iwai tiwai@suse.de drm/amd/display: Don't invoke kgdb_breakpoint() unconditionally
Christian König christian.koenig@amd.com drm/amdgpu: increase the reserved VM size to 2MB
Likun Gao Likun.Gao@amd.com drm/amdgpu: add function to program pbb mode for sienna cichlid
Andrey Grodzovsky andrey.grodzovsky@amd.com drm/amd/display: Avoid MST manager resource leak.
Jay Cornwall jay.cornwall@amd.com drm/amdkfd: Use same SQ prefetch setting as amdgpu
Evan Quan evan.quan@amd.com drm/amdgpu: correct the gpu reset handling for job != NULL case
Likun Gao Likun.Gao@amd.com drm/amdgpu: update golden setting for sienna_cichlid
Veerabadhran G vegopala@amd.com drm/amdgpu: vcn and jpeg ring synchronization
Wesley Chalmers Wesley.Chalmers@amd.com drm/amd/display: Increase timeout for DP Disable
David Galiffi David.Galiffi@amd.com drm/amd/display: Fix incorrect backlight register offset for DCN
Madhav Chauhan madhav.chauhan@amd.com drm/amdgpu: don't map BO in reserved region
Krzysztof Kozlowski krzk@kernel.org i2c: imx: Fix external abort on interrupt in exit paths
Bartosz Golaszewski bgolaszewski@baylibre.com rtc: rx8010: don't modify the global rtc ops
Krzysztof Kozlowski krzk@kernel.org ia64: fix build error with !COREDUMP
Zhihao Cheng chengzhihao1@huawei.com ubi: check kthread_should_stop() after the setting of task state
Vineet Gupta vgupta@synopsys.com ARC: perf: redo the pct irq missing in device-tree handling
Jiri Olsa jolsa@kernel.org perf python scripting: Fix printable strings in python3 scripts
Kim Phillips kim.phillips@amd.com perf vendor events amd: Add L2 Prefetch events for zen1
Zhihao Cheng chengzhihao1@huawei.com ubifs: mount_ubifs: Release authentication resource in error handling path
Zhihao Cheng chengzhihao1@huawei.com ubifs: Don't parse authentication mount options in remount process
Zhihao Cheng chengzhihao1@huawei.com ubifs: Fix a memleak after dumping authentication mount options
Richard Weinberger richard@nod.at ubifs: journal: Make sure to not dirty twice for auth nodes
Zhihao Cheng chengzhihao1@huawei.com ubifs: xattr: Fix some potential memory leaks while iterating entries
Zhihao Cheng chengzhihao1@huawei.com ubifs: dent: Fix some potential memory leaks while iterating entries
Chuck Lever chuck.lever@oracle.com NFSD: Add missing NFSv2 .pc_func methods
Olga Kornievskaia kolga@netapp.com NFSv4.2: support EXCHGID4_FLAG_SUPP_FENCE_OPS 4.2 EXCHANGE_ID flag
Benjamin Coddington bcodding@redhat.com NFSv4: Wait for stateid updates after CLOSE/OPEN_DOWNGRADE
Bob Peterson rpeterso@redhat.com gfs2: Only access gl_delete for iopen glocks
Andreas Gruenbacher agruenba@redhat.com gfs2: Make sure we don't miss any delayed withdraws
Sibi Sankar sibis@codeaurora.org remoteproc: Fixup coredump debugfs disable request
Jens Axboe axboe@kernel.dk io_uring: use type appropriate io_kiocb handler for double poll
Naohiro Aota naohiro.aota@wdc.com block: advance iov_iter on bio_add_hw_page failure
Christophe Leroy christophe.leroy@csgroup.eu powerpc/32: Fix vmap stack - Properly set r1 before activating MMU
Christophe Leroy christophe.leroy@csgroup.eu powerpc/32: Fix vmap stack - Do not activate MMU before reading task struct
Michael Neuling mikey@neuling.org powerpc: Fix undetected data corruption with P9N DD2.1 VSX CI load emulation
Ganesh Goudar ganeshgr@linux.ibm.com powerpc/mce: Avoid nmi_enter/exit in real mode on pseries hash
Christophe Leroy christophe.leroy@csgroup.eu powerpc/powermac: Fix low_sleep_handler with KUAP and KUEP
Mahesh Salgaonkar mahesh@linux.ibm.com powerpc/powernv/elog: Fix race while processing OPAL error log event.
Aneesh Kumar K.V aneesh.kumar@linux.ibm.com powerpc/memhotplug: Make lmb size 64bit
Joel Stanley joel@jms.id.au powerpc: Warn about use of smt_snooze_delay
Andrew Donnellan ajd@linux.ibm.com powerpc/rtas: Restrict RTAS requests from userspace
Sven Schnelle svens@linux.ibm.com s390/stp: add locking to sysfs functions
Paul Cercueil paul@crapouillou.net MIPS: configs: lb60: Fix defconfig not selecting correct board
Maciej W. Rozycki macro@linux-mips.org MIPS: DEC: Restore bootmem reservation for firmware working memory area
Paul E. McKenney paulmck@kernel.org rcu-tasks: Enclose task-list scan in rcu_read_lock()
Paul E. McKenney paulmck@kernel.org rcu-tasks: Fix low-probability task_struct leak
Paul E. McKenney paulmck@kernel.org rcu-tasks: Fix grace-period/unlock race in RCU Tasks Trace
Aneesh Kumar K.V aneesh.kumar@linux.ibm.com powerpc/drmem: Make lmb_size 64 bit
Jonathan Cameron Jonathan.Cameron@huawei.com iio:gyro:itg3200: Fix timestamp alignment and prevent data leak.
Jonathan Cameron Jonathan.Cameron@huawei.com iio:imu:st_lsm6dsx Fix alignment and data leak issues
Jonathan Cameron Jonathan.Cameron@huawei.com iio:adc:ti-adc12138 Fix alignment issue with timestamp
Jonathan Cameron Jonathan.Cameron@huawei.com iio:adc:ti-adc0832 Fix alignment issue with timestamp
Nuno Sá nuno.sa@analog.com iio: ad7292: Fix of_node refcounting
Tobias Jordan kernel@cdqe.de iio: adc: gyroadc: fix leak of device node iterator
Jonathan Cameron Jonathan.Cameron@huawei.com iio:light:si1145: Fix timestamp alignment and prevent data leak.
Tom Rix trix@redhat.com iio:imu:st_lsm6dsx: check st_lsm6dsx_shub_read_output return
Jonathan Cameron Jonathan.Cameron@huawei.com iio:imu:inv_mpu6050 Fix dma and ts alignment and data leak issues.
Eugen Hristev eugen.hristev@microchip.com iio: adc: at91-sama5d2_adc: fix DMA conversion crash
Nuno Sá nuno.sa@analog.com iio: ltc2983: Fix of_node refcounting
Daniel Vetter daniel.vetter@ffwll.ch drm/shme-helpers: Fix dma_buf_mmap forwarding bug
Laurent Vivier lvivier@redhat.com vdpa_sim: Fix DMA mask
Paul Cercueil paul@crapouillou.net dmaengine: dma-jz4780: Fix race in jz4780_dma_tx_status
Jan Kara jack@suse.cz udf: Fix memory leak when mounting
Christophe Leroy christophe.leroy@csgroup.eu powerpc: Fix random segfault when freeing hugetlb range
Michael S. Tsirkin mst@redhat.com Revert "vhost-vdpa: fix page pinning leakage in error path"
Gaurav Kohli gkohli@codeaurora.org tracing: Fix race in trace_open and buffer resize call
Vladimir Oltean vladimir.oltean@nxp.com tty: serial: fsl_lpuart: LS1021A has a FIFO size of 16 words, like LS1028A
Russell King rmk+kernel@armlinux.org.uk tty: serial: 21285: fix lockup on open
Borislav Petkov bp@suse.de x86/mce: Allow for copy_mc_fragile symbol checksum to be generated
Jason Gerecke jason.gerecke@wacom.com HID: wacom: Avoid entering wacom_wac_pen_report for pad / battery
Jiri Slaby jirislaby@kernel.org vt_ioctl: fix GIO_UNIMAP regression
Jiri Slaby jirislaby@kernel.org vt: keyboard, extend func_buf_lock to readers
Jiri Slaby jirislaby@kernel.org vt: keyboard, simplify vt_kdgkbsent
Chris Wilson chris@chris-wilson.co.uk drm/i915: Force VT'd workarounds when running as a guest OS
Bastien Nocera hadess@hadess.net USB: apple-mfi-fastcharge: don't probe unhandled devices
Bastien Nocera hadess@hadess.net usbcore: Check both id_table and match() when both available
Ran Wang ran.wang_1@nxp.com usb: host: fsl-mph-dr-of: check return of dma_set_mask()
Li Jun jun.li@nxp.com usb: typec: tcpm: reset hard_reset_count for any disconnect
Jerome Brunet jbrunet@baylibre.com usb: cdc-acm: fix cooldown mechanism
Pawel Laszczak pawell@cadence.com usb: cdns3: Fix on-chip memory overflow issue
Thinh Nguyen Thinh.Nguyen@synopsys.com usb: dwc3: gadget: END_TRANSFER before CLEAR_STALL command
Thinh Nguyen Thinh.Nguyen@synopsys.com usb: dwc3: gadget: Resume pending requests after CLEAR_STALL
Li Jun jun.li@nxp.com usb: dwc3: core: don't trigger runtime pm when remove driver
Li Jun jun.li@nxp.com usb: dwc3: core: add phy cleanup for probe error handling
Thinh Nguyen Thinh.Nguyen@synopsys.com usb: dwc3: gadget: Reclaim extra TRBs after request completion
Thinh Nguyen Thinh.Nguyen@synopsys.com usb: dwc3: gadget: Check MPS of the request length
Thinh Nguyen Thinh.Nguyen@synopsys.com usb: dwc3: ep0: Fix ZLP for OUT ep0 requests
Raymond Tan raymond.tan@intel.com usb: dwc3: pci: Allow Elkhart Lake to utilize DSM method for PM functionality
Sandeep Singh sandeep.singh@amd.com usb: xhci: Workaround for S3 issue on AMD SNPS 3.0 xHC
Josef Bacik josef@toxicpanda.com btrfs: drop the path before adding block group sysfs files
Filipe Manana fdmanana@suse.com btrfs: fix readahead hang and use-after-free after removing a device
Filipe Manana fdmanana@suse.com btrfs: fix use-after-free on readahead extent after failure to create it
Daniel Xu dxu@dxuuu.xyz btrfs: tree-checker: validate number of chunk stripes and parity
Anand Jain anand.jain@oracle.com btrfs: skip devices without magic signature when mounting
Josef Bacik josef@toxicpanda.com btrfs: cleanup cow block on error
Johannes Thumshirn johannes.thumshirn@wdc.com btrfs: reschedule when cloning lots of extents
Qu Wenruo wqu@suse.com btrfs: tree-checker: fix false alert caused by legacy btrfs root item
Denis Efremov efremov@linux.com btrfs: use kvzalloc() to allocate clone_roots in btrfs_ioctl_send()
Filipe Manana fdmanana@suse.com btrfs: send, recompute reference path after orphanization of a directory
Filipe Manana fdmanana@suse.com btrfs: send, orphanize first all conflicting inodes when processing references
Filipe Manana fdmanana@suse.com btrfs: reschedule if necessary when logging directory items
Qu Wenruo wqu@suse.com btrfs: tracepoints: output proper root owner for trace_find_free_extent()
Josef Bacik josef@toxicpanda.com btrfs: sysfs: init devices outside of the chunk_mutex
Qu Wenruo wqu@suse.com btrfs: qgroup: fix qgroup meta rsv leak for subvolume operations
Anand Jain anand.jain@oracle.com btrfs: improve device scanning messages
Qu Wenruo wqu@suse.com btrfs: qgroup: fix wrong qgroup metadata reserve for delayed inode
Xiang Chen chenxiang66@hisilicon.com PM: runtime: Remove link state checks in rpm_get/put_supplier()
Quinn Tran qutran@marvell.com scsi: qla2xxx: Fix crash on session cleanup with unload
Arun Easi aeasi@marvell.com scsi: qla2xxx: Fix reset of MPI firmware
Arun Easi aeasi@marvell.com scsi: qla2xxx: Fix MPI reset needed message
Helge Deller deller@gmx.de scsi: mptfusion: Fix null pointer dereferences in mptscsih_remove()
Kees Cook keescook@chromium.org fs/kernel_read_file: Remove FIRMWARE_PREALLOC_BUFFER enum
Martin Fuzzey martin.fuzzey@flowbird.group w1: mxc_w1: Fix timeout resolution problem leading to bus error
Jens Axboe axboe@kernel.dk io-wq: assign NUMA node locality if appropriate
Wei Huang wei.huang2@amd.com acpi-cpufreq: Honor _PSD table setting on new AMD CPUs
Rafael J. Wysocki rafael.j.wysocki@intel.com ACPI: EC: PM: Drop ec_no_wakeup check from acpi_ec_dispatch_gpe()
Rafael J. Wysocki rafael.j.wysocki@intel.com ACPI: EC: PM: Flush EC work unconditionally after wakeup
Lukas Wunner lukas@wunner.de PCI/ACPI: Whitelist hotplug ports for D3 if power managed by ACPI
Jamie Iles jamie@nuviainc.com ACPI: debug: don't allow debugging when ACPI is disabled
Alex Hung alex.hung@canonical.com ACPI: video: use ACPI backlight for HP 635 Notebook
Ben Hutchings ben@decadent.org.uk ACPI / extlog: Check for RDMSR failure
dmitry.torokhov@gmail.com dmitry.torokhov@gmail.com ACPI: button: fix handling lid state changes when input device closed
Ashish Sangwan ashishsangwan2@gmail.com NFS: fix nfs_path in case of a rename retry
Hanjun Guo guohanjun@huawei.com ACPI: configfs: Add missing config_item_put() to fix refcount leak
Jan Kara jack@suse.cz fs: Don't invalidate page buffers in block_write_full_page()
Hans de Goede hdegoede@redhat.com media: uvcvideo: Fix uvc_ctrl_fixup_xu_info() not having any effect
Steve Foreman foremans@google.com hwmon: (pmbus/max34440) Fix OC fault limits
Marek Behún marek.behun@nic.cz leds: bcm6328, bcm6358: use devres LED registering function
Krzysztof Kozlowski krzk@kernel.org extcon: ptn5150: Fix usage of atomic GPIO with sleeping GPIO chips
Krzysztof Kozlowski krzk@kernel.org spi: sprd: Release DMA channel also on probe deferral
Chuanhong Guo gch981213@gmail.com spi: spi-mtk-nor: fix timeout calculation overflow
Kim Phillips kim.phillips@amd.com perf/x86/amd/ibs: Fix raw sample data accumulation
Kim Phillips kim.phillips@amd.com perf/x86/amd/ibs: Don't include randomized bits in get_ibs_op_count()
Kim Phillips kim.phillips@amd.com perf/amd/uncore: Set all slices and threads to restore perf stat -a behaviour
Kim Phillips kim.phillips@amd.com perf/x86/amd: Fix sampling Large Increment per Cycle events
Kan Liang kan.liang@linux.intel.com perf/x86/intel: Fix Ice Lake event constraint table
Andy Lutomirski luto@kernel.org selftests/x86/fsgsbase: Test PTRACE_PEEKUSER for GSBASE with invalid LDT GS
Jann Horn jannh@google.com seccomp: Make duplicate listener detection non-racy
Bharata B Rao bharata@linux.ibm.com mm: memcg/slab: uncharge during kmem_cache_free_bulk()
Raul E Rangel rrangel@chromium.org mmc: sdhci-acpi: AMDI0040: Set SDHCI_QUIRK2_PRESET_VALUE_BROKEN
Adrian Hunter adrian.hunter@intel.com mmc: sdhci: Add LTR support for some Intel BYT based controllers
Song Liu songliubraving@fb.com md/raid5: fix oops during stripe resizing
Guoqing Jiang guoqing.jiang@cloud.ionos.com md: fix the checking of wrong work queue
Huacai Chen chenhc@lemote.com irqchip/loongson-htvec: Fix initial interrupt clearing
Nick Desaulniers ndesaulniers@google.com vmlinux.lds.h: Add PGO and AutoFDO input sections
Chao Leng lengchao@huawei.com nvme-rdma: fix crash when connect rejected
Douglas Gilbert dgilbert@interlog.com sgl_alloc_order: fix memory leak
Xiubo Li xiubli@redhat.com nbd: make the config put is called before the notifying the waiter
Konrad Dybcio konradybcio@gmail.com arm64: dts: qcom: kitakami: Temporarily disable SDHCI1
Sudeep Holla sudeep.holla@arm.com firmware: arm_scmi: Move scmi bus init and exit calls into the driver
Grygorii Strashko grygorii.strashko@ti.com bindings: soc: ti: soc: ringacc: remove ti,dma-ring-reset-quirk
Grygorii Strashko grygorii.strashko@ti.com soc: ti: k3: ringacc: add am65x sr2.0 support
Stephen Boyd swboyd@chromium.org soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
Krzysztof Kozlowski krzk@kernel.org ARM: dts: s5pv210: align SPI GPIO node name with dtschema in Aries
Krzysztof Kozlowski krzk@kernel.org ARM: dts: s5pv210: add RTC 32 KHz clock in Aries family
Krzysztof Kozlowski krzk@kernel.org ARM: dts: s5pv210: remove dedicated 'audio-subsystem' node
Krzysztof Kozlowski krzk@kernel.org ARM: dts: s5pv210: move PMU node out of clock controller
Krzysztof Kozlowski krzk@kernel.org ARM: dts: s5pv210: move fixed clocks under root node
Krzysztof Kozlowski krzk@kernel.org ARM: dts: s5pv210: remove DMA controller bus node name to fix dtschema warnings
Jonathan Bakker xc-racer2@live.ca ARM: dts: s5pv210: Enable audio on Aries boards
Dan Carpenter dan.carpenter@oracle.com memory: emif: Remove bogus debugfs error handling
Tony Lindgren tony@atomide.com ARM: dts: omap4: Fix sgx clock rate for 4430
Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com arm64: dts: renesas: ulcb: add full-pwr-cycle-in-suspend into eMMC nodes
Ronnie Sahlberg lsahlber@redhat.com cifs: handle -EINTR in cifs_setattr
Rohith Surabattula rohiths@microsoft.com Handle STATUS_IO_TIMEOUT gracefully
Anant Thazhemadam anant.thazhemadam@gmail.com gfs2: add validation checks for size of superblock
Jamie Iles jamie@nuviainc.com gfs2: use-after-free in sysfs deregistration
Andrew Price anprice@redhat.com gfs2: Fix NULL pointer dereference in gfs2_rgrp_dump
Bob Peterson rpeterso@redhat.com gfs2: call truncate_inode_pages_final for address space glocks
Christoph Hellwig hch@lst.de scsi: core: Clean up allocation and freeing of sgtables
Fabiano Rosas farosas@linux.ibm.com KVM: PPC: Book3S HV: Do not allocate HPT for a nested guest
Jan Kara jack@suse.cz ext4: Detect already used quota file early
changfengnan fengnanchang@foxmail.com jbd2: avoid transaction reuse after reformatting
Madhuparna Bhowmik madhuparnabhowmik10@gmail.com drivers: watchdog: rdc321x_wdt: Fix race condition bugs
Yan, Zheng zyan@redhat.com ceph: encode inodes' parent/d_name in cap reconnect message
Anant Thazhemadam anant.thazhemadam@gmail.com net: 9p: initialize sun_server.sun_path to have addr's value only when addr is valid
J. Bruce Fields bfields@redhat.com nfsd4: remove check_conflicting_opens warning
Hou Tao houtao1@huawei.com nfsd: rename delegation related tracepoints to make them less confusing
Tero Kristo t-kristo@ti.com clk: ti: clockdomain: fix static checker warning
Tuan Phan tuanphan@os.amperecomputing.com PCI/ACPI: Add Ampere Altra SOC MCFG quirk
Chris Lew clew@codeaurora.org rpmsg: glink: Use complete_all for open states
Michael Chan michael.chan@broadcom.com bnxt_en: Log unknown link speed appropriately.
Chao Yu chao@kernel.org f2fs: fix to set SBI_NEED_FSCK flag for inconsistent inode
Zhao Heming heming.zhao@suse.com md/bitmap: md_bitmap_get_counter returns wrong blocks
Anand Jain anand.jain@oracle.com btrfs: fix replace of seed device
Gabriel Krisman Bertazi krisman@collabora.com block: Consider only dispatched requests for inflight statistic
Zhen Lei thunder.leizhen@huawei.com ARC: [dts] fix the errors detected by dtbs_check
Rodrigo Siqueira Rodrigo.Siqueira@amd.com drm/amd/display: Avoid set zero in the requested clk
Fangzhi Zuo Jerry.Zuo@amd.com drm/amd/display: HDMI remote sink need mode validation for Linux
Xiongfeng Wang wangxiongfeng2@huawei.com power: supply: test_power: add missing newlines when printing parameters by sysfs
Jonathan Cameron Jonathan.Cameron@huawei.com ACPI: HMAT: Fix handling of changes from ACPI 6.2 to ACPI 6.3
Diana Craciun diana.craciun@oss.nxp.com bus/fsl_mc: Do not rely on caller to provide non NULL mc_io
Bhaumik Bhatt bbhatt@codeaurora.org bus: mhi: core: Abort suspends due to outgoing pending packets
Li Jun jun.li@nxp.com usb: dwc3: core: do not queue work if dr_mode is not USB_DR_MODE_OTG
Xie He xie.he.0141@gmail.com drivers/net/wan/hdlc_fr: Correctly handle special skb->protocol values
Wen Gong wgong@codeaurora.org ath11k: change to disable softirqs for ath11k_regd_update to solve deadlock
Carl Huang cjhuang@codeaurora.org ath11k: fix warning caused by lockdep_assert_held
Wen Gong wgong@codeaurora.org ath11k: Use GFP_ATOMIC instead of GFP_KERNEL in ath11k_dp_htt_get_ppdu_desc
Wright Feng wright.feng@cypress.com brcmfmac: Fix warning message after dongle setup failed
Stanislaw Kardach skardach@marvell.com octeontx2-af: fix LD CUSTOM LTYPE aliasing
Jonathan Cameron Jonathan.Cameron@huawei.com ACPI: Add out of bounds and numa_off protections to pxm_to_node()
Gao Xiang hsiangkao@redhat.com xfs: avoid LR buffer overrun due to crafted h_len
Darrick J. Wong darrick.wong@oracle.com xfs: don't free rt blocks when we're doing a REMAP bunmapi call
farah kassabri fkassabri@habana.ai habanalabs: remove security from ARB_MST_QUIET register
Joakim Zhang qiangqing.zhang@nxp.com can: flexcan: disable clocks during stop mode
Zhengyuan Liu liuzhengyuan@tj.kylinos.cn arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE
Dmitry Osipenko digetx@gmail.com cpuidle: tegra: Correctly handle result of arm_cpuidle_simple_enter()
Chuck Lever chuck.lever@oracle.com SUNRPC: Mitigate cond_resched() in xprt_transmit()
Peter Chen peter.chen@nxp.com usb: xhci: omit duplicate actions when suspending a runtime suspended host.
Felix Fietkau nbd@nbd.name mac80211: add missing queue/hash initialization to 802.3 xmit
Luben Tuikov luben.tuikov@amd.com drm/amdgpu: No sysfs, not an error condition
Linu Cherian lcherian@marvell.com coresight: Make sysfs functional on topologies with per core sink
Lang Dai lang.dai@intel.com uio: free uio id after uio file node is freed
Oliver Neukum oneukum@suse.com USB: adutux: fix debugging
Alain Volmat avolmat@me.com cpufreq: sti-cpufreq: add stih418 support
Zong Li zong.li@sifive.com riscv: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO
Rodrigo Siqueira Rodrigo.Siqueira@amd.com drm/amd/display: Check clock table return
Magnus Karlsson magnus.karlsson@intel.com samples/bpf: Fix possible deadlock in xdpsock
Stephen Smalley stephen.smalley.work@gmail.com selinux: access policycaps with READ_ONCE/WRITE_ONCE
Yonghong Song yhs@fb.com selftests/bpf: Define string const as global for test_sysctl_prog.c
Krzysztof Kozlowski krzk@kernel.org nfc: s3fwrn5: Add missing CRYPTO_HASH dependency
Daniel W. S. Almeida dwlsalmeida@gmail.com media: uvcvideo: Fix dereference of out-of-bound list iterator
Marek Szyprowski m.szyprowski@samsung.com drm: panfrost: fix common struct sg_table related issues
Marek Szyprowski m.szyprowski@samsung.com drm: lima: fix common struct sg_table related issues
Marek Szyprowski m.szyprowski@samsung.com xen: gntdev: fix common struct sg_table related issues
Marek Szyprowski m.szyprowski@samsung.com drm: exynos: fix common struct sg_table related issues
Yonghong Song yhs@fb.com bpf: Permit map_ptr arithmetic with opcode add and offset 0
Douglas Anderson dianders@chromium.org kgdb: Make "kgdbcon" work properly with "kgdb_earlycon"
Michael Ellerman mpe@ellerman.id.au selftests/powerpc: Make using_hash_mmu() work on Cell & PowerMac
Masami Hiramatsu mhiramat@kernel.org ia64: kprobes: Use generic kretprobe trampoline handler
John Ogness john.ogness@linutronix.de printk: reduce LOG_BUF_SHIFT range for H8300
Valentin Schneider valentin.schneider@arm.com arm64: topology: Stop using MPIDR for topology information
Dmitry Osipenko digetx@gmail.com brcmfmac: increase F2 watermark for BCM4329
Antonio Borneo antonio.borneo@st.com drm/bridge/synopsys: dsi: add support for non-continuous HS clock
Madhuparna Bhowmik madhuparnabhowmik10@gmail.com mmc: via-sdmmc: Fix data race bug
Hans Verkuil hverkuil@xs4all.nl media: imx274: fix frame interval handling
Sidong Yang realwakka@gmail.com drm/vkms: avoid warning in vkms_get_vblank_timestamp
Tom Rix trix@redhat.com media: tw5864: check status of tw5864_frameinterval_get
Badhri Jagan Sridharan badhri@google.com usb: typec: tcpm: During PR_SWAP, source caps should be sent only after tSwapSourceStart
Xia Jiang xia.jiang@mediatek.com media: platform: Improve queue set up flow for bug fixing
Jérôme Pouiller jerome.pouiller@silabs.com staging: wfx: fix potential use before init
Marek Szyprowski m.szyprowski@samsung.com misc: fastrpc: fix common struct sg_table related issues
Akshu Agrawal akshu.agrawal@amd.com ASoC: AMD: Clean kernel log from deferred probe error messages
Hans Verkuil hverkuil-cisco@xs4all.nl media: videodev2.h: RGB BT2020 and HSV are always full range
Enric Balletbo i Serra enric.balletbo@collabora.com drm/bridge_connector: Set default status connected for eDP connectors
Andy Lutomirski luto@kernel.org selftests/x86/fsgsbase: Reap a forgotten child
Rander Wang rander.wang@intel.com ASoC: SOF: fix a runtime pm issue in SOF when HDMI codec doesn't work
Nadezda Lutovinova lutovinova@ispras.ru drm/brige/megachips: Add checking if ge_b850v3_lvds_init() is working correctly
Luben Tuikov luben.tuikov@amd.com drm/scheduler: Scheduler priority fixes (v2)
Sathishkumar Muruganandam murugana@codeaurora.org ath10k: fix VHT NSS calculation when STBC is enabled
Wen Gong wgong@codeaurora.org ath10k: start recovery process when payload length exceeds max htc length for sdio
Tom Rix trix@redhat.com video: fbdev: pvr2fb: initialize variables
Guchun Chen guchun.chen@amd.com drm/amdgpu: restore ras flags when user resets eeprom(v2)
Thomas Zimmermann tzimmermann@suse.de drm/ast: Separate DRM driver from PCI code
Arvind Sankar nivedita@alum.mit.edu x86/kaslr: Initialize mem_limit to the real maximum address
Venkateswara Naralasetty vnaralas@codeaurora.org ath10k: fix retry packets update in station dump
Pavel Begunkov asml.silence@gmail.com io_uring: don't set COMP_LOCKED if won't put
Darrick J. Wong darrick.wong@oracle.com xfs: fix realtime bitmap/summary file truncation when growing rt volume
Darrick J. Wong darrick.wong@oracle.com xfs: change the order in which child and parent defer ops are finished
Krzysztof Kozlowski krzk@kernel.org power: supply: bq27xxx: report "not charging" on all types
Darrick J. Wong darrick.wong@oracle.com xfs: log new intent items created as part of finishing recovered intent items
Chandan Babu R chandanrlinux@gmail.com xfs: Set xfs_buf's b_ops member when zeroing bitmap/summary files
Chandan Babu R chandanrlinux@gmail.com xfs: Set xfs_buf type flag when growing summary/bitmap files
Dave Wysochanski dwysocha@redhat.com NFS4: Fix oops when copy_file_range is attempted with NFS4.0 source
Douglas Anderson dianders@chromium.org ARM: 8997/2: hw_breakpoint: Handle inexact watchpoint addresses
Nicholas Piggin npiggin@gmail.com powerpc/64s: handle ISA v3.1 local copy-paste context switches
David Howells dhowells@redhat.com afs: Don't assert on unpurgeable server records
Jaegeuk Kim jaegeuk@kernel.org f2fs: handle errors of f2fs_get_meta_page_nofail
Johannes Berg johannes.berg@intel.com um: change sigio_spinlock to a mutex
Harald Freudenberger freude@linux.ibm.com s390/ap/zcrypt: revisit ap and zcrypt error handling
Chao Yu chao@kernel.org f2fs: compress: fix to disallow enabling compress on non-empty file
Vasily Gorbik gor@linux.ibm.com s390/startup: avoid save_area_sync overflow
Chao Yu chao@kernel.org f2fs: fix to check segment boundary during SIT page readahead
Chao Yu chao@kernel.org f2fs: fix uninit-value in f2fs_lookup
Chao Yu chao@kernel.org f2fs: do sanity check on zoned block device path
Zhang Qilong zhangqilong3@huawei.com f2fs: add trace exit in exception path
Nicholas Piggin npiggin@gmail.com sparc64: remove mm_cpumask clearing to fix kthread_use_mm race
Nicholas Piggin npiggin@gmail.com powerpc: select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
Nicholas Piggin npiggin@gmail.com mm: fix exec activate_mm vs TLB shootdown and lazy tlb switching race
Ravi Bangoria ravi.bangoria@linux.ibm.com powerpc/watchpoint/ptrace: Fix SETHWDEBUG when CONFIG_HAVE_HW_BREAKPOINT=N
Chao Yu chao@kernel.org f2fs: allocate proper size memory for zstd decompress
Jason Gunthorpe jgg@ziepe.ca RDMA/core: Change how failing destroy is handled during uobj abort
Oliver O'Halloran oohall@gmail.com powerpc/powernv/smp: Fix spurious DBG() warning
Aneesh Kumar K.V aneesh.kumar@linux.ibm.com powerpc/vmemmap: Fix memory leak with vmemmap list allocation failures.
Mateusz Nosek mateusznosek0@gmail.com futex: Fix incorrect should_fail_futex() handling
Tang Bin tangbin@cmss.chinamobile.com usb: host: ehci-tegra: Fix error handling in tegra_ehci_probe()
Peter Zijlstra peterz@infradead.org lockdep: Fix preemption WARN for spurious IRQ-enable
Georgi Djakov georgi.djakov@linaro.org interconnect: qcom: sdm845: Enable keepalive for the MM1 BCM
Laurent Vivier lvivier@redhat.com vdpasim: fix MAC address configuration
David Howells dhowells@redhat.com afs: Fix dirty-region encoding on ppc32 with 64K pages
David Howells dhowells@redhat.com afs: Fix afs_invalidatepage to adjust the dirty region
David Howells dhowells@redhat.com afs: Alter dirty range encoding in page->private
David Howells dhowells@redhat.com afs: Wrap page->private manipulations in inline functions
David Howells dhowells@redhat.com afs: Fix where page->private is set during write
David Howells dhowells@redhat.com afs: Fix page leak on afs_write_begin() failure
David Howells dhowells@redhat.com afs: Fix to take ref on page when PG_private is set
Ard Biesheuvel ardb@kernel.org arm64: efi: increase EFI PE/COFF header padding to 64 KB
Sascha Hauer s.hauer@pengutronix.de ata: sata_nv: Fix retrieving of active qcs
Alok Prasad palok@marvell.com RDMA/qedr: Fix memory leak in iWARP CM
David Howells dhowells@redhat.com afs: Fix afs_launder_page to not clear PG_writeback
Dan Carpenter dan.carpenter@oracle.com afs: Fix a use after free in afs_xattr_get_acl()
Sasha Levin sashal@kernel.org tracing, synthetic events: Replace buggy strcat() with seq_buf operations
Amit Cohen amcohen@nvidia.com mlxsw: core: Fix use-after-free in mlxsw_emad_trans_finish()
Parav Pandit parav@nvidia.com RDMA/mlx5: Fix devlink deadlock on net namespace deletion
Sasha Levin sashal@kernel.org ionic: no rx flush in deinit
Juergen Gross jgross@suse.com x86/alternative: Don't call text_poke() in lazy TLB mode
Florian Fainelli f.fainelli@gmail.com firmware: arm_scmi: Fix duplicate workqueue name
Cristian Marussi cristian.marussi@arm.com firmware: arm_scmi: Fix locking in notifications
Jiri Slaby jirislaby@kernel.org x86/unwind/orc: Fix inactive tasks with stack pointer in %sp on GCC 10 compiled kernels
Sudeep Holla sudeep.holla@arm.com firmware: arm_scmi: Add missing Rx size re-initialisation
Sumit Garg sumit.garg@linaro.org tee: client UUID: Skip REE kernel login method as well
Etienne Carriere etienne.carriere@linaro.org firmware: arm_scmi: Expand SMC/HVC message pool to more than one
Etienne Carriere etienne.carriere@linaro.org firmware: arm_scmi: Fix ARCH_COLD_RESET
Juergen Gross jgross@suse.com xen/events: block rogue events for some time
Juergen Gross jgross@suse.com xen/events: defer eoi in case of excessive number of events
Juergen Gross jgross@suse.com xen/events: use a common cpu hotplug hook for event channels
Juergen Gross jgross@suse.com xen/events: switch user event channels to lateeoi model
Juergen Gross jgross@suse.com xen/pciback: use lateeoi irq binding
Juergen Gross jgross@suse.com xen/pvcallsback: use lateeoi irq binding
Juergen Gross jgross@suse.com xen/scsiback: use lateeoi irq binding
Juergen Gross jgross@suse.com xen/netback: use lateeoi irq binding
Juergen Gross jgross@suse.com xen/blkback: use lateeoi irq binding
Juergen Gross jgross@suse.com xen/events: add a new "late EOI" evtchn framework
Juergen Gross jgross@suse.com xen/events: fix race in evtchn_fifo_unmask()
Juergen Gross jgross@suse.com xen/events: add a proper barrier to 2-level uevent unmasking
Juergen Gross jgross@suse.com xen/events: avoid removing an event channel while handling it
-------------
Diffstat:
Documentation/admin-guide/kernel-parameters.txt | 8 + .../devicetree/bindings/soc/ti/k3-ringacc.yaml | 6 - .../userspace-api/media/v4l/colorspaces-defs.rst | 9 +- .../media/v4l/colorspaces-details.rst | 5 +- Makefile | 4 +- arch/Kconfig | 7 + arch/arc/boot/dts/axc001.dtsi | 2 +- arch/arc/boot/dts/axc003.dtsi | 2 +- arch/arc/boot/dts/axc003_idu.dtsi | 2 +- arch/arc/boot/dts/vdk_axc003.dtsi | 2 +- arch/arc/boot/dts/vdk_axc003_idu.dtsi | 2 +- arch/arc/kernel/perf_event.c | 27 +- arch/arm/Kconfig | 2 + arch/arm/boot/dts/aspeed-g5.dtsi | 1 - arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts | 1 + arch/arm/boot/dts/omap4.dtsi | 2 +- arch/arm/boot/dts/omap443x.dtsi | 10 + arch/arm/boot/dts/s5pv210-aries.dtsi | 26 +- arch/arm/boot/dts/s5pv210-fascinate4g.dts | 98 +++++ arch/arm/boot/dts/s5pv210-galaxys.dts | 85 +++++ arch/arm/boot/dts/s5pv210.dtsi | 163 ++++---- arch/arm/configs/aspeed_g4_defconfig | 3 +- arch/arm/configs/aspeed_g5_defconfig | 3 +- arch/arm/kernel/hw_breakpoint.c | 100 +++-- arch/arm/plat-samsung/Kconfig | 1 + arch/arm64/Kconfig.platforms | 1 + .../marvell/armada-3720-espressobin-v7-emmc.dts | 10 +- .../dts/marvell/armada-3720-espressobin-v7.dts | 10 +- .../boot/dts/marvell/armada-3720-espressobin.dtsi | 12 +- .../dts/qcom/msm8994-sony-xperia-kitakami.dtsi | 7 +- arch/arm64/boot/dts/renesas/ulcb.dtsi | 1 + arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/include/asm/numa.h | 3 + arch/arm64/kernel/efi-header.S | 2 +- arch/arm64/kernel/topology.c | 32 +- arch/arm64/kvm/hypercalls.c | 2 +- arch/arm64/kvm/sys_regs.c | 6 +- arch/arm64/lib/memcpy.S | 3 +- arch/arm64/lib/memmove.S | 3 +- arch/arm64/lib/memset.S | 3 +- arch/arm64/mm/numa.c | 6 +- arch/ia64/kernel/Makefile | 2 +- arch/ia64/kernel/kprobes.c | 77 +--- arch/mips/configs/qi_lb60_defconfig | 1 + arch/mips/dec/setup.c | 9 +- arch/powerpc/Kconfig | 14 + arch/powerpc/include/asm/drmem.h | 4 +- arch/powerpc/include/asm/mmu_context.h | 2 +- arch/powerpc/kernel/head_32.S | 8 +- arch/powerpc/kernel/head_32.h | 72 ++-- arch/powerpc/kernel/mce.c | 7 +- arch/powerpc/kernel/process.c | 16 +- arch/powerpc/kernel/ptrace/ptrace-noadv.c | 2 +- arch/powerpc/kernel/rtas.c | 153 ++++++++ arch/powerpc/kernel/sysfs.c | 42 +- arch/powerpc/kernel/traps.c | 2 +- arch/powerpc/kvm/book3s_hv.c | 13 + arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8 + arch/powerpc/mm/hugetlbpage.c | 18 +- arch/powerpc/mm/init_64.c | 35 +- arch/powerpc/platforms/powermac/sleep.S | 9 +- arch/powerpc/platforms/powernv/opal-elog.c | 33 +- arch/powerpc/platforms/powernv/smp.c | 2 +- arch/powerpc/platforms/pseries/hotplug-memory.c | 43 ++- arch/riscv/include/uapi/asm/auxvec.h | 3 + arch/s390/boot/head.S | 21 +- arch/s390/kernel/time.c | 118 ++++-- arch/sparc/kernel/smp_64.c | 65 +--- arch/um/kernel/sigio.c | 6 +- arch/x86/boot/compressed/kaslr.c | 41 +- arch/x86/events/amd/ibs.c | 38 +- arch/x86/events/amd/uncore.c | 28 +- arch/x86/events/core.c | 4 +- arch/x86/events/intel/core.c | 2 +- arch/x86/include/asm/asm-prototypes.h | 1 + arch/x86/include/asm/msr-index.h | 1 + arch/x86/kernel/alternative.c | 9 + arch/x86/kernel/unwind_orc.c | 9 +- arch/x86/kvm/x86.c | 8 +- block/bio.c | 11 +- block/blk-mq.c | 2 +- drivers/acpi/acpi_configfs.c | 1 + drivers/acpi/acpi_dbg.c | 3 + drivers/acpi/acpi_extlog.c | 6 +- drivers/acpi/button.c | 13 +- drivers/acpi/ec.c | 10 +- drivers/acpi/numa/hmat.c | 3 +- drivers/acpi/numa/srat.c | 2 +- drivers/acpi/pci_mcfg.c | 20 + drivers/acpi/video_detect.c | 9 + drivers/ata/sata_nv.c | 2 +- drivers/base/core.c | 4 +- drivers/base/firmware_loader/main.c | 5 +- 
drivers/base/power/runtime.c | 5 +- drivers/block/nbd.c | 2 +- drivers/block/null_blk.h | 1 + drivers/block/null_blk_zoned.c | 22 +- drivers/block/xen-blkback/blkback.c | 22 +- drivers/block/xen-blkback/xenbus.c | 5 +- drivers/bus/fsl-mc/mc-io.c | 7 +- drivers/bus/mhi/core/pm.c | 6 +- drivers/clk/ti/clockdomain.c | 2 + drivers/cpufreq/Kconfig | 2 + drivers/cpufreq/acpi-cpufreq.c | 3 +- drivers/cpufreq/cpufreq.c | 15 +- drivers/cpufreq/intel_pstate.c | 13 +- drivers/cpufreq/sti-cpufreq.c | 6 +- drivers/cpuidle/cpuidle-tegra.c | 34 +- drivers/dma/dma-jz4780.c | 7 +- drivers/extcon/extcon-ptn5150.c | 8 +- drivers/firmware/arm_scmi/base.c | 2 + drivers/firmware/arm_scmi/bus.c | 6 +- drivers/firmware/arm_scmi/clock.c | 2 + drivers/firmware/arm_scmi/common.h | 5 + drivers/firmware/arm_scmi/driver.c | 24 +- drivers/firmware/arm_scmi/notify.c | 22 +- drivers/firmware/arm_scmi/perf.c | 2 + drivers/firmware/arm_scmi/reset.c | 4 +- drivers/firmware/arm_scmi/sensors.c | 2 + drivers/firmware/arm_scmi/smc.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 4 +- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 6 +- drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 10 + drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 3 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 13 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 6 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 2 + drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 1 + drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 4 +- drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 72 ++++ drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c | 24 +- drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c | 28 +- drivers/gpu/drm/amd/amdgpu/vcn_v1_0.h | 3 +- .../drm/amd/amdkfd/kfd_device_queue_manager_v10.c | 5 +- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 + .../amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c | 3 +- .../drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 7 +- drivers/gpu/drm/amd/display/dc/core/dc_link.c | 2 +- .../gpu/drm/amd/display/dc/dce/dce_panel_cntl.h | 2 +- .../amd/display/dc/dcn10/dcn10_stream_encoder.c | 4 +- drivers/gpu/drm/amd/display/dc/gpio/gpio_base.c | 4 +- drivers/gpu/drm/amd/display/dc/os_types.h | 2 +- drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 2 +- drivers/gpu/drm/amd/powerplay/inc/smu_types.h | 1 + drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 26 -- drivers/gpu/drm/amd/powerplay/sienna_cichlid_ppt.c | 6 + drivers/gpu/drm/ast/ast_drv.c | 59 +-- .../drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c | 12 +- drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c | 9 +- drivers/gpu/drm/drm_bridge_connector.c | 1 + drivers/gpu/drm/drm_gem.c | 4 +- drivers/gpu/drm/drm_gem_shmem_helper.c | 7 +- drivers/gpu/drm/exynos/exynos_drm_g2d.c | 10 +- drivers/gpu/drm/i915/i915_drv.h | 6 +- drivers/gpu/drm/lima/lima_gem.c | 11 +- drivers/gpu/drm/lima/lima_vm.c | 5 +- drivers/gpu/drm/panfrost/panfrost_gem.c | 4 +- drivers/gpu/drm/panfrost/panfrost_mmu.c | 7 +- drivers/gpu/drm/scheduler/sched_main.c | 4 +- drivers/gpu/drm/ttm/ttm_bo.c | 2 +- drivers/gpu/drm/vkms/vkms_crtc.c | 5 + drivers/hid/wacom_wac.c | 4 +- drivers/hwmon/pmbus/max34440.c | 23 ++ drivers/hwtracing/coresight/coresight-cti-sysfs.c | 7 + drivers/hwtracing/coresight/coresight-priv.h | 3 +- drivers/hwtracing/coresight/coresight.c | 62 ++- drivers/i2c/busses/i2c-imx.c | 24 +- drivers/idle/intel_idle.c | 9 +- drivers/iio/adc/ad7292.c | 4 +- drivers/iio/adc/at91-sama5d2_adc.c | 16 +- 
drivers/iio/adc/rcar-gyroadc.c | 21 +- drivers/iio/adc/ti-adc0832.c | 11 +- drivers/iio/adc/ti-adc12138.c | 13 +- drivers/iio/gyro/itg3200_buffer.c | 15 +- drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h | 12 +- drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c | 12 +- drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h | 6 + drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c | 42 +- drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c | 2 + drivers/iio/light/si1145.c | 19 +- drivers/iio/temperature/ltc2983.c | 19 +- drivers/infiniband/core/rdma_core.c | 30 +- drivers/infiniband/hw/mlx5/main.c | 6 +- drivers/infiniband/hw/qedr/qedr_iw_cm.c | 1 + drivers/input/serio/hil_mlc.c | 21 +- drivers/input/serio/hp_sdc_mlc.c | 8 +- drivers/interconnect/qcom/sdm845.c | 2 +- drivers/irqchip/irq-loongson-htvec.c | 4 +- drivers/leds/leds-bcm6328.c | 2 +- drivers/leds/leds-bcm6358.c | 2 +- drivers/md/md-bitmap.c | 2 +- drivers/md/md.c | 2 +- drivers/md/raid5.c | 4 +- drivers/media/i2c/imx274.c | 8 +- drivers/media/pci/tw5864/tw5864-video.c | 6 + drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c | 7 + drivers/media/usb/uvc/uvc_ctrl.c | 27 +- drivers/memory/brcmstb_dpfe.c | 16 +- drivers/memory/emif.c | 33 +- drivers/memory/tegra/tegra124.c | 1 - drivers/message/fusion/mptscsih.c | 13 +- drivers/misc/fastrpc.c | 4 +- drivers/misc/habanalabs/gaudi/gaudi_security.c | 55 +-- drivers/mmc/host/sdhci-acpi.c | 37 ++ drivers/mmc/host/sdhci-esdhc.h | 2 + drivers/mmc/host/sdhci-of-esdhc.c | 28 ++ drivers/mmc/host/sdhci-pci-core.c | 154 ++++++++ drivers/mmc/host/sdhci.c | 6 +- drivers/mmc/host/via-sdmmc.c | 3 + drivers/mtd/ubi/wl.c | 13 + drivers/net/can/flexcan.c | 30 +- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 6 +- drivers/net/ethernet/marvell/octeontx2/af/npc.h | 4 +- drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h | 5 - drivers/net/ethernet/mellanox/mlxsw/core.c | 3 + drivers/net/ethernet/pensando/ionic/ionic_lif.c | 1 - drivers/net/ethernet/pensando/ionic/ionic_txrx.c | 13 - drivers/net/ethernet/pensando/ionic/ionic_txrx.h | 1 - drivers/net/wan/hdlc_fr.c | 98 ++--- drivers/net/wireless/ath/ath10k/htt_rx.c | 16 +- drivers/net/wireless/ath/ath10k/mac.c | 5 +- drivers/net/wireless/ath/ath10k/sdio.c | 4 + drivers/net/wireless/ath/ath11k/dp_rx.c | 2 +- drivers/net/wireless/ath/ath11k/dp_tx.c | 4 + drivers/net/wireless/ath/ath11k/reg.c | 6 +- .../wireless/broadcom/brcm80211/brcmfmac/fweh.c | 10 +- .../wireless/broadcom/brcm80211/brcmfmac/sdio.c | 1 + drivers/net/xen-netback/common.h | 15 + drivers/net/xen-netback/interface.c | 61 ++- drivers/net/xen-netback/netback.c | 11 +- drivers/net/xen-netback/rx.c | 13 +- drivers/nfc/s3fwrn5/Kconfig | 1 + drivers/nvme/host/rdma.c | 1 - drivers/pci/controller/dwc/pcie-qcom.c | 13 + drivers/pci/ecam.c | 10 + drivers/pci/pci-acpi.c | 10 + drivers/power/supply/bq27xxx_battery.c | 6 +- drivers/power/supply/test_power.c | 6 + drivers/remoteproc/remoteproc_debugfs.c | 2 +- drivers/rpmsg/qcom_glink_native.c | 6 +- drivers/rtc/rtc-rx8010.c | 24 +- drivers/s390/crypto/ap_bus.h | 1 + drivers/s390/crypto/ap_queue.c | 8 +- drivers/s390/crypto/zcrypt_debug.h | 8 + drivers/s390/crypto/zcrypt_error.h | 88 ++--- drivers/s390/crypto/zcrypt_msgtype50.c | 50 +-- drivers/s390/crypto/zcrypt_msgtype6.c | 92 ++--- drivers/scsi/qla2xxx/qla_attr.c | 10 +- drivers/scsi/qla2xxx/qla_gbl.h | 1 - drivers/scsi/qla2xxx/qla_init.c | 2 + drivers/scsi/qla2xxx/qla_isr.c | 2 +- drivers/scsi/qla2xxx/qla_target.c | 13 +- drivers/scsi/qla2xxx/qla_tmpl.c | 49 +-- drivers/scsi/scsi_lib.c | 22 +- drivers/scsi/sd.c | 27 +- drivers/scsi/sr.c | 16 +- 
drivers/soc/qcom/rpmh-internal.h | 4 + drivers/soc/qcom/rpmh-rsc.c | 115 +++--- drivers/soc/ti/k3-ringacc.c | 33 +- drivers/spi/spi-mtk-nor.c | 6 +- drivers/spi/spi-sprd.c | 2 +- drivers/staging/comedi/drivers/cb_pcidas.c | 1 + drivers/staging/fieldbus/anybuss/arcx-anybus.c | 2 +- drivers/staging/octeon/ethernet-mdio.c | 6 - drivers/staging/octeon/ethernet-rx.c | 34 +- drivers/staging/octeon/ethernet.c | 9 + drivers/staging/wfx/sta.c | 28 +- drivers/tee/tee_core.c | 3 +- drivers/tty/serial/21285.c | 12 +- drivers/tty/serial/fsl_lpuart.c | 13 +- drivers/tty/vt/keyboard.c | 39 +- drivers/tty/vt/vt_ioctl.c | 47 +-- drivers/uio/uio.c | 4 +- drivers/usb/cdns3/ep0.c | 65 ++-- drivers/usb/cdns3/gadget.c | 102 ++--- drivers/usb/cdns3/gadget.h | 3 +- drivers/usb/class/cdc-acm.c | 12 +- drivers/usb/class/cdc-acm.h | 3 +- drivers/usb/core/driver.c | 30 +- drivers/usb/core/generic.c | 4 +- drivers/usb/core/usb.h | 2 + drivers/usb/dwc3/core.c | 21 +- drivers/usb/dwc3/core.h | 1 + drivers/usb/dwc3/dwc3-pci.c | 3 +- drivers/usb/dwc3/ep0.c | 27 +- drivers/usb/dwc3/gadget.c | 70 +++- drivers/usb/dwc3/gadget.h | 1 + drivers/usb/host/ehci-tegra.c | 4 +- drivers/usb/host/fsl-mph-dr-of.c | 9 +- drivers/usb/host/xhci-pci.c | 17 + drivers/usb/host/xhci.c | 7 +- drivers/usb/host/xhci.h | 1 + drivers/usb/misc/adutux.c | 1 + drivers/usb/misc/apple-mfi-fastcharge.c | 17 +- drivers/usb/typec/tcpm/tcpm.c | 8 +- drivers/vdpa/mlx5/core/mr.c | 5 +- drivers/vdpa/vdpa_sim/vdpa_sim.c | 7 +- drivers/vhost/vdpa.c | 129 +++---- drivers/vhost/vringh.c | 9 +- drivers/video/fbdev/pvr2fb.c | 2 + drivers/w1/masters/mxc_w1.c | 14 +- drivers/watchdog/rdc321x_wdt.c | 5 +- drivers/xen/events/events_2l.c | 9 +- drivers/xen/events/events_base.c | 423 +++++++++++++++++++-- drivers/xen/events/events_fifo.c | 83 ++-- drivers/xen/events/events_internal.h | 20 +- drivers/xen/evtchn.c | 7 +- drivers/xen/gntdev-dmabuf.c | 13 +- drivers/xen/pvcalls-back.c | 76 ++-- drivers/xen/xen-pciback/pci_stub.c | 13 +- drivers/xen/xen-pciback/pciback.h | 12 +- drivers/xen/xen-pciback/pciback_ops.c | 48 ++- drivers/xen/xen-pciback/xenbus.c | 2 +- drivers/xen/xen-scsiback.c | 23 +- fs/9p/vfs_file.c | 4 +- fs/afs/dir.c | 12 +- fs/afs/dir_edit.c | 6 +- fs/afs/file.c | 77 +++- fs/afs/internal.h | 57 +++ fs/afs/server.c | 7 +- fs/afs/write.c | 105 ++--- fs/afs/xattr.c | 2 +- fs/btrfs/block-group.c | 1 + fs/btrfs/ctree.c | 6 + fs/btrfs/ctree.h | 4 +- fs/btrfs/delayed-inode.c | 3 +- fs/btrfs/dev-replace.c | 7 +- fs/btrfs/disk-io.c | 8 +- fs/btrfs/extent-tree.c | 7 +- fs/btrfs/inode.c | 2 +- fs/btrfs/ioctl.c | 6 +- fs/btrfs/reada.c | 47 +++ fs/btrfs/reflink.c | 2 + fs/btrfs/root-tree.c | 13 +- fs/btrfs/send.c | 201 ++++++++-- fs/btrfs/tree-checker.c | 35 +- fs/btrfs/tree-log.c | 8 + fs/btrfs/volumes.c | 42 +- fs/btrfs/volumes.h | 1 + fs/buffer.c | 16 - fs/cachefiles/rdwr.c | 3 +- fs/ceph/addr.c | 2 +- fs/ceph/mds_client.c | 89 +++-- fs/cifs/cifsglob.h | 2 + fs/cifs/connect.c | 15 +- fs/cifs/inode.c | 13 +- fs/cifs/smb2maperror.c | 2 +- fs/cifs/smb2ops.c | 15 + fs/exec.c | 24 +- fs/ext4/balloc.c | 1 + fs/ext4/extents.c | 33 +- fs/ext4/ialloc.c | 1 + fs/ext4/inode.c | 29 +- fs/ext4/resize.c | 4 +- fs/ext4/super.c | 31 +- fs/f2fs/checkpoint.c | 10 +- fs/f2fs/compress.c | 7 +- fs/f2fs/dir.c | 8 +- fs/f2fs/f2fs.h | 4 +- fs/f2fs/file.c | 2 + fs/f2fs/inode.c | 3 + fs/f2fs/node.c | 2 +- fs/f2fs/segment.c | 12 +- fs/f2fs/super.c | 6 + fs/gfs2/glock.c | 18 +- fs/gfs2/glops.c | 11 +- fs/gfs2/incore.h | 1 + fs/gfs2/log.c | 61 +-- fs/gfs2/ops_fstype.c | 40 +- fs/gfs2/rgrp.c | 9 
+- fs/gfs2/rgrp.h | 2 +- fs/gfs2/super.c | 3 + fs/gfs2/sys.c | 5 +- fs/gfs2/util.c | 2 +- fs/gfs2/util.h | 10 + fs/io-wq.c | 1 + fs/io_uring.c | 8 +- fs/jbd2/recovery.c | 78 +++- fs/nfs/namespace.c | 12 +- fs/nfs/nfs4_fs.h | 8 + fs/nfs/nfs4file.c | 3 +- fs/nfs/nfs4proc.c | 90 +++-- fs/nfs/nfs4trace.h | 1 + fs/nfsd/nfs4state.c | 5 +- fs/nfsd/nfsproc.c | 16 + fs/nfsd/trace.h | 4 +- fs/ubifs/debug.c | 1 + fs/ubifs/journal.c | 6 +- fs/ubifs/orphan.c | 2 + fs/ubifs/super.c | 44 ++- fs/ubifs/tnc.c | 3 + fs/ubifs/xattr.c | 2 + fs/udf/super.c | 21 +- fs/xfs/libxfs/xfs_bmap.c | 19 +- fs/xfs/libxfs/xfs_defer.c | 37 +- fs/xfs/libxfs/xfs_defer.h | 6 + fs/xfs/xfs_bmap_item.c | 2 +- fs/xfs/xfs_log_recover.c | 39 +- fs/xfs/xfs_refcount_item.c | 2 +- fs/xfs/xfs_rtalloc.c | 19 +- include/asm-generic/vmlinux.lds.h | 5 +- include/drm/gpu_scheduler.h | 12 +- include/linux/arm-smccc.h | 2 + include/linux/cpufreq.h | 11 +- include/linux/fs.h | 1 - include/linux/hil_mlc.h | 2 +- include/linux/mlx5/driver.h | 18 + include/linux/pci-ecam.h | 1 + include/linux/rcupdate_trace.h | 4 + include/linux/time64.h | 4 + include/linux/usb/pd.h | 1 + include/rdma/ib_verbs.h | 5 - include/scsi/scsi_cmnd.h | 3 +- include/trace/events/afs.h | 22 +- include/trace/events/btrfs.h | 10 +- include/uapi/linux/btrfs_tree.h | 14 + include/uapi/linux/nfs4.h | 3 + include/uapi/linux/videodev2.h | 17 +- include/xen/events.h | 21 + init/Kconfig | 3 +- kernel/bpf/verifier.c | 4 + kernel/debug/debug_core.c | 22 +- kernel/futex.c | 9 +- kernel/locking/lockdep.c | 4 +- kernel/module.c | 2 +- kernel/rcu/tasks.h | 16 +- kernel/rcu/tree.c | 2 +- kernel/sched/cpufreq_schedutil.c | 6 +- kernel/seccomp.c | 38 +- kernel/stop_machine.c | 2 +- kernel/time/itimer.c | 4 - kernel/time/sched_clock.c | 4 +- kernel/trace/ring_buffer.c | 18 +- kernel/trace/trace_events_synth.c | 23 +- lib/scatterlist.c | 2 +- mm/slab.c | 2 +- mm/slab.h | 42 +- mm/slub.c | 3 +- net/9p/trans_fd.c | 2 +- net/ceph/messenger.c | 5 + net/mac80211/tx.c | 6 + net/sunrpc/sysctl.c | 8 +- net/sunrpc/xprt.c | 6 +- samples/bpf/xdpsock_user.c | 1 + security/integrity/digsig.c | 2 +- security/integrity/ima/ima_fs.c | 2 +- security/integrity/ima/ima_main.c | 6 +- security/selinux/include/security.h | 14 +- security/selinux/ss/services.c | 3 +- sound/soc/amd/acp3x-rt5682-max9836.c | 11 +- sound/soc/sof/intel/hda-codec.c | 4 +- tools/perf/pmu-events/arch/x86/amdzen1/cache.json | 18 + tools/perf/util/print_binary.c | 2 +- .../testing/selftests/bpf/progs/test_sysctl_prog.c | 4 +- tools/testing/selftests/powerpc/utils.c | 4 +- tools/testing/selftests/x86/fsgsbase.c | 68 ++++ 460 files changed, 5209 insertions(+), 2493 deletions(-)
From: Juergen Gross <jgross@suse.com>
commit 073d0552ead5bfc7a3a9c01de590e924f11b5dd2 upstream.
Today it can happen that an event channel is being removed from the system while the event handling loop is active. This can lead to a race resulting in crashes or WARN() splats when trying to access the irq_info structure related to the event channel.
Fix this problem by using a rwlock taken as reader in the event handling loop and as writer when deallocating the irq_info structure.
As the observed problem was a NULL dereference in evtchn_from_irq(), make this function more robust against races by testing that the irq_info pointer is not NULL before dereferencing it.
And finally make all accesses to evtchn_to_irq[row][col] atomic ones in order to avoid seeing partial updates of an array element in irq handling. Note that irq handling can be entered only for event channels which have been valid before, so any not populated row isn't a problem in this regard, as rows are only ever added and never removed.
This is XSA-331.
Cc: stable@vger.kernel.org
Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reported-by: Jinoh Kang <luke1337@theori.io>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Wei Liu <wl@xen.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/xen/events/events_base.c |   41 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 36 insertions(+), 5 deletions(-)

--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -33,6 +33,7 @@
 #include <linux/slab.h>
 #include <linux/irqnr.h>
 #include <linux/pci.h>
+#include <linux/spinlock.h>
 
 #ifdef CONFIG_X86
 #include <asm/desc.h>
@@ -71,6 +72,23 @@ const struct evtchn_ops *evtchn_ops;
  */
 static DEFINE_MUTEX(irq_mapping_update_lock);
 
+/*
+ * Lock protecting event handling loop against removing event channels.
+ * Adding of event channels is no issue as the associated IRQ becomes active
+ * only after everything is setup (before request_[threaded_]irq() the handler
+ * can't be entered for an event, as the event channel will be unmasked only
+ * then).
+ */
+static DEFINE_RWLOCK(evtchn_rwlock);
+
+/*
+ * Lock hierarchy:
+ *
+ *   irq_mapping_update_lock
+ *     evtchn_rwlock
+ *       IRQ-desc lock
+ */
+
 static LIST_HEAD(xen_irq_list_head);
 
 /* IRQ <-> VIRQ mapping. */
@@ -105,7 +123,7 @@ static void clear_evtchn_to_irq_row(unsi
 	unsigned col;
 
 	for (col = 0; col < EVTCHN_PER_ROW; col++)
-		evtchn_to_irq[row][col] = -1;
+		WRITE_ONCE(evtchn_to_irq[row][col], -1);
 }
 
 static void clear_evtchn_to_irq_all(void)
@@ -142,7 +160,7 @@ static int set_evtchn_to_irq(evtchn_port
 		clear_evtchn_to_irq_row(row);
 	}
 
-	evtchn_to_irq[row][col] = irq;
+	WRITE_ONCE(evtchn_to_irq[row][col], irq);
 	return 0;
 }
 
@@ -152,7 +170,7 @@ int get_evtchn_to_irq(evtchn_port_t evtc
 		return -1;
 	if (evtchn_to_irq[EVTCHN_ROW(evtchn)] == NULL)
 		return -1;
-	return evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)];
+	return READ_ONCE(evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)]);
 }
 
 /* Get info for IRQ */
@@ -261,10 +279,14 @@ static void xen_irq_info_cleanup(struct
  */
 evtchn_port_t evtchn_from_irq(unsigned irq)
 {
-	if (WARN(irq >= nr_irqs, "Invalid irq %d!\n", irq))
+	const struct irq_info *info = NULL;
+
+	if (likely(irq < nr_irqs))
+		info = info_for_irq(irq);
+	if (!info)
 		return 0;
 
-	return info_for_irq(irq)->evtchn;
+	return info->evtchn;
 }
 
 unsigned int irq_from_evtchn(evtchn_port_t evtchn)
@@ -440,16 +462,21 @@ static int __must_check xen_allocate_irq
 static void xen_free_irq(unsigned irq)
 {
 	struct irq_info *info = info_for_irq(irq);
+	unsigned long flags;
 
 	if (WARN_ON(!info))
 		return;
 
+	write_lock_irqsave(&evtchn_rwlock, flags);
+
 	list_del(&info->list);
 
 	set_info_for_irq(irq, NULL);
 
 	WARN_ON(info->refcnt > 0);
 
+	write_unlock_irqrestore(&evtchn_rwlock, flags);
+
 	kfree(info);
 
 	/* Legacy IRQ descriptors are managed by the arch. */
@@ -1233,6 +1260,8 @@ static void __xen_evtchn_do_upcall(void)
 	struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
 	int cpu = smp_processor_id();
 
+	read_lock(&evtchn_rwlock);
+
 	do {
 		vcpu_info->evtchn_upcall_pending = 0;
 
@@ -1243,6 +1272,8 @@ static void __xen_evtchn_do_upcall(void)
 		virt_rmb(); /* Hypervisor can set upcall pending. */
 
 	} while (vcpu_info->evtchn_upcall_pending);
+
+	read_unlock(&evtchn_rwlock);
 }
 
 void xen_evtchn_do_upcall(struct pt_regs *regs)
From: Juergen Gross jgross@suse.com
commit 4d3fe31bd993ef504350989786858aefdb877daa upstream.
A follow-up patch will require certain writes to happen before an event channel is unmasked.
While the memory barrier is not strictly necessary for all the callers, the main one will need it. In order to avoid an extra memory barrier when using fifo event channels, mandate evtchn_unmask() to provide write ordering.
The 2-level event handling unmask operation is missing an appropriate barrier, so add it. Fifo event channels are fine in this regard due to using sync_cmpxchg().
This is part of XSA-332.
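The ordering contract can be sketched with a generic publish/consume pair (publish(), consume(), flag and payload are made-up names; the flag write stands in for the unmask, and the smp_rmb() on the consumer side pairs with the smp_wmb() added here):

#include <linux/compiler.h>
#include <asm/barrier.h>

static int payload;
static int flag;

/* Producer: equivalent of "fill in the request, then unmask". */
static void publish(int v)
{
    WRITE_ONCE(payload, v);
    smp_wmb();              /* all writes before the unmask must be visible ... */
    WRITE_ONCE(flag, 1);    /* ... before the event can be observed */
}

/* Consumer: equivalent of the side receiving the event. */
static int consume(void)
{
    if (!READ_ONCE(flag))
        return -1;
    smp_rmb();              /* pairs with the smp_wmb() in publish() */
    return READ_ONCE(payload);
}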
Cc: stable@vger.kernel.org Suggested-by: Julien Grall julien@xen.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Julien Grall jgrall@amazon.com Reviewed-by: Wei Liu wl@xen.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/xen/events/events_2l.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/xen/events/events_2l.c +++ b/drivers/xen/events/events_2l.c @@ -91,6 +91,8 @@ static void evtchn_2l_unmask(evtchn_port
BUG_ON(!irqs_disabled());
+ smp_wmb(); /* All writes before unmask must be visible. */ + if (unlikely((cpu != cpu_from_evtchn(port)))) do_hypercall = 1; else {
From: Juergen Gross jgross@suse.com
commit f01337197419b7e8a492e83089552b77d3b5fb90 upstream.
Unmasking a fifo event channel can result in unmasking it twice, once directly in the kernel and once via a hypercall in case the event was pending.
Fix that by doing the local unmask only if the event is not pending.
This is part of XSA-332.
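The conditional unmask can be sketched with a plain atomic_t and atomic_try_cmpxchg() (illustrative only; the F_* bits and demo_clear_masked_cond() are made-up names, while the real code operates on the fifo event word via sync_cmpxchg()):

#include <linux/atomic.h>
#include <linux/bits.h>

#define F_MASKED    BIT(0)
#define F_PENDING   BIT(1)
#define F_BUSY      BIT(2)

/*
 * Clear MASKED only if PENDING is not set; spin while BUSY is set.
 * Returns true if MASKED was cleared locally, false if the caller
 * has to fall back to the unmask hypercall.
 */
static bool demo_clear_masked_cond(atomic_t *word)
{
    int old = atomic_read(word);
    int new;

    do {
        if (old & F_PENDING)
            return false;
        old &= ~F_BUSY;             /* expect BUSY clear, i.e. spin while it is set */
        new = old & ~F_MASKED;
    } while (!atomic_try_cmpxchg(word, &old, new));

    return true;
}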
Cc: stable@vger.kernel.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Jan Beulich jbeulich@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/xen/events/events_fifo.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-)
--- a/drivers/xen/events/events_fifo.c +++ b/drivers/xen/events/events_fifo.c @@ -227,19 +227,25 @@ static bool evtchn_fifo_is_masked(evtchn return sync_test_bit(EVTCHN_FIFO_BIT(MASKED, word), BM(word)); } /* - * Clear MASKED, spinning if BUSY is set. + * Clear MASKED if not PENDING, spinning if BUSY is set. + * Return true if mask was cleared. */ -static void clear_masked(volatile event_word_t *word) +static bool clear_masked_cond(volatile event_word_t *word) { event_word_t new, old, w;
w = *word;
do { + if (w & (1 << EVTCHN_FIFO_PENDING)) + return false; + old = w & ~(1 << EVTCHN_FIFO_BUSY); new = old & ~(1 << EVTCHN_FIFO_MASKED); w = sync_cmpxchg(word, old, new); } while (w != old); + + return true; }
static void evtchn_fifo_unmask(evtchn_port_t port) @@ -248,8 +254,7 @@ static void evtchn_fifo_unmask(evtchn_po
BUG_ON(!irqs_disabled());
- clear_masked(word); - if (evtchn_fifo_is_pending(port)) { + if (!clear_masked_cond(word)) { struct evtchn_unmask unmask = { .port = port }; (void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask); }
From: Juergen Gross jgross@suse.com
commit 54c9de89895e0a36047fcc4ae754ea5b8655fb9d upstream.
In order to avoid tight event channel related IRQ loops, add a new framework of "late EOI" handling: the IRQ the event channel is bound to will be masked until the event has been handled and the related driver is capable of handling another event. The driver is responsible for unmasking the event channel via the new function xen_irq_lateeoi().
This is similar to binding an event channel to a threaded IRQ, but without having to structure the driver accordingly.
In order to support a future special handling in case a rogue guest is sending lots of unsolicited events, add a flag to xen_irq_lateeoi() which can be set by the caller to indicate the event was a spurious one.
This is part of XSA-332.
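A possible usage sketch for a backend driver, assuming a hypothetical my_backend device (all my_* names are made up; xen_irq_lateeoi(), XEN_EOI_FLAG_SPURIOUS and bind_interdomain_evtchn_to_irqhandler_lateeoi() are the interfaces added by this patch):

#include <linux/interrupt.h>
#include <xen/events.h>

struct my_backend {
    int irq;
    /* ... */
};

/* Placeholders for the driver's own work handling. */
static bool my_work_pending(struct my_backend *be) { return false; }
static void my_do_work(struct my_backend *be) { }

static irqreturn_t my_backend_irq(int irq, void *dev_id)
{
    struct my_backend *be = dev_id;
    unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;

    while (my_work_pending(be)) {
        my_do_work(be);
        eoi_flags = 0;          /* at least one real request was seen */
    }

    /* The event channel stays masked until this EOI is sent. */
    xen_irq_lateeoi(irq, eoi_flags);

    return IRQ_HANDLED;
}

static int my_backend_connect(struct my_backend *be, unsigned int domid,
                              evtchn_port_t port)
{
    int err;

    err = bind_interdomain_evtchn_to_irqhandler_lateeoi(domid, port,
                my_backend_irq, 0, "my-backend", be);
    if (err < 0)
        return err;
    be->irq = err;

    return 0;
}

Unlike the existing xen-dyn chip, with the lateeoi chip the handler returning does not re-enable event delivery; only the explicit xen_irq_lateeoi() call does.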
Cc: stable@vger.kernel.org Reported-by: Julien Grall julien@xen.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Jan Beulich jbeulich@suse.com Reviewed-by: Stefano Stabellini sstabellini@kernel.org Reviewed-by: Wei Liu wl@xen.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/xen/events/events_base.c | 151 ++++++++++++++++++++++++++++++++++----- include/xen/events.h | 21 +++++ 2 files changed, 155 insertions(+), 17 deletions(-)
--- a/drivers/xen/events/events_base.c +++ b/drivers/xen/events/events_base.c @@ -113,6 +113,7 @@ static bool (*pirq_needs_eoi)(unsigned i static struct irq_info *legacy_info_ptrs[NR_IRQS_LEGACY];
static struct irq_chip xen_dynamic_chip; +static struct irq_chip xen_lateeoi_chip; static struct irq_chip xen_percpu_chip; static struct irq_chip xen_pirq_chip; static void enable_dynirq(struct irq_data *data); @@ -397,6 +398,33 @@ void notify_remote_via_irq(int irq) } EXPORT_SYMBOL_GPL(notify_remote_via_irq);
+static void xen_irq_lateeoi_locked(struct irq_info *info) +{ + evtchn_port_t evtchn; + + evtchn = info->evtchn; + if (!VALID_EVTCHN(evtchn)) + return; + + unmask_evtchn(evtchn); +} + +void xen_irq_lateeoi(unsigned int irq, unsigned int eoi_flags) +{ + struct irq_info *info; + unsigned long flags; + + read_lock_irqsave(&evtchn_rwlock, flags); + + info = info_for_irq(irq); + + if (info) + xen_irq_lateeoi_locked(info); + + read_unlock_irqrestore(&evtchn_rwlock, flags); +} +EXPORT_SYMBOL_GPL(xen_irq_lateeoi); + static void xen_irq_init(unsigned irq) { struct irq_info *info; @@ -868,7 +896,7 @@ int xen_pirq_from_irq(unsigned irq) } EXPORT_SYMBOL_GPL(xen_pirq_from_irq);
-int bind_evtchn_to_irq(evtchn_port_t evtchn) +static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip) { int irq; int ret; @@ -885,7 +913,7 @@ int bind_evtchn_to_irq(evtchn_port_t evt if (irq < 0) goto out;
- irq_set_chip_and_handler_name(irq, &xen_dynamic_chip, + irq_set_chip_and_handler_name(irq, chip, handle_edge_irq, "event");
ret = xen_irq_info_evtchn_setup(irq, evtchn); @@ -906,8 +934,19 @@ out:
return irq; } + +int bind_evtchn_to_irq(evtchn_port_t evtchn) +{ + return bind_evtchn_to_irq_chip(evtchn, &xen_dynamic_chip); +} EXPORT_SYMBOL_GPL(bind_evtchn_to_irq);
+int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn) +{ + return bind_evtchn_to_irq_chip(evtchn, &xen_lateeoi_chip); +} +EXPORT_SYMBOL_GPL(bind_evtchn_to_irq_lateeoi); + static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu) { struct evtchn_bind_ipi bind_ipi; @@ -949,8 +988,9 @@ static int bind_ipi_to_irq(unsigned int return irq; }
-int bind_interdomain_evtchn_to_irq(unsigned int remote_domain, - evtchn_port_t remote_port) +static int bind_interdomain_evtchn_to_irq_chip(unsigned int remote_domain, + evtchn_port_t remote_port, + struct irq_chip *chip) { struct evtchn_bind_interdomain bind_interdomain; int err; @@ -961,10 +1001,26 @@ int bind_interdomain_evtchn_to_irq(unsig err = HYPERVISOR_event_channel_op(EVTCHNOP_bind_interdomain, &bind_interdomain);
- return err ? : bind_evtchn_to_irq(bind_interdomain.local_port); + return err ? : bind_evtchn_to_irq_chip(bind_interdomain.local_port, + chip); +} + +int bind_interdomain_evtchn_to_irq(unsigned int remote_domain, + evtchn_port_t remote_port) +{ + return bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port, + &xen_dynamic_chip); } EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irq);
+int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain, + evtchn_port_t remote_port) +{ + return bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port, + &xen_lateeoi_chip); +} +EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irq_lateeoi); + static int find_virq(unsigned int virq, unsigned int cpu, evtchn_port_t *evtchn) { struct evtchn_status status; @@ -1061,14 +1117,15 @@ static void unbind_from_irq(unsigned int mutex_unlock(&irq_mapping_update_lock); }
-int bind_evtchn_to_irqhandler(evtchn_port_t evtchn, - irq_handler_t handler, - unsigned long irqflags, - const char *devname, void *dev_id) +static int bind_evtchn_to_irqhandler_chip(evtchn_port_t evtchn, + irq_handler_t handler, + unsigned long irqflags, + const char *devname, void *dev_id, + struct irq_chip *chip) { int irq, retval;
- irq = bind_evtchn_to_irq(evtchn); + irq = bind_evtchn_to_irq_chip(evtchn, chip); if (irq < 0) return irq; retval = request_irq(irq, handler, irqflags, devname, dev_id); @@ -1079,18 +1136,38 @@ int bind_evtchn_to_irqhandler(evtchn_por
return irq; } + +int bind_evtchn_to_irqhandler(evtchn_port_t evtchn, + irq_handler_t handler, + unsigned long irqflags, + const char *devname, void *dev_id) +{ + return bind_evtchn_to_irqhandler_chip(evtchn, handler, irqflags, + devname, dev_id, + &xen_dynamic_chip); +} EXPORT_SYMBOL_GPL(bind_evtchn_to_irqhandler);
-int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain, - evtchn_port_t remote_port, - irq_handler_t handler, - unsigned long irqflags, - const char *devname, - void *dev_id) +int bind_evtchn_to_irqhandler_lateeoi(evtchn_port_t evtchn, + irq_handler_t handler, + unsigned long irqflags, + const char *devname, void *dev_id) +{ + return bind_evtchn_to_irqhandler_chip(evtchn, handler, irqflags, + devname, dev_id, + &xen_lateeoi_chip); +} +EXPORT_SYMBOL_GPL(bind_evtchn_to_irqhandler_lateeoi); + +static int bind_interdomain_evtchn_to_irqhandler_chip( + unsigned int remote_domain, evtchn_port_t remote_port, + irq_handler_t handler, unsigned long irqflags, + const char *devname, void *dev_id, struct irq_chip *chip) { int irq, retval;
- irq = bind_interdomain_evtchn_to_irq(remote_domain, remote_port); + irq = bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port, + chip); if (irq < 0) return irq;
@@ -1102,8 +1179,33 @@ int bind_interdomain_evtchn_to_irqhandle
return irq; } + +int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain, + evtchn_port_t remote_port, + irq_handler_t handler, + unsigned long irqflags, + const char *devname, + void *dev_id) +{ + return bind_interdomain_evtchn_to_irqhandler_chip(remote_domain, + remote_port, handler, irqflags, devname, + dev_id, &xen_dynamic_chip); +} EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irqhandler);
+int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain, + evtchn_port_t remote_port, + irq_handler_t handler, + unsigned long irqflags, + const char *devname, + void *dev_id) +{ + return bind_interdomain_evtchn_to_irqhandler_chip(remote_domain, + remote_port, handler, irqflags, devname, + dev_id, &xen_lateeoi_chip); +} +EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irqhandler_lateeoi); + int bind_virq_to_irqhandler(unsigned int virq, unsigned int cpu, irq_handler_t handler, unsigned long irqflags, const char *devname, void *dev_id) @@ -1634,6 +1736,21 @@ static struct irq_chip xen_dynamic_chip .irq_mask_ack = mask_ack_dynirq,
.irq_set_affinity = set_affinity_irq, + .irq_retrigger = retrigger_dynirq, +}; + +static struct irq_chip xen_lateeoi_chip __read_mostly = { + /* The chip name needs to contain "xen-dyn" for irqbalance to work. */ + .name = "xen-dyn-lateeoi", + + .irq_disable = disable_dynirq, + .irq_mask = disable_dynirq, + .irq_unmask = enable_dynirq, + + .irq_ack = mask_ack_dynirq, + .irq_mask_ack = mask_ack_dynirq, + + .irq_set_affinity = set_affinity_irq, .irq_retrigger = retrigger_dynirq, };
--- a/include/xen/events.h +++ b/include/xen/events.h @@ -15,10 +15,15 @@ unsigned xen_evtchn_nr_channels(void);
int bind_evtchn_to_irq(evtchn_port_t evtchn); +int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn); int bind_evtchn_to_irqhandler(evtchn_port_t evtchn, irq_handler_t handler, unsigned long irqflags, const char *devname, void *dev_id); +int bind_evtchn_to_irqhandler_lateeoi(evtchn_port_t evtchn, + irq_handler_t handler, + unsigned long irqflags, const char *devname, + void *dev_id); int bind_virq_to_irq(unsigned int virq, unsigned int cpu, bool percpu); int bind_virq_to_irqhandler(unsigned int virq, unsigned int cpu, irq_handler_t handler, @@ -32,12 +37,20 @@ int bind_ipi_to_irqhandler(enum ipi_vect void *dev_id); int bind_interdomain_evtchn_to_irq(unsigned int remote_domain, evtchn_port_t remote_port); +int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain, + evtchn_port_t remote_port); int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain, evtchn_port_t remote_port, irq_handler_t handler, unsigned long irqflags, const char *devname, void *dev_id); +int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain, + evtchn_port_t remote_port, + irq_handler_t handler, + unsigned long irqflags, + const char *devname, + void *dev_id);
/* * Common unbind function for all event sources. Takes IRQ to unbind from. @@ -46,6 +59,14 @@ int bind_interdomain_evtchn_to_irqhandle */ void unbind_from_irqhandler(unsigned int irq, void *dev_id);
+/* + * Send late EOI for an IRQ bound to an event channel via one of the *_lateeoi + * functions above. + */ +void xen_irq_lateeoi(unsigned int irq, unsigned int eoi_flags); +/* Signal an event was spurious, i.e. there was no action resulting from it. */ +#define XEN_EOI_FLAG_SPURIOUS 0x00000001 + #define XEN_IRQ_PRIORITY_MAX EVTCHN_FIFO_PRIORITY_MAX #define XEN_IRQ_PRIORITY_DEFAULT EVTCHN_FIFO_PRIORITY_DEFAULT #define XEN_IRQ_PRIORITY_MIN EVTCHN_FIFO_PRIORITY_MIN
From: Juergen Gross jgross@suse.com
commit 01263a1fabe30b4d542f34c7e2364a22587ddaf2 upstream.
In order to reduce the chance of the system becoming unresponsive due to event storms triggered by a misbehaving blkfront, use the lateeoi irq binding for blkback and unmask the event channel only after processing all pending requests.
As the thread processing requests is also used to do purging work at regular intervals, an EOI may be sent only after an event has been received. If there was no pending I/O request, flag the EOI as spurious.
This is part of XSA-332.
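Condensed sketch of that kthread pattern (not the real xen_blkif_schedule(); demo_ring, demo_do_io() and demo_schedule() are placeholders):

#include <linux/kthread.h>
#include <asm/barrier.h>
#include <xen/events.h>

struct demo_ring {                  /* placeholder for struct xen_blkif_ring */
    int irq;
    unsigned int waiting_reqs;
};

/* Placeholder: consume requests, clear SPURIOUS in *eoi_flags if any were seen. */
static int demo_do_io(struct demo_ring *ring, unsigned int *eoi_flags)
{
    return 0;
}

static int demo_schedule(void *arg)
{
    struct demo_ring *ring = arg;
    unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;
    bool do_eoi;

    while (!kthread_should_stop()) {
        /* ... wait for an event or the purge timeout ... */

        do_eoi = ring->waiting_reqs;    /* were we woken by an event? */
        ring->waiting_reqs = 0;
        smp_mb();                       /* clear flag *before* checking for work */

        if (demo_do_io(ring, &eoi_flags) > 0)
            ring->waiting_reqs = 1;

        if (do_eoi && !ring->waiting_reqs) {
            xen_irq_lateeoi(ring->irq, eoi_flags);
            eoi_flags |= XEN_EOI_FLAG_SPURIOUS; /* re-arm for the next event */
        }
    }

    return 0;
}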
Cc: stable@vger.kernel.org Reported-by: Julien Grall julien@xen.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Jan Beulich jbeulich@suse.com Reviewed-by: Wei Liu wl@xen.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/block/xen-blkback/blkback.c | 22 +++++++++++++++++----- drivers/block/xen-blkback/xenbus.c | 5 ++--- 2 files changed, 19 insertions(+), 8 deletions(-)
--- a/drivers/block/xen-blkback/blkback.c +++ b/drivers/block/xen-blkback/blkback.c @@ -201,7 +201,7 @@ static inline void shrink_free_pagepool(
#define vaddr(page) ((unsigned long)pfn_to_kaddr(page_to_pfn(page)))
-static int do_block_io_op(struct xen_blkif_ring *ring); +static int do_block_io_op(struct xen_blkif_ring *ring, unsigned int *eoi_flags); static int dispatch_rw_block_io(struct xen_blkif_ring *ring, struct blkif_request *req, struct pending_req *pending_req); @@ -612,6 +612,8 @@ int xen_blkif_schedule(void *arg) struct xen_vbd *vbd = &blkif->vbd; unsigned long timeout; int ret; + bool do_eoi; + unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;
set_freezable(); while (!kthread_should_stop()) { @@ -636,16 +638,23 @@ int xen_blkif_schedule(void *arg) if (timeout == 0) goto purge_gnt_list;
+ do_eoi = ring->waiting_reqs; + ring->waiting_reqs = 0; smp_mb(); /* clear flag *before* checking for work */
- ret = do_block_io_op(ring); + ret = do_block_io_op(ring, &eoi_flags); if (ret > 0) ring->waiting_reqs = 1; if (ret == -EACCES) wait_event_interruptible(ring->shutdown_wq, kthread_should_stop());
+ if (do_eoi && !ring->waiting_reqs) { + xen_irq_lateeoi(ring->irq, eoi_flags); + eoi_flags |= XEN_EOI_FLAG_SPURIOUS; + } + purge_gnt_list: if (blkif->vbd.feature_gnt_persistent && time_after(jiffies, ring->next_lru)) { @@ -1121,7 +1130,7 @@ static void end_block_io_op(struct bio * * and transmute it to the block API to hand it over to the proper block disk. */ static int -__do_block_io_op(struct xen_blkif_ring *ring) +__do_block_io_op(struct xen_blkif_ring *ring, unsigned int *eoi_flags) { union blkif_back_rings *blk_rings = &ring->blk_rings; struct blkif_request req; @@ -1144,6 +1153,9 @@ __do_block_io_op(struct xen_blkif_ring * if (RING_REQUEST_CONS_OVERFLOW(&blk_rings->common, rc)) break;
+ /* We've seen a request, so clear spurious eoi flag. */ + *eoi_flags &= ~XEN_EOI_FLAG_SPURIOUS; + if (kthread_should_stop()) { more_to_do = 1; break; @@ -1202,13 +1214,13 @@ done: }
static int -do_block_io_op(struct xen_blkif_ring *ring) +do_block_io_op(struct xen_blkif_ring *ring, unsigned int *eoi_flags) { union blkif_back_rings *blk_rings = &ring->blk_rings; int more_to_do;
do { - more_to_do = __do_block_io_op(ring); + more_to_do = __do_block_io_op(ring, eoi_flags); if (more_to_do) break;
--- a/drivers/block/xen-blkback/xenbus.c +++ b/drivers/block/xen-blkback/xenbus.c @@ -246,9 +246,8 @@ static int xen_blkif_map(struct xen_blki if (req_prod - rsp_prod > size) goto fail;
- err = bind_interdomain_evtchn_to_irqhandler(blkif->domid, evtchn, - xen_blkif_be_int, 0, - "blkif-backend", ring); + err = bind_interdomain_evtchn_to_irqhandler_lateeoi(blkif->domid, + evtchn, xen_blkif_be_int, 0, "blkif-backend", ring); if (err < 0) goto fail; ring->irq = err;
From: Juergen Gross jgross@suse.com
commit 23025393dbeb3b8b3b60ebfa724cdae384992e27 upstream.
In order to reduce the chance of the system becoming unresponsive due to event storms triggered by a misbehaving netfront, use the lateeoi irq binding for netback and unmask the event channel only just before going to sleep waiting for new events.
Make sure not to issue an EOI when none is pending by introducing an eoi_pending element to struct xenvif_queue.
When no request has been consumed, set the spurious flag when sending the EOI for an interrupt.
This is part of XSA-332.
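The eoi_pending bookkeeping boils down to the following sketch (demo_* names and DEMO_TX_EOI are made up, standing in for the NETBK_* bits):

#include <linux/atomic.h>
#include <linux/interrupt.h>
#include <xen/events.h>

#define DEMO_TX_EOI 0x02

static atomic_t eoi_pending;

/* Placeholder: kick NAPI/worker; returns false if there was nothing to do. */
static bool demo_schedule_tx_work(void *dev_id) { return false; }

/* Interrupt handler: record that an EOI is owed for this source. */
static irqreturn_t demo_tx_interrupt(int irq, void *dev_id)
{
    int old = atomic_fetch_or(DEMO_TX_EOI, &eoi_pending);

    WARN(old & DEMO_TX_EOI, "Interrupt while EOI pending\n");

    if (!demo_schedule_tx_work(dev_id)) {
        /* Nothing to do: undo the flag and EOI right away as spurious. */
        atomic_andnot(DEMO_TX_EOI, &eoi_pending);
        xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
    }

    return IRQ_HANDLED;
}

/* Worker/NAPI side: EOI once all work for this source is really done. */
static void demo_tx_work_done(int irq)
{
    if (atomic_fetch_andnot(DEMO_TX_EOI, &eoi_pending) & DEMO_TX_EOI)
        xen_irq_lateeoi(irq, 0);
}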
Cc: stable@vger.kernel.org Reported-by: Julien Grall julien@xen.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Jan Beulich jbeulich@suse.com Reviewed-by: Wei Liu wl@xen.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/xen-netback/common.h | 15 ++++++++ drivers/net/xen-netback/interface.c | 61 ++++++++++++++++++++++++++++++------ drivers/net/xen-netback/netback.c | 11 +++++- drivers/net/xen-netback/rx.c | 13 +++++-- 4 files changed, 86 insertions(+), 14 deletions(-)
--- a/drivers/net/xen-netback/common.h +++ b/drivers/net/xen-netback/common.h @@ -140,6 +140,20 @@ struct xenvif_queue { /* Per-queue data char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */ struct xenvif *vif; /* Parent VIF */
+ /* + * TX/RX common EOI handling. + * When feature-split-event-channels = 0, interrupt handler sets + * NETBK_COMMON_EOI, otherwise NETBK_RX_EOI and NETBK_TX_EOI are set + * by the RX and TX interrupt handlers. + * RX and TX handler threads will issue an EOI when either + * NETBK_COMMON_EOI or their specific bits (NETBK_RX_EOI or + * NETBK_TX_EOI) are set and they will reset those bits. + */ + atomic_t eoi_pending; +#define NETBK_RX_EOI 0x01 +#define NETBK_TX_EOI 0x02 +#define NETBK_COMMON_EOI 0x04 + /* Use NAPI for guest TX */ struct napi_struct napi; /* When feature-split-event-channels = 0, tx_irq = rx_irq. */ @@ -378,6 +392,7 @@ int xenvif_dealloc_kthread(void *data);
irqreturn_t xenvif_ctrl_irq_fn(int irq, void *data);
+bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread); void xenvif_rx_action(struct xenvif_queue *queue); void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);
--- a/drivers/net/xen-netback/interface.c +++ b/drivers/net/xen-netback/interface.c @@ -77,12 +77,28 @@ int xenvif_schedulable(struct xenvif *vi !vif->disabled; }
+static bool xenvif_handle_tx_interrupt(struct xenvif_queue *queue) +{ + bool rc; + + rc = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx); + if (rc) + napi_schedule(&queue->napi); + return rc; +} + static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id) { struct xenvif_queue *queue = dev_id; + int old;
- if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) - napi_schedule(&queue->napi); + old = atomic_fetch_or(NETBK_TX_EOI, &queue->eoi_pending); + WARN(old & NETBK_TX_EOI, "Interrupt while EOI pending\n"); + + if (!xenvif_handle_tx_interrupt(queue)) { + atomic_andnot(NETBK_TX_EOI, &queue->eoi_pending); + xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS); + }
return IRQ_HANDLED; } @@ -116,19 +132,46 @@ static int xenvif_poll(struct napi_struc return work_done; }
+static bool xenvif_handle_rx_interrupt(struct xenvif_queue *queue) +{ + bool rc; + + rc = xenvif_have_rx_work(queue, false); + if (rc) + xenvif_kick_thread(queue); + return rc; +} + static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id) { struct xenvif_queue *queue = dev_id; + int old;
- xenvif_kick_thread(queue); + old = atomic_fetch_or(NETBK_RX_EOI, &queue->eoi_pending); + WARN(old & NETBK_RX_EOI, "Interrupt while EOI pending\n"); + + if (!xenvif_handle_rx_interrupt(queue)) { + atomic_andnot(NETBK_RX_EOI, &queue->eoi_pending); + xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS); + }
return IRQ_HANDLED; }
irqreturn_t xenvif_interrupt(int irq, void *dev_id) { - xenvif_tx_interrupt(irq, dev_id); - xenvif_rx_interrupt(irq, dev_id); + struct xenvif_queue *queue = dev_id; + int old; + + old = atomic_fetch_or(NETBK_COMMON_EOI, &queue->eoi_pending); + WARN(old, "Interrupt while EOI pending\n"); + + /* Use bitwise or as we need to call both functions. */ + if ((!xenvif_handle_tx_interrupt(queue) | + !xenvif_handle_rx_interrupt(queue))) { + atomic_andnot(NETBK_COMMON_EOI, &queue->eoi_pending); + xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS); + }
return IRQ_HANDLED; } @@ -605,7 +648,7 @@ int xenvif_connect_ctrl(struct xenvif *v if (req_prod - rsp_prod > RING_SIZE(&vif->ctrl)) goto err_unmap;
- err = bind_interdomain_evtchn_to_irq(vif->domid, evtchn); + err = bind_interdomain_evtchn_to_irq_lateeoi(vif->domid, evtchn); if (err < 0) goto err_unmap;
@@ -709,7 +752,7 @@ int xenvif_connect_data(struct xenvif_qu
if (tx_evtchn == rx_evtchn) { /* feature-split-event-channels == 0 */ - err = bind_interdomain_evtchn_to_irqhandler( + err = bind_interdomain_evtchn_to_irqhandler_lateeoi( queue->vif->domid, tx_evtchn, xenvif_interrupt, 0, queue->name, queue); if (err < 0) @@ -720,7 +763,7 @@ int xenvif_connect_data(struct xenvif_qu /* feature-split-event-channels == 1 */ snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name), "%s-tx", queue->name); - err = bind_interdomain_evtchn_to_irqhandler( + err = bind_interdomain_evtchn_to_irqhandler_lateeoi( queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0, queue->tx_irq_name, queue); if (err < 0) @@ -730,7 +773,7 @@ int xenvif_connect_data(struct xenvif_qu
snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name), "%s-rx", queue->name); - err = bind_interdomain_evtchn_to_irqhandler( + err = bind_interdomain_evtchn_to_irqhandler_lateeoi( queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0, queue->rx_irq_name, queue); if (err < 0) --- a/drivers/net/xen-netback/netback.c +++ b/drivers/net/xen-netback/netback.c @@ -169,6 +169,10 @@ void xenvif_napi_schedule_or_enable_even
if (more_to_do) napi_schedule(&queue->napi); + else if (atomic_fetch_andnot(NETBK_TX_EOI | NETBK_COMMON_EOI, + &queue->eoi_pending) & + (NETBK_TX_EOI | NETBK_COMMON_EOI)) + xen_irq_lateeoi(queue->tx_irq, 0); }
static void tx_add_credit(struct xenvif_queue *queue) @@ -1643,9 +1647,14 @@ static bool xenvif_ctrl_work_todo(struct irqreturn_t xenvif_ctrl_irq_fn(int irq, void *data) { struct xenvif *vif = data; + unsigned int eoi_flag = XEN_EOI_FLAG_SPURIOUS;
- while (xenvif_ctrl_work_todo(vif)) + while (xenvif_ctrl_work_todo(vif)) { xenvif_ctrl_action(vif); + eoi_flag = 0; + } + + xen_irq_lateeoi(irq, eoi_flag);
return IRQ_HANDLED; } --- a/drivers/net/xen-netback/rx.c +++ b/drivers/net/xen-netback/rx.c @@ -503,13 +503,13 @@ static bool xenvif_rx_queue_ready(struct return queue->stalled && prod - cons >= 1; }
-static bool xenvif_have_rx_work(struct xenvif_queue *queue) +bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread) { return xenvif_rx_ring_slots_available(queue) || (queue->vif->stall_timeout && (xenvif_rx_queue_stalled(queue) || xenvif_rx_queue_ready(queue))) || - kthread_should_stop() || + (test_kthread && kthread_should_stop()) || queue->vif->disabled; }
@@ -540,15 +540,20 @@ static void xenvif_wait_for_rx_work(stru { DEFINE_WAIT(wait);
- if (xenvif_have_rx_work(queue)) + if (xenvif_have_rx_work(queue, true)) return;
for (;;) { long ret;
prepare_to_wait(&queue->wq, &wait, TASK_INTERRUPTIBLE); - if (xenvif_have_rx_work(queue)) + if (xenvif_have_rx_work(queue, true)) break; + if (atomic_fetch_andnot(NETBK_RX_EOI | NETBK_COMMON_EOI, + &queue->eoi_pending) & + (NETBK_RX_EOI | NETBK_COMMON_EOI)) + xen_irq_lateeoi(queue->rx_irq, 0); + ret = schedule_timeout(xenvif_rx_queue_timeout(queue)); if (!ret) break;
From: Juergen Gross jgross@suse.com
commit 86991b6e7ea6c613b7692f65106076943449b6b7 upstream.
In order to reduce the chance of the system becoming unresponsive due to event storms triggered by a misbehaving scsifront, use the lateeoi irq binding for scsiback and unmask the event channel only just before leaving the event handling function.
In case of a ring protocol error, don't issue an EOI, in order to avoid that being used to produce an event storm. This also means scsiback_irq_fn() will not be called again, so the ring_error struct member can be dropped and scsiback_do_cmd_fn() can signal the protocol error via a negative return value.
This is part of XSA-332.
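The resulting interrupt handler shape, as a sketch (demo_do_cmd() is a made-up stand-in for scsiback_do_cmd_fn(): >0 means more work, 0 means done, <0 means ring protocol error):

#include <linux/interrupt.h>
#include <linux/sched.h>
#include <xen/events.h>

/* Made-up stand-in for scsiback_do_cmd_fn(). */
static int demo_do_cmd(void *dev_id, unsigned int *eoi_flags) { return 0; }

static irqreturn_t demo_irq_fn(int irq, void *dev_id)
{
    unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;
    int rc;

    while ((rc = demo_do_cmd(dev_id, &eoi_flags)) > 0)
        cond_resched();

    /*
     * rc < 0 means a ring protocol error: no EOI is sent, so the event
     * channel stays masked and the frontend cannot storm us further.
     */
    if (!rc)
        xen_irq_lateeoi(irq, eoi_flags);

    return IRQ_HANDLED;
}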
Cc: stable@vger.kernel.org Reported-by: Julien Grall julien@xen.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Jan Beulich jbeulich@suse.com Reviewed-by: Wei Liu wl@xen.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/xen/xen-scsiback.c | 23 +++++++++++++---------- 1 file changed, 13 insertions(+), 10 deletions(-)
--- a/drivers/xen/xen-scsiback.c +++ b/drivers/xen/xen-scsiback.c @@ -91,7 +91,6 @@ struct vscsibk_info { unsigned int irq;
struct vscsiif_back_ring ring; - int ring_error;
spinlock_t ring_lock; atomic_t nr_unreplied_reqs; @@ -722,7 +721,8 @@ static struct vscsibk_pend *prepare_pend return pending_req; }
-static int scsiback_do_cmd_fn(struct vscsibk_info *info) +static int scsiback_do_cmd_fn(struct vscsibk_info *info, + unsigned int *eoi_flags) { struct vscsiif_back_ring *ring = &info->ring; struct vscsiif_request ring_req; @@ -739,11 +739,12 @@ static int scsiback_do_cmd_fn(struct vsc rc = ring->rsp_prod_pvt; pr_warn("Dom%d provided bogus ring requests (%#x - %#x = %u). Halting ring processing\n", info->domid, rp, rc, rp - rc); - info->ring_error = 1; - return 0; + return -EINVAL; }
while ((rc != rp)) { + *eoi_flags &= ~XEN_EOI_FLAG_SPURIOUS; + if (RING_REQUEST_CONS_OVERFLOW(ring, rc)) break;
@@ -802,13 +803,16 @@ static int scsiback_do_cmd_fn(struct vsc static irqreturn_t scsiback_irq_fn(int irq, void *dev_id) { struct vscsibk_info *info = dev_id; + int rc; + unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;
- if (info->ring_error) - return IRQ_HANDLED; - - while (scsiback_do_cmd_fn(info)) + while ((rc = scsiback_do_cmd_fn(info, &eoi_flags)) > 0) cond_resched();
+ /* In case of a ring error we keep the event channel masked. */ + if (!rc) + xen_irq_lateeoi(irq, eoi_flags); + return IRQ_HANDLED; }
@@ -829,7 +833,7 @@ static int scsiback_init_sring(struct vs sring = (struct vscsiif_sring *)area; BACK_RING_INIT(&info->ring, sring, PAGE_SIZE);
- err = bind_interdomain_evtchn_to_irq(info->domid, evtchn); + err = bind_interdomain_evtchn_to_irq_lateeoi(info->domid, evtchn); if (err < 0) goto unmap_page;
@@ -1253,7 +1257,6 @@ static int scsiback_probe(struct xenbus_
info->domid = dev->otherend_id; spin_lock_init(&info->ring_lock); - info->ring_error = 0; atomic_set(&info->nr_unreplied_reqs, 0); init_waitqueue_head(&info->waiting_to_free); info->dev = dev;
From: Juergen Gross jgross@suse.com
commit c8d647a326f06a39a8e5f0f1af946eacfa1835f8 upstream.
In order to reduce the chance of the system becoming unresponsive due to event storms triggered by a misbehaving pvcallsfront, use the lateeoi irq binding for pvcallsback and unmask the event channel only after handling all write requests, which are the ones coming in via an irq.
This requires a small change in the logic: instead of requiring an event for each write request, keep the ioworker running until no further data to be processed is found on the ring page.
This is part of XSA-332.
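The interrupt side then only accounts for the work and leaves the EOI to the ioworker, roughly like this sketch (demo_map and demo_conn_event() are made-up stand-ins for the sock_mapping handling in pvcalls_back_conn_event()):

#include <linux/atomic.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct demo_map {                   /* stand-in for struct sock_mapping */
    int irq;
    atomic_t read, write, io, eoi;
    struct workqueue_struct *wq;
    struct work_struct register_work;
};

static irqreturn_t demo_conn_event(int irq, void *dev_id)
{
    struct demo_map *map = dev_id;

    atomic_inc(&map->write);    /* there is (potential) write work */
    atomic_inc(&map->eoi);      /* the ioworker owes an EOI for this irq */
    atomic_inc(&map->io);       /* make sure the ioworker loops once more */
    queue_work(map->wq, &map->register_work);

    return IRQ_HANDLED;
}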
Cc: stable@vger.kernel.org Reported-by: Julien Grall julien@xen.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Stefano Stabellini sstabellini@kernel.org Reviewed-by: Wei Liu wl@xen.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/xen/pvcalls-back.c | 76 +++++++++++++++++++++++++++------------------ 1 file changed, 46 insertions(+), 30 deletions(-)
--- a/drivers/xen/pvcalls-back.c +++ b/drivers/xen/pvcalls-back.c @@ -66,6 +66,7 @@ struct sock_mapping { atomic_t write; atomic_t io; atomic_t release; + atomic_t eoi; void (*saved_data_ready)(struct sock *sk); struct pvcalls_ioworker ioworker; }; @@ -87,7 +88,7 @@ static int pvcalls_back_release_active(s struct pvcalls_fedata *fedata, struct sock_mapping *map);
-static void pvcalls_conn_back_read(void *opaque) +static bool pvcalls_conn_back_read(void *opaque) { struct sock_mapping *map = (struct sock_mapping *)opaque; struct msghdr msg; @@ -107,17 +108,17 @@ static void pvcalls_conn_back_read(void virt_mb();
if (error) - return; + return false;
size = pvcalls_queued(prod, cons, array_size); if (size >= array_size) - return; + return false; spin_lock_irqsave(&map->sock->sk->sk_receive_queue.lock, flags); if (skb_queue_empty(&map->sock->sk->sk_receive_queue)) { atomic_set(&map->read, 0); spin_unlock_irqrestore(&map->sock->sk->sk_receive_queue.lock, flags); - return; + return true; } spin_unlock_irqrestore(&map->sock->sk->sk_receive_queue.lock, flags); wanted = array_size - size; @@ -141,7 +142,7 @@ static void pvcalls_conn_back_read(void ret = inet_recvmsg(map->sock, &msg, wanted, MSG_DONTWAIT); WARN_ON(ret > wanted); if (ret == -EAGAIN) /* shouldn't happen */ - return; + return true; if (!ret) ret = -ENOTCONN; spin_lock_irqsave(&map->sock->sk->sk_receive_queue.lock, flags); @@ -160,10 +161,10 @@ static void pvcalls_conn_back_read(void virt_wmb(); notify_remote_via_irq(map->irq);
- return; + return true; }
-static void pvcalls_conn_back_write(struct sock_mapping *map) +static bool pvcalls_conn_back_write(struct sock_mapping *map) { struct pvcalls_data_intf *intf = map->ring; struct pvcalls_data *data = &map->data; @@ -180,7 +181,7 @@ static void pvcalls_conn_back_write(stru array_size = XEN_FLEX_RING_SIZE(map->ring_order); size = pvcalls_queued(prod, cons, array_size); if (size == 0) - return; + return false;
memset(&msg, 0, sizeof(msg)); msg.msg_flags |= MSG_DONTWAIT; @@ -198,12 +199,11 @@ static void pvcalls_conn_back_write(stru
atomic_set(&map->write, 0); ret = inet_sendmsg(map->sock, &msg, size); - if (ret == -EAGAIN || (ret >= 0 && ret < size)) { + if (ret == -EAGAIN) { atomic_inc(&map->write); atomic_inc(&map->io); + return true; } - if (ret == -EAGAIN) - return;
/* write the data, then update the indexes */ virt_wmb(); @@ -216,9 +216,13 @@ static void pvcalls_conn_back_write(stru } /* update the indexes, then notify the other end */ virt_wmb(); - if (prod != cons + ret) + if (prod != cons + ret) { atomic_inc(&map->write); + atomic_inc(&map->io); + } notify_remote_via_irq(map->irq); + + return true; }
static void pvcalls_back_ioworker(struct work_struct *work) @@ -227,6 +231,7 @@ static void pvcalls_back_ioworker(struct struct pvcalls_ioworker, register_work); struct sock_mapping *map = container_of(ioworker, struct sock_mapping, ioworker); + unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;
while (atomic_read(&map->io) > 0) { if (atomic_read(&map->release) > 0) { @@ -234,10 +239,18 @@ static void pvcalls_back_ioworker(struct return; }
- if (atomic_read(&map->read) > 0) - pvcalls_conn_back_read(map); - if (atomic_read(&map->write) > 0) - pvcalls_conn_back_write(map); + if (atomic_read(&map->read) > 0 && + pvcalls_conn_back_read(map)) + eoi_flags = 0; + if (atomic_read(&map->write) > 0 && + pvcalls_conn_back_write(map)) + eoi_flags = 0; + + if (atomic_read(&map->eoi) > 0 && !atomic_read(&map->write)) { + atomic_set(&map->eoi, 0); + xen_irq_lateeoi(map->irq, eoi_flags); + eoi_flags = XEN_EOI_FLAG_SPURIOUS; + }
atomic_dec(&map->io); } @@ -334,12 +347,9 @@ static struct sock_mapping *pvcalls_new_ goto out; map->bytes = page;
- ret = bind_interdomain_evtchn_to_irqhandler(fedata->dev->otherend_id, - evtchn, - pvcalls_back_conn_event, - 0, - "pvcalls-backend", - map); + ret = bind_interdomain_evtchn_to_irqhandler_lateeoi( + fedata->dev->otherend_id, evtchn, + pvcalls_back_conn_event, 0, "pvcalls-backend", map); if (ret < 0) goto out; map->irq = ret; @@ -873,15 +883,18 @@ static irqreturn_t pvcalls_back_event(in { struct xenbus_device *dev = dev_id; struct pvcalls_fedata *fedata = NULL; + unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;
- if (dev == NULL) - return IRQ_HANDLED; + if (dev) { + fedata = dev_get_drvdata(&dev->dev); + if (fedata) { + pvcalls_back_work(fedata); + eoi_flags = 0; + } + }
- fedata = dev_get_drvdata(&dev->dev); - if (fedata == NULL) - return IRQ_HANDLED; + xen_irq_lateeoi(irq, eoi_flags);
- pvcalls_back_work(fedata); return IRQ_HANDLED; }
@@ -891,12 +904,15 @@ static irqreturn_t pvcalls_back_conn_eve struct pvcalls_ioworker *iow;
if (map == NULL || map->sock == NULL || map->sock->sk == NULL || - map->sock->sk->sk_user_data != map) + map->sock->sk->sk_user_data != map) { + xen_irq_lateeoi(irq, 0); return IRQ_HANDLED; + }
iow = &map->ioworker;
atomic_inc(&map->write); + atomic_inc(&map->eoi); atomic_inc(&map->io); queue_work(iow->wq, &iow->register_work);
@@ -932,7 +948,7 @@ static int backend_connect(struct xenbus goto error; }
- err = bind_interdomain_evtchn_to_irq(dev->otherend_id, evtchn); + err = bind_interdomain_evtchn_to_irq_lateeoi(dev->otherend_id, evtchn); if (err < 0) goto error; fedata->irq = err;
From: Juergen Gross jgross@suse.com
commit c2711441bc961b37bba0615dd7135857d189035f upstream.
In order to reduce the chance of the system becoming unresponsive due to event storms triggered by a misbehaving pcifront, use the lateeoi irq binding for pciback and unmask the event channel only just before leaving the event handling function.
Restructure the handling to support that scheme. Basically an event can come in for two reasons: either a normal request for a pciback action, which is handled in a worker, or a notification that the guest has finished an AER request which pciback had issued.
When an AER request is issued to the guest and a normal pciback action is currently active, issue an EOI early in order to be able to receive another event when the guest has finished the AER request.
Let the worker processing the normal requests run until no further request is pending, instead of starting a new worker in that case. Issue the EOI only just before leaving the worker.
This scheme makes it possible to drop the call to the generic function xen_pcibk_test_and_schedule_op() after processing any request, as the handling of the two request types is now separated more cleanly.
This is part of XSA-332.
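The "at most one EOI per received event" helper described above boils down to this sketch (DEMO_EOI_PENDING and demo_lateeoi() are made-up stand-ins for _EOI_pending and xen_pcibk_lateeoi()):

#include <linux/bitops.h>
#include <xen/events.h>

#define DEMO_EOI_PENDING    2   /* bit number in the per-device flags word */

static inline void demo_lateeoi(unsigned long *flags, int irq,
                                unsigned int eoi_flag)
{
    /* Several completion paths may race to EOI; only the first one wins. */
    if (test_and_clear_bit(DEMO_EOI_PENDING, flags))
        xen_irq_lateeoi(irq, eoi_flag);
}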
Cc: stable@vger.kernel.org Reported-by: Julien Grall julien@xen.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Jan Beulich jbeulich@suse.com Reviewed-by: Wei Liu wl@xen.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/xen/xen-pciback/pci_stub.c | 13 ++++----- drivers/xen/xen-pciback/pciback.h | 12 +++++++- drivers/xen/xen-pciback/pciback_ops.c | 48 ++++++++++++++++++++++++++-------- drivers/xen/xen-pciback/xenbus.c | 2 - 4 files changed, 56 insertions(+), 19 deletions(-)
--- a/drivers/xen/xen-pciback/pci_stub.c +++ b/drivers/xen/xen-pciback/pci_stub.c @@ -734,10 +734,17 @@ static pci_ers_result_t common_process(s wmb(); notify_remote_via_irq(pdev->evtchn_irq);
+ /* Enable IRQ to signal "request done". */ + xen_pcibk_lateeoi(pdev, 0); + ret = wait_event_timeout(xen_pcibk_aer_wait_queue, !(test_bit(_XEN_PCIB_active, (unsigned long *) &sh_info->flags)), 300*HZ);
+ /* Enable IRQ for pcifront request if not already active. */ + if (!test_bit(_PDEVF_op_active, &pdev->flags)) + xen_pcibk_lateeoi(pdev, 0); + if (!ret) { if (test_bit(_XEN_PCIB_active, (unsigned long *)&sh_info->flags)) { @@ -751,12 +758,6 @@ static pci_ers_result_t common_process(s } clear_bit(_PCIB_op_pending, (unsigned long *)&pdev->flags);
- if (test_bit(_XEN_PCIF_active, - (unsigned long *)&sh_info->flags)) { - dev_dbg(&psdev->dev->dev, "schedule pci_conf service\n"); - xen_pcibk_test_and_schedule_op(psdev->pdev); - } - res = (pci_ers_result_t)aer_op->err; return res; } --- a/drivers/xen/xen-pciback/pciback.h +++ b/drivers/xen/xen-pciback/pciback.h @@ -14,6 +14,7 @@ #include <linux/spinlock.h> #include <linux/workqueue.h> #include <linux/atomic.h> +#include <xen/events.h> #include <xen/interface/io/pciif.h>
#define DRV_NAME "xen-pciback" @@ -27,6 +28,8 @@ struct pci_dev_entry { #define PDEVF_op_active (1<<(_PDEVF_op_active)) #define _PCIB_op_pending (1) #define PCIB_op_pending (1<<(_PCIB_op_pending)) +#define _EOI_pending (2) +#define EOI_pending (1<<(_EOI_pending))
struct xen_pcibk_device { void *pci_dev_data; @@ -183,10 +186,15 @@ static inline void xen_pcibk_release_dev irqreturn_t xen_pcibk_handle_event(int irq, void *dev_id); void xen_pcibk_do_op(struct work_struct *data);
+static inline void xen_pcibk_lateeoi(struct xen_pcibk_device *pdev, + unsigned int eoi_flag) +{ + if (test_and_clear_bit(_EOI_pending, &pdev->flags)) + xen_irq_lateeoi(pdev->evtchn_irq, eoi_flag); +} + int xen_pcibk_xenbus_register(void); void xen_pcibk_xenbus_unregister(void); - -void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev); #endif
/* Handles shared IRQs that can to device domain and control domain. */ --- a/drivers/xen/xen-pciback/pciback_ops.c +++ b/drivers/xen/xen-pciback/pciback_ops.c @@ -276,26 +276,41 @@ int xen_pcibk_disable_msix(struct xen_pc return 0; } #endif + +static inline bool xen_pcibk_test_op_pending(struct xen_pcibk_device *pdev) +{ + return test_bit(_XEN_PCIF_active, + (unsigned long *)&pdev->sh_info->flags) && + !test_and_set_bit(_PDEVF_op_active, &pdev->flags); +} + /* * Now the same evtchn is used for both pcifront conf_read_write request * as well as pcie aer front end ack. We use a new work_queue to schedule * xen_pcibk conf_read_write service for avoiding confict with aer_core * do_recovery job which also use the system default work_queue */ -void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev) +static void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev) { + bool eoi = true; + /* Check that frontend is requesting an operation and that we are not * already processing a request */ - if (test_bit(_XEN_PCIF_active, (unsigned long *)&pdev->sh_info->flags) - && !test_and_set_bit(_PDEVF_op_active, &pdev->flags)) { + if (xen_pcibk_test_op_pending(pdev)) { schedule_work(&pdev->op_work); + eoi = false; } /*_XEN_PCIB_active should have been cleared by pcifront. And also make sure xen_pcibk is waiting for ack by checking _PCIB_op_pending*/ if (!test_bit(_XEN_PCIB_active, (unsigned long *)&pdev->sh_info->flags) && test_bit(_PCIB_op_pending, &pdev->flags)) { wake_up(&xen_pcibk_aer_wait_queue); + eoi = false; } + + /* EOI if there was nothing to do. */ + if (eoi) + xen_pcibk_lateeoi(pdev, XEN_EOI_FLAG_SPURIOUS); }
/* Performing the configuration space reads/writes must not be done in atomic @@ -303,10 +318,8 @@ void xen_pcibk_test_and_schedule_op(stru * use of semaphores). This function is intended to be called from a work * queue in process context taking a struct xen_pcibk_device as a parameter */
-void xen_pcibk_do_op(struct work_struct *data) +static void xen_pcibk_do_one_op(struct xen_pcibk_device *pdev) { - struct xen_pcibk_device *pdev = - container_of(data, struct xen_pcibk_device, op_work); struct pci_dev *dev; struct xen_pcibk_dev_data *dev_data = NULL; struct xen_pci_op *op = &pdev->op; @@ -379,16 +392,31 @@ void xen_pcibk_do_op(struct work_struct smp_mb__before_atomic(); /* /after/ clearing PCIF_active */ clear_bit(_PDEVF_op_active, &pdev->flags); smp_mb__after_atomic(); /* /before/ final check for work */ +}
- /* Check to see if the driver domain tried to start another request in - * between clearing _XEN_PCIF_active and clearing _PDEVF_op_active. - */ - xen_pcibk_test_and_schedule_op(pdev); +void xen_pcibk_do_op(struct work_struct *data) +{ + struct xen_pcibk_device *pdev = + container_of(data, struct xen_pcibk_device, op_work); + + do { + xen_pcibk_do_one_op(pdev); + } while (xen_pcibk_test_op_pending(pdev)); + + xen_pcibk_lateeoi(pdev, 0); }
irqreturn_t xen_pcibk_handle_event(int irq, void *dev_id) { struct xen_pcibk_device *pdev = dev_id; + bool eoi; + + /* IRQs might come in before pdev->evtchn_irq is written. */ + if (unlikely(pdev->evtchn_irq != irq)) + pdev->evtchn_irq = irq; + + eoi = test_and_set_bit(_EOI_pending, &pdev->flags); + WARN(eoi, "IRQ while EOI pending\n");
xen_pcibk_test_and_schedule_op(pdev);
--- a/drivers/xen/xen-pciback/xenbus.c +++ b/drivers/xen/xen-pciback/xenbus.c @@ -123,7 +123,7 @@ static int xen_pcibk_do_attach(struct xe
pdev->sh_info = vaddr;
- err = bind_interdomain_evtchn_to_irqhandler( + err = bind_interdomain_evtchn_to_irqhandler_lateeoi( pdev->xdev->otherend_id, remote_evtchn, xen_pcibk_handle_event, 0, DRV_NAME, pdev); if (err < 0) {
From: Juergen Gross jgross@suse.com
commit c44b849cee8c3ac587da3b0980e01f77500d158c upstream.
Instead of disabling the irq when an event is received and enabling it again when it has been handled by the user process, use the lateeoi model.
This is part of XSA-332.
Cc: stable@vger.kernel.org Reported-by: Julien Grall julien@xen.org Signed-off-by: Juergen Gross jgross@suse.com Tested-by: Stefano Stabellini sstabellini@kernel.org Reviewed-by: Stefano Stabellini sstabellini@kernel.org Reviewed-by: Jan Beulich jbeulich@suse.com Reviewed-by: Wei Liu wl@xen.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/xen/evtchn.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
--- a/drivers/xen/evtchn.c +++ b/drivers/xen/evtchn.c @@ -167,7 +167,6 @@ static irqreturn_t evtchn_interrupt(int "Interrupt for port %u, but apparently not enabled; per-user %p\n", evtchn->port, u);
- disable_irq_nosync(irq); evtchn->enabled = false;
spin_lock(&u->ring_prod_lock); @@ -293,7 +292,7 @@ static ssize_t evtchn_write(struct file evtchn = find_evtchn(u, port); if (evtchn && !evtchn->enabled) { evtchn->enabled = true; - enable_irq(irq_from_evtchn(port)); + xen_irq_lateeoi(irq_from_evtchn(port), 0); } }
@@ -393,8 +392,8 @@ static int evtchn_bind_to_user(struct pe if (rc < 0) goto err;
- rc = bind_evtchn_to_irqhandler(port, evtchn_interrupt, 0, - u->name, evtchn); + rc = bind_evtchn_to_irqhandler_lateeoi(port, evtchn_interrupt, 0, + u->name, evtchn); if (rc < 0) goto err;
From: Juergen Gross jgross@suse.com
commit 7beb290caa2adb0a399e735a1e175db9aae0523a upstream.
Today only fifo event channels have a cpu hotplug callback. In order to prepare for more percpu (de)init work, move that callback into events_base.c and add percpu_init() and percpu_deinit() hooks to struct evtchn_ops.
This is part of XSA-332.
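For a backend this ends up looking roughly like the following sketch (demo_* names are made up; the real fifo hooks are added further down in this patch):

/* events_internal.h is a driver-internal header of drivers/xen/events/. */
#include "events_internal.h"

static int demo_percpu_init(unsigned int cpu)
{
    /* allocate/prepare per-cpu state for this backend, if needed */
    return 0;
}

static int demo_percpu_deinit(unsigned int cpu)
{
    /* drain and release per-cpu state before the cpu goes away */
    return 0;
}

static const struct evtchn_ops demo_ops = {
    /* ... the mandatory callbacks (setup, mask, unmask, ...) ... */
    .percpu_init    = demo_percpu_init,
    .percpu_deinit  = demo_percpu_deinit,
};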
Cc: stable@vger.kernel.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Jan Beulich jbeulich@suse.com Reviewed-by: Wei Liu wl@xen.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/xen/events/events_base.c | 25 +++++++++++++++++++++ drivers/xen/events/events_fifo.c | 40 ++++++++++++++++------------------- drivers/xen/events/events_internal.h | 3 ++ 3 files changed, 47 insertions(+), 21 deletions(-)
--- a/drivers/xen/events/events_base.c +++ b/drivers/xen/events/events_base.c @@ -34,6 +34,7 @@ #include <linux/irqnr.h> #include <linux/pci.h> #include <linux/spinlock.h> +#include <linux/cpuhotplug.h>
#ifdef CONFIG_X86 #include <asm/desc.h> @@ -1830,6 +1831,26 @@ static inline void xen_alloc_callback_ve static bool fifo_events = true; module_param(fifo_events, bool, 0);
+static int xen_evtchn_cpu_prepare(unsigned int cpu) +{ + int ret = 0; + + if (evtchn_ops->percpu_init) + ret = evtchn_ops->percpu_init(cpu); + + return ret; +} + +static int xen_evtchn_cpu_dead(unsigned int cpu) +{ + int ret = 0; + + if (evtchn_ops->percpu_deinit) + ret = evtchn_ops->percpu_deinit(cpu); + + return ret; +} + void __init xen_init_IRQ(void) { int ret = -EINVAL; @@ -1840,6 +1861,10 @@ void __init xen_init_IRQ(void) if (ret < 0) xen_evtchn_2l_init();
+ cpuhp_setup_state_nocalls(CPUHP_XEN_EVTCHN_PREPARE, + "xen/evtchn:prepare", + xen_evtchn_cpu_prepare, xen_evtchn_cpu_dead); + evtchn_to_irq = kcalloc(EVTCHN_ROW(xen_evtchn_max_channels()), sizeof(*evtchn_to_irq), GFP_KERNEL); BUG_ON(!evtchn_to_irq); --- a/drivers/xen/events/events_fifo.c +++ b/drivers/xen/events/events_fifo.c @@ -385,21 +385,6 @@ static void evtchn_fifo_resume(void) event_array_pages = 0; }
-static const struct evtchn_ops evtchn_ops_fifo = { - .max_channels = evtchn_fifo_max_channels, - .nr_channels = evtchn_fifo_nr_channels, - .setup = evtchn_fifo_setup, - .bind_to_cpu = evtchn_fifo_bind_to_cpu, - .clear_pending = evtchn_fifo_clear_pending, - .set_pending = evtchn_fifo_set_pending, - .is_pending = evtchn_fifo_is_pending, - .test_and_set_mask = evtchn_fifo_test_and_set_mask, - .mask = evtchn_fifo_mask, - .unmask = evtchn_fifo_unmask, - .handle_events = evtchn_fifo_handle_events, - .resume = evtchn_fifo_resume, -}; - static int evtchn_fifo_alloc_control_block(unsigned cpu) { void *control_block = NULL; @@ -422,19 +407,36 @@ static int evtchn_fifo_alloc_control_blo return ret; }
-static int xen_evtchn_cpu_prepare(unsigned int cpu) +static int evtchn_fifo_percpu_init(unsigned int cpu) { if (!per_cpu(cpu_control_block, cpu)) return evtchn_fifo_alloc_control_block(cpu); return 0; }
-static int xen_evtchn_cpu_dead(unsigned int cpu) +static int evtchn_fifo_percpu_deinit(unsigned int cpu) { __evtchn_fifo_handle_events(cpu, true); return 0; }
+static const struct evtchn_ops evtchn_ops_fifo = { + .max_channels = evtchn_fifo_max_channels, + .nr_channels = evtchn_fifo_nr_channels, + .setup = evtchn_fifo_setup, + .bind_to_cpu = evtchn_fifo_bind_to_cpu, + .clear_pending = evtchn_fifo_clear_pending, + .set_pending = evtchn_fifo_set_pending, + .is_pending = evtchn_fifo_is_pending, + .test_and_set_mask = evtchn_fifo_test_and_set_mask, + .mask = evtchn_fifo_mask, + .unmask = evtchn_fifo_unmask, + .handle_events = evtchn_fifo_handle_events, + .resume = evtchn_fifo_resume, + .percpu_init = evtchn_fifo_percpu_init, + .percpu_deinit = evtchn_fifo_percpu_deinit, +}; + int __init xen_evtchn_fifo_init(void) { int cpu = smp_processor_id(); @@ -448,9 +450,5 @@ int __init xen_evtchn_fifo_init(void)
evtchn_ops = &evtchn_ops_fifo;
- cpuhp_setup_state_nocalls(CPUHP_XEN_EVTCHN_PREPARE, - "xen/evtchn:prepare", - xen_evtchn_cpu_prepare, xen_evtchn_cpu_dead); - return ret; } --- a/drivers/xen/events/events_internal.h +++ b/drivers/xen/events/events_internal.h @@ -69,6 +69,9 @@ struct evtchn_ops {
void (*handle_events)(unsigned cpu); void (*resume)(void); + + int (*percpu_init)(unsigned int cpu); + int (*percpu_deinit)(unsigned int cpu); };
extern const struct evtchn_ops *evtchn_ops;
From: Juergen Gross jgross@suse.com
commit e99502f76271d6bc4e374fe368c50c67a1fd3070 upstream.
In case rogue guests are sending events at high frequency, it might happen that xen_evtchn_do_upcall() won't stop processing events in dom0. As this is done in irq handling, a crash might be the result.
In order to avoid that, delay further inter-domain events after some time in xen_evtchn_do_upcall() by forcing eoi processing into a worker on the same cpu, thus inhibiting new events coming in.
The time after which eoi processing is to be delayed is configurable via a new module parameter "event_loop_timeout", which specifies the maximum event loop time in jiffies (default: 2; the value was chosen after tests showing that 2 was the lowest value with only a slight drop of dom0 network throughput while multiple guests performed an event storm).
How long eoi processing will be delayed can be specified via another parameter "event_eoi_delay" (again in jiffies, default 10, again the value was chosen after testing with different delay values).
This is part of XSA-332.
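For reference, the resulting knobs can be set on the dom0 kernel command line exactly as documented in the kernel-parameters.txt hunk below, e.g.:

    xen.event_loop_timeout=2 xen.event_eoi_delay=10

Since both parameters are registered with mode 0644, they should also be adjustable at runtime; the sysfs path below is assumed from the "xen." module parameter prefix:

    echo 4  > /sys/module/xen/parameters/event_loop_timeout
    echo 20 > /sys/module/xen/parameters/event_eoi_delay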
Cc: stable@vger.kernel.org Reported-by: Julien Grall julien@xen.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Stefano Stabellini sstabellini@kernel.org Reviewed-by: Wei Liu wl@xen.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- Documentation/admin-guide/kernel-parameters.txt | 8 + drivers/xen/events/events_2l.c | 7 drivers/xen/events/events_base.c | 189 +++++++++++++++++++++++- drivers/xen/events/events_fifo.c | 30 +-- drivers/xen/events/events_internal.h | 14 + 5 files changed, 216 insertions(+), 32 deletions(-)
--- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -5828,6 +5828,14 @@ improve timer resolution at the expense of processing more timer interrupts.
+ xen.event_eoi_delay= [XEN] + How long to delay EOI handling in case of event + storms (jiffies). Default is 10. + + xen.event_loop_timeout= [XEN] + After which time (jiffies) the event handling loop + should start to delay EOI handling. Default is 2. + nopv= [X86,XEN,KVM,HYPER_V,VMWARE] Disables the PV optimizations forcing the guest to run as generic guest with no PV drivers. Currently support --- a/drivers/xen/events/events_2l.c +++ b/drivers/xen/events/events_2l.c @@ -161,7 +161,7 @@ static inline xen_ulong_t active_evtchns * a bitset of words which contain pending event bits. The second * level is a bitset of pending events themselves. */ -static void evtchn_2l_handle_events(unsigned cpu) +static void evtchn_2l_handle_events(unsigned cpu, struct evtchn_loop_ctrl *ctrl) { int irq; xen_ulong_t pending_words; @@ -242,10 +242,7 @@ static void evtchn_2l_handle_events(unsi
/* Process port. */ port = (word_idx * BITS_PER_EVTCHN_WORD) + bit_idx; - irq = get_evtchn_to_irq(port); - - if (irq != -1) - generic_handle_irq(irq); + handle_irq_for_port(port, ctrl);
bit_idx = (bit_idx + 1) % BITS_PER_EVTCHN_WORD;
--- a/drivers/xen/events/events_base.c +++ b/drivers/xen/events/events_base.c @@ -35,6 +35,8 @@ #include <linux/pci.h> #include <linux/spinlock.h> #include <linux/cpuhotplug.h> +#include <linux/atomic.h> +#include <linux/ktime.h>
#ifdef CONFIG_X86 #include <asm/desc.h> @@ -65,6 +67,15 @@
#include "events_internal.h"
+#undef MODULE_PARAM_PREFIX +#define MODULE_PARAM_PREFIX "xen." + +static uint __read_mostly event_loop_timeout = 2; +module_param(event_loop_timeout, uint, 0644); + +static uint __read_mostly event_eoi_delay = 10; +module_param(event_eoi_delay, uint, 0644); + const struct evtchn_ops *evtchn_ops;
/* @@ -88,6 +99,7 @@ static DEFINE_RWLOCK(evtchn_rwlock); * irq_mapping_update_lock * evtchn_rwlock * IRQ-desc lock + * percpu eoi_list_lock */
static LIST_HEAD(xen_irq_list_head); @@ -120,6 +132,8 @@ static struct irq_chip xen_pirq_chip; static void enable_dynirq(struct irq_data *data); static void disable_dynirq(struct irq_data *data);
+static DEFINE_PER_CPU(unsigned int, irq_epoch); + static void clear_evtchn_to_irq_row(unsigned row) { unsigned col; @@ -399,17 +413,120 @@ void notify_remote_via_irq(int irq) } EXPORT_SYMBOL_GPL(notify_remote_via_irq);
+struct lateeoi_work { + struct delayed_work delayed; + spinlock_t eoi_list_lock; + struct list_head eoi_list; +}; + +static DEFINE_PER_CPU(struct lateeoi_work, lateeoi); + +static void lateeoi_list_del(struct irq_info *info) +{ + struct lateeoi_work *eoi = &per_cpu(lateeoi, info->eoi_cpu); + unsigned long flags; + + spin_lock_irqsave(&eoi->eoi_list_lock, flags); + list_del_init(&info->eoi_list); + spin_unlock_irqrestore(&eoi->eoi_list_lock, flags); +} + +static void lateeoi_list_add(struct irq_info *info) +{ + struct lateeoi_work *eoi = &per_cpu(lateeoi, info->eoi_cpu); + struct irq_info *elem; + u64 now = get_jiffies_64(); + unsigned long delay; + unsigned long flags; + + if (now < info->eoi_time) + delay = info->eoi_time - now; + else + delay = 1; + + spin_lock_irqsave(&eoi->eoi_list_lock, flags); + + if (list_empty(&eoi->eoi_list)) { + list_add(&info->eoi_list, &eoi->eoi_list); + mod_delayed_work_on(info->eoi_cpu, system_wq, + &eoi->delayed, delay); + } else { + list_for_each_entry_reverse(elem, &eoi->eoi_list, eoi_list) { + if (elem->eoi_time <= info->eoi_time) + break; + } + list_add(&info->eoi_list, &elem->eoi_list); + } + + spin_unlock_irqrestore(&eoi->eoi_list_lock, flags); +} + static void xen_irq_lateeoi_locked(struct irq_info *info) { evtchn_port_t evtchn; + unsigned int cpu;
evtchn = info->evtchn; - if (!VALID_EVTCHN(evtchn)) + if (!VALID_EVTCHN(evtchn) || !list_empty(&info->eoi_list)) return;
+ cpu = info->eoi_cpu; + if (info->eoi_time && info->irq_epoch == per_cpu(irq_epoch, cpu)) { + lateeoi_list_add(info); + return; + } + + info->eoi_time = 0; unmask_evtchn(evtchn); }
+static void xen_irq_lateeoi_worker(struct work_struct *work) +{ + struct lateeoi_work *eoi; + struct irq_info *info; + u64 now = get_jiffies_64(); + unsigned long flags; + + eoi = container_of(to_delayed_work(work), struct lateeoi_work, delayed); + + read_lock_irqsave(&evtchn_rwlock, flags); + + while (true) { + spin_lock(&eoi->eoi_list_lock); + + info = list_first_entry_or_null(&eoi->eoi_list, struct irq_info, + eoi_list); + + if (info == NULL || now < info->eoi_time) { + spin_unlock(&eoi->eoi_list_lock); + break; + } + + list_del_init(&info->eoi_list); + + spin_unlock(&eoi->eoi_list_lock); + + info->eoi_time = 0; + + xen_irq_lateeoi_locked(info); + } + + if (info) + mod_delayed_work_on(info->eoi_cpu, system_wq, + &eoi->delayed, info->eoi_time - now); + + read_unlock_irqrestore(&evtchn_rwlock, flags); +} + +static void xen_cpu_init_eoi(unsigned int cpu) +{ + struct lateeoi_work *eoi = &per_cpu(lateeoi, cpu); + + INIT_DELAYED_WORK(&eoi->delayed, xen_irq_lateeoi_worker); + spin_lock_init(&eoi->eoi_list_lock); + INIT_LIST_HEAD(&eoi->eoi_list); +} + void xen_irq_lateeoi(unsigned int irq, unsigned int eoi_flags) { struct irq_info *info; @@ -429,6 +546,7 @@ EXPORT_SYMBOL_GPL(xen_irq_lateeoi); static void xen_irq_init(unsigned irq) { struct irq_info *info; + #ifdef CONFIG_SMP /* By default all event channels notify CPU#0. */ cpumask_copy(irq_get_affinity_mask(irq), cpumask_of(0)); @@ -443,6 +561,7 @@ static void xen_irq_init(unsigned irq)
set_info_for_irq(irq, info);
+ INIT_LIST_HEAD(&info->eoi_list); list_add_tail(&info->list, &xen_irq_list_head); }
@@ -498,6 +617,9 @@ static void xen_free_irq(unsigned irq)
write_lock_irqsave(&evtchn_rwlock, flags);
+ if (!list_empty(&info->eoi_list)) + lateeoi_list_del(info); + list_del(&info->list);
set_info_for_irq(irq, NULL); @@ -1358,17 +1480,66 @@ void xen_send_IPI_one(unsigned int cpu, notify_remote_via_irq(irq); }
+struct evtchn_loop_ctrl { + ktime_t timeout; + unsigned count; + bool defer_eoi; +}; + +void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl) +{ + int irq; + struct irq_info *info; + + irq = get_evtchn_to_irq(port); + if (irq == -1) + return; + + /* + * Check for timeout every 256 events. + * We are setting the timeout value only after the first 256 + * events in order to not hurt the common case of few loop + * iterations. The 256 is basically an arbitrary value. + * + * In case we are hitting the timeout we need to defer all further + * EOIs in order to ensure to leave the event handling loop rather + * sooner than later. + */ + if (!ctrl->defer_eoi && !(++ctrl->count & 0xff)) { + ktime_t kt = ktime_get(); + + if (!ctrl->timeout) { + kt = ktime_add_ms(kt, + jiffies_to_msecs(event_loop_timeout)); + ctrl->timeout = kt; + } else if (kt > ctrl->timeout) { + ctrl->defer_eoi = true; + } + } + + info = info_for_irq(irq); + + if (ctrl->defer_eoi) { + info->eoi_cpu = smp_processor_id(); + info->irq_epoch = __this_cpu_read(irq_epoch); + info->eoi_time = get_jiffies_64() + event_eoi_delay; + } + + generic_handle_irq(irq); +} + static void __xen_evtchn_do_upcall(void) { struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu); int cpu = smp_processor_id(); + struct evtchn_loop_ctrl ctrl = { 0 };
read_lock(&evtchn_rwlock);
do { vcpu_info->evtchn_upcall_pending = 0;
- xen_evtchn_handle_events(cpu); + xen_evtchn_handle_events(cpu, &ctrl);
BUG_ON(!irqs_disabled());
@@ -1377,6 +1548,13 @@ static void __xen_evtchn_do_upcall(void) } while (vcpu_info->evtchn_upcall_pending);
read_unlock(&evtchn_rwlock); + + /* + * Increment irq_epoch only now to defer EOIs only for + * xen_irq_lateeoi() invocations occurring from inside the loop + * above. + */ + __this_cpu_inc(irq_epoch); }
void xen_evtchn_do_upcall(struct pt_regs *regs) @@ -1825,9 +2003,6 @@ void xen_setup_callback_vector(void) {} static inline void xen_alloc_callback_vector(void) {} #endif
-#undef MODULE_PARAM_PREFIX -#define MODULE_PARAM_PREFIX "xen." - static bool fifo_events = true; module_param(fifo_events, bool, 0);
@@ -1835,6 +2010,8 @@ static int xen_evtchn_cpu_prepare(unsign { int ret = 0;
+ xen_cpu_init_eoi(cpu); + if (evtchn_ops->percpu_init) ret = evtchn_ops->percpu_init(cpu);
@@ -1861,6 +2038,8 @@ void __init xen_init_IRQ(void) if (ret < 0) xen_evtchn_2l_init();
+ xen_cpu_init_eoi(smp_processor_id()); + cpuhp_setup_state_nocalls(CPUHP_XEN_EVTCHN_PREPARE, "xen/evtchn:prepare", xen_evtchn_cpu_prepare, xen_evtchn_cpu_dead); --- a/drivers/xen/events/events_fifo.c +++ b/drivers/xen/events/events_fifo.c @@ -275,19 +275,9 @@ static uint32_t clear_linked(volatile ev return w & EVTCHN_FIFO_LINK_MASK; }
-static void handle_irq_for_port(evtchn_port_t port) -{ - int irq; - - irq = get_evtchn_to_irq(port); - if (irq != -1) - generic_handle_irq(irq); -} - -static void consume_one_event(unsigned cpu, +static void consume_one_event(unsigned cpu, struct evtchn_loop_ctrl *ctrl, struct evtchn_fifo_control_block *control_block, - unsigned priority, unsigned long *ready, - bool drop) + unsigned priority, unsigned long *ready) { struct evtchn_fifo_queue *q = &per_cpu(cpu_queue, cpu); uint32_t head; @@ -320,16 +310,17 @@ static void consume_one_event(unsigned c clear_bit(priority, ready);
if (evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port)) { - if (unlikely(drop)) + if (unlikely(!ctrl)) pr_warn("Dropping pending event for port %u\n", port); else - handle_irq_for_port(port); + handle_irq_for_port(port, ctrl); }
q->head[priority] = head; }
-static void __evtchn_fifo_handle_events(unsigned cpu, bool drop) +static void __evtchn_fifo_handle_events(unsigned cpu, + struct evtchn_loop_ctrl *ctrl) { struct evtchn_fifo_control_block *control_block; unsigned long ready; @@ -341,14 +332,15 @@ static void __evtchn_fifo_handle_events(
while (ready) { q = find_first_bit(&ready, EVTCHN_FIFO_MAX_QUEUES); - consume_one_event(cpu, control_block, q, &ready, drop); + consume_one_event(cpu, ctrl, control_block, q, &ready); ready |= xchg(&control_block->ready, 0); } }
-static void evtchn_fifo_handle_events(unsigned cpu) +static void evtchn_fifo_handle_events(unsigned cpu, + struct evtchn_loop_ctrl *ctrl) { - __evtchn_fifo_handle_events(cpu, false); + __evtchn_fifo_handle_events(cpu, ctrl); }
static void evtchn_fifo_resume(void) @@ -416,7 +408,7 @@ static int evtchn_fifo_percpu_init(unsig
static int evtchn_fifo_percpu_deinit(unsigned int cpu) { - __evtchn_fifo_handle_events(cpu, true); + __evtchn_fifo_handle_events(cpu, NULL); return 0; }
--- a/drivers/xen/events/events_internal.h +++ b/drivers/xen/events/events_internal.h @@ -30,11 +30,15 @@ enum xen_irq_type { */ struct irq_info { struct list_head list; + struct list_head eoi_list; int refcnt; enum xen_irq_type type; /* type */ unsigned irq; evtchn_port_t evtchn; /* event channel */ unsigned short cpu; /* cpu bound */ + unsigned short eoi_cpu; /* EOI must happen on this cpu */ + unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */ + u64 eoi_time; /* Time in jiffies when to EOI. */
union { unsigned short virq; @@ -53,6 +57,8 @@ struct irq_info { #define PIRQ_SHAREABLE (1 << 1) #define PIRQ_MSI_GROUP (1 << 2)
+struct evtchn_loop_ctrl; + struct evtchn_ops { unsigned (*max_channels)(void); unsigned (*nr_channels)(void); @@ -67,7 +73,7 @@ struct evtchn_ops { void (*mask)(evtchn_port_t port); void (*unmask)(evtchn_port_t port);
- void (*handle_events)(unsigned cpu); + void (*handle_events)(unsigned cpu, struct evtchn_loop_ctrl *ctrl); void (*resume)(void);
int (*percpu_init)(unsigned int cpu); @@ -78,6 +84,7 @@ extern const struct evtchn_ops *evtchn_o
extern int **evtchn_to_irq; int get_evtchn_to_irq(evtchn_port_t evtchn); +void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl);
struct irq_info *info_for_irq(unsigned irq); unsigned cpu_from_irq(unsigned irq); @@ -135,9 +142,10 @@ static inline void unmask_evtchn(evtchn_ return evtchn_ops->unmask(port); }
-static inline void xen_evtchn_handle_events(unsigned cpu) +static inline void xen_evtchn_handle_events(unsigned cpu, + struct evtchn_loop_ctrl *ctrl) { - return evtchn_ops->handle_events(cpu); + return evtchn_ops->handle_events(cpu, ctrl); }
static inline void xen_evtchn_resume(void)
From: Juergen Gross jgross@suse.com
commit 5f7f77400ab5b357b5fdb7122c3442239672186c upstream.
In order to avoid high dom0 load due to rogue guests sending events at high frequency, block those events in case there was no action needed in dom0 to handle the events.
This is done by adding a per-event counter, which is set to zero in case an EOI without the XEN_EOI_FLAG_SPURIOUS is received from a backend driver, and incremented when this flag has been set. In case the counter is 2 or higher, delay the EOI by 1 << (cnt - 2) jiffies, but not more than 1 second.
In order not to waste memory, shorten the per-event refcnt to two bytes (it should normally never exceed a value of 2). Add an overflow check to evtchn_get() to make sure the 2 bytes really won't overflow.
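As a purely illustrative aside, the back-off rule above can be modelled in a few lines of standalone C; the eoi_delay() helper and the HZ value below are assumptions made for the example, not code from the patch.

/* Sketch of the delay rule: once the spurious counter reaches 2, the EOI
 * is delayed by 1 << (cnt - 2) jiffies, capped at HZ (one second). */
#include <stdio.h>

#define HZ 250	/* assumed tick rate, for illustration only */

static unsigned int eoi_delay(unsigned int spurious_cnt)
{
	unsigned int delay = 0;

	if (spurious_cnt >= 2) {
		delay = 1U << (spurious_cnt - 2);
		if (delay > HZ)
			delay = HZ;	/* never delay more than one second */
	}
	return delay;
}

int main(void)
{
	for (unsigned int cnt = 0; cnt <= 12; cnt++)
		printf("cnt=%u -> delay=%u jiffies\n", cnt, eoi_delay(cnt));
	return 0;
}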
This is part of XSA-332.
Cc: stable@vger.kernel.org Signed-off-by: Juergen Gross jgross@suse.com Reviewed-by: Jan Beulich jbeulich@suse.com Reviewed-by: Stefano Stabellini sstabellini@kernel.org Reviewed-by: Wei Liu wl@xen.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/xen/events/events_base.c | 27 ++++++++++++++++++++++----- drivers/xen/events/events_internal.h | 3 ++- 2 files changed, 24 insertions(+), 6 deletions(-)
--- a/drivers/xen/events/events_base.c +++ b/drivers/xen/events/events_base.c @@ -461,17 +461,34 @@ static void lateeoi_list_add(struct irq_ spin_unlock_irqrestore(&eoi->eoi_list_lock, flags); }
-static void xen_irq_lateeoi_locked(struct irq_info *info) +static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious) { evtchn_port_t evtchn; unsigned int cpu; + unsigned int delay = 0;
evtchn = info->evtchn; if (!VALID_EVTCHN(evtchn) || !list_empty(&info->eoi_list)) return;
+ if (spurious) { + if ((1 << info->spurious_cnt) < (HZ << 2)) + info->spurious_cnt++; + if (info->spurious_cnt > 1) { + delay = 1 << (info->spurious_cnt - 2); + if (delay > HZ) + delay = HZ; + if (!info->eoi_time) + info->eoi_cpu = smp_processor_id(); + info->eoi_time = get_jiffies_64() + delay; + } + } else { + info->spurious_cnt = 0; + } + cpu = info->eoi_cpu; - if (info->eoi_time && info->irq_epoch == per_cpu(irq_epoch, cpu)) { + if (info->eoi_time && + (info->irq_epoch == per_cpu(irq_epoch, cpu) || delay)) { lateeoi_list_add(info); return; } @@ -508,7 +525,7 @@ static void xen_irq_lateeoi_worker(struc
info->eoi_time = 0;
- xen_irq_lateeoi_locked(info); + xen_irq_lateeoi_locked(info, false); }
if (info) @@ -537,7 +554,7 @@ void xen_irq_lateeoi(unsigned int irq, u info = info_for_irq(irq);
if (info) - xen_irq_lateeoi_locked(info); + xen_irq_lateeoi_locked(info, eoi_flags & XEN_EOI_FLAG_SPURIOUS);
read_unlock_irqrestore(&evtchn_rwlock, flags); } @@ -1441,7 +1458,7 @@ int evtchn_get(evtchn_port_t evtchn) goto done;
err = -EINVAL; - if (info->refcnt <= 0) + if (info->refcnt <= 0 || info->refcnt == SHRT_MAX) goto done;
info->refcnt++; --- a/drivers/xen/events/events_internal.h +++ b/drivers/xen/events/events_internal.h @@ -31,7 +31,8 @@ enum xen_irq_type { struct irq_info { struct list_head list; struct list_head eoi_list; - int refcnt; + short refcnt; + short spurious_cnt; enum xen_irq_type type; /* type */ unsigned irq; evtchn_port_t evtchn; /* event channel */
From: Etienne Carriere etienne.carriere@linaro.org
[ Upstream commit 45b9e04d5ba0b043783dfe2b19bb728e712cb32e ]
The definition of ARCH_COLD_RESET is wrong. Let us fix it according to the SCMI specification.
Link: https://lore.kernel.org/r/20201008143722.21888-5-etienne.carriere@linaro.org Fixes: 95a15d80aa0d ("firmware: arm_scmi: Add RESET protocol in SCMI v2.0") Signed-off-by: Etienne Carriere etienne.carriere@linaro.org Signed-off-by: Sudeep Holla sudeep.holla@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/arm_scmi/reset.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/firmware/arm_scmi/reset.c b/drivers/firmware/arm_scmi/reset.c index 3691bafca0574..86bda46de8eb8 100644 --- a/drivers/firmware/arm_scmi/reset.c +++ b/drivers/firmware/arm_scmi/reset.c @@ -36,9 +36,7 @@ struct scmi_msg_reset_domain_reset { #define EXPLICIT_RESET_ASSERT BIT(1) #define ASYNCHRONOUS_RESET BIT(2) __le32 reset_state; -#define ARCH_RESET_TYPE BIT(31) -#define COLD_RESET_STATE BIT(0) -#define ARCH_COLD_RESET (ARCH_RESET_TYPE | COLD_RESET_STATE) +#define ARCH_COLD_RESET 0 };
struct scmi_msg_reset_notify {
From: Etienne Carriere etienne.carriere@linaro.org
[ Upstream commit 7adb2c8aaaa6a387af7140e57004beba2c04a4c6 ]
SMC/HVC can transmit only one message at a time, as the shared memory needs to be protected and the calls are synchronous.
However, in order to allow multiple threads to send SCMI messages simultaneously, we need a larger pool of memory.
Let us just use a value of 20 to keep it in sync with the mailbox transport implementation. Any other value should work just as well.
Link: https://lore.kernel.org/r/20201008143722.21888-4-etienne.carriere@linaro.org Fixes: 1dc6558062da ("firmware: arm_scmi: Add smc/hvc transport") Cc: Peng Fan peng.fan@nxp.com Signed-off-by: Etienne Carriere etienne.carriere@linaro.org [sudeep.holla: reworded the commit message to indicate the practicality] Signed-off-by: Sudeep Holla sudeep.holla@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/arm_scmi/smc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/firmware/arm_scmi/smc.c b/drivers/firmware/arm_scmi/smc.c index a1537d123e385..22f83af6853a1 100644 --- a/drivers/firmware/arm_scmi/smc.c +++ b/drivers/firmware/arm_scmi/smc.c @@ -149,6 +149,6 @@ static struct scmi_transport_ops scmi_smc_ops = { const struct scmi_desc scmi_smc_desc = { .ops = &scmi_smc_ops, .max_rx_timeout_ms = 30, - .max_msg = 1, + .max_msg = 20, .max_msg_size = 128, };
From: Sumit Garg sumit.garg@linaro.org
[ Upstream commit 722939528a37aa0cb22d441e2045c0cf53e78fb0 ]
Since the addition of session client UUID generation via commit [1], login via the REE kernel method has been disallowed. So fix that by passing the nil UUID in case of the TEE_IOCTL_LOGIN_REE_KERNEL method as well.
Fixes: e33bcbab16d1 ("tee: add support for session's client UUID generation") [1] Signed-off-by: Sumit Garg sumit.garg@linaro.org Signed-off-by: Jens Wiklander jens.wiklander@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/tee/tee_core.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c index 64637e09a0953..2f6199ebf7698 100644 --- a/drivers/tee/tee_core.c +++ b/drivers/tee/tee_core.c @@ -200,7 +200,8 @@ int tee_session_calc_client_uuid(uuid_t *uuid, u32 connection_method, int name_len; int rc;
- if (connection_method == TEE_IOCTL_LOGIN_PUBLIC) { + if (connection_method == TEE_IOCTL_LOGIN_PUBLIC || + connection_method == TEE_IOCTL_LOGIN_REE_KERNEL) { /* Nil UUID to be passed to TEE environment */ uuid_copy(uuid, &uuid_null); return 0;
From: Sudeep Holla sudeep.holla@arm.com
[ Upstream commit 9724722fde8f9bbd2b87340f00b9300c9284001e ]
A few commands provide the list of descriptors only partially and need to be called consecutively until all the descriptors are fetched completely. In such cases, we don't release the buffers and instead reuse them for the consecutive transmits.
However, currently we don't reset the Rx size, which remains set as per the response to the last transmit. This may result in an incorrect response size being interpreted, as the firmware may respond with a size greater than the one set, but we read only up to the size set by the previous response.
Let us reset the receive buffer size to the maximum possible in such cases, as we don't know the exact size of the response.
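As an aside, the failure mode can be modelled in standalone C; everything below (struct xfer, firmware_respond(), MAX_MSG_SIZE) is invented for the illustration and is not SCMI code.

/* Model of the bug: when a transfer buffer is reused across consecutive
 * requests, its receive length must be reset to the maximum before each
 * new request, otherwise the next (possibly larger) response is read only
 * up to the previous response's size. */
#include <stdio.h>
#include <string.h>

#define MAX_MSG_SIZE 128

struct xfer {
	char rx[MAX_MSG_SIZE];
	size_t rx_len;		/* how much of rx we are willing to accept */
};

/* Pretend firmware reply: resp_len bytes of 'tag', clamped to rx_len. */
static void firmware_respond(struct xfer *t, size_t resp_len, char tag)
{
	size_t n = resp_len < t->rx_len ? resp_len : t->rx_len;

	memset(t->rx, tag, n);
	t->rx_len = n;		/* transport records how much was copied */
}

int main(void)
{
	struct xfer t = { .rx_len = MAX_MSG_SIZE };

	firmware_respond(&t, 16, 'a');	/* first partial reply: 16 bytes */
	printf("first reply: %zu bytes\n", t.rx_len);

	/* Bug: reuse the xfer without resetting rx_len. */
	firmware_respond(&t, 64, 'b');	/* gets truncated to 16 bytes */
	printf("second reply (no reset): %zu bytes\n", t.rx_len);

	/* Fix: reset the receive size to the maximum before reusing. */
	t.rx_len = MAX_MSG_SIZE;
	firmware_respond(&t, 64, 'c');
	printf("second reply (with reset): %zu bytes\n", t.rx_len);
	return 0;
}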
Link: https://lore.kernel.org/r/20201012141746.32575-1-sudeep.holla@arm.com Fixes: b6f20ff8bd94 ("firmware: arm_scmi: add common infrastructure and support for base protocol") Reported-by: Etienne Carriere etienne.carriere@linaro.org Signed-off-by: Sudeep Holla sudeep.holla@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/arm_scmi/base.c | 2 ++ drivers/firmware/arm_scmi/clock.c | 2 ++ drivers/firmware/arm_scmi/common.h | 2 ++ drivers/firmware/arm_scmi/driver.c | 8 ++++++++ drivers/firmware/arm_scmi/perf.c | 2 ++ drivers/firmware/arm_scmi/sensors.c | 2 ++ 6 files changed, 18 insertions(+)
diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c index 9853bd3c4d456..017e5d8bd869a 100644 --- a/drivers/firmware/arm_scmi/base.c +++ b/drivers/firmware/arm_scmi/base.c @@ -197,6 +197,8 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle, protocols_imp[tot_num_ret + loop] = *(list + loop);
tot_num_ret += loop_num_ret; + + scmi_reset_rx_to_maxsz(handle, t); } while (loop_num_ret);
scmi_xfer_put(handle, t); diff --git a/drivers/firmware/arm_scmi/clock.c b/drivers/firmware/arm_scmi/clock.c index 75e39882746e1..fa3ad3a150c36 100644 --- a/drivers/firmware/arm_scmi/clock.c +++ b/drivers/firmware/arm_scmi/clock.c @@ -192,6 +192,8 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id, }
tot_rate_cnt += num_returned; + + scmi_reset_rx_to_maxsz(handle, t); /* * check for both returned and remaining to avoid infinite * loop due to buggy firmware diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h index c113e578cc6ce..6db59a7ac8531 100644 --- a/drivers/firmware/arm_scmi/common.h +++ b/drivers/firmware/arm_scmi/common.h @@ -147,6 +147,8 @@ int scmi_do_xfer_with_response(const struct scmi_handle *h, struct scmi_xfer *xfer); int scmi_xfer_get_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id, size_t tx_size, size_t rx_size, struct scmi_xfer **p); +void scmi_reset_rx_to_maxsz(const struct scmi_handle *handle, + struct scmi_xfer *xfer); int scmi_handle_put(const struct scmi_handle *handle); struct scmi_handle *scmi_handle_get(struct device *dev); void scmi_set_handle(struct scmi_device *scmi_dev); diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c index 03ec74242c141..28a3e4902ea4e 100644 --- a/drivers/firmware/arm_scmi/driver.c +++ b/drivers/firmware/arm_scmi/driver.c @@ -402,6 +402,14 @@ int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer) return ret; }
+void scmi_reset_rx_to_maxsz(const struct scmi_handle *handle, + struct scmi_xfer *xfer) +{ + struct scmi_info *info = handle_to_scmi_info(handle); + + xfer->rx.len = info->desc->max_msg_size; +} + #define SCMI_MAX_RESPONSE_TIMEOUT (2 * MSEC_PER_SEC)
/** diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c index 3e1e87012c95b..3e8b548a12b62 100644 --- a/drivers/firmware/arm_scmi/perf.c +++ b/drivers/firmware/arm_scmi/perf.c @@ -304,6 +304,8 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain, }
tot_opp_cnt += num_returned; + + scmi_reset_rx_to_maxsz(handle, t); /* * check for both returned and remaining to avoid infinite * loop due to buggy firmware diff --git a/drivers/firmware/arm_scmi/sensors.c b/drivers/firmware/arm_scmi/sensors.c index 1af0ad362e823..4beee439b84ba 100644 --- a/drivers/firmware/arm_scmi/sensors.c +++ b/drivers/firmware/arm_scmi/sensors.c @@ -166,6 +166,8 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle, }
desc_index += num_returned; + + scmi_reset_rx_to_maxsz(handle, t); /* * check for both returned and remaining to avoid infinite * loop due to buggy firmware
From: Jiri Slaby jslaby@suse.cz
[ Upstream commit f2ac57a4c49d40409c21c82d23b5706df9b438af ]
GCC 10 optimizes the scheduler code differently than its predecessors.
When CONFIG_DEBUG_SECTION_MISMATCH=y, the Makefile forces GCC not to inline some functions (-fno-inline-functions-called-once). Before GCC 10, the non-inlined __schedule() starts with the usual prologue:
push %bp mov %sp, %bp
So the ORC unwinder simply picks stack pointer from %bp and unwinds from __schedule() just perfectly:
$ cat /proc/1/stack [<0>] ep_poll+0x3e9/0x450 [<0>] do_epoll_wait+0xaa/0xc0 [<0>] __x64_sys_epoll_wait+0x1a/0x20 [<0>] do_syscall_64+0x33/0x40 [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
But now, with GCC 10, there is no %bp prologue in __schedule():
$ cat /proc/1/stack <nothing>
The ORC entry of the point in __schedule() is:
sp:sp+88 bp:last_sp-48 type:call end:0
In this case, nobody subtracts sizeof "struct inactive_task_frame" in __unwind_start(). The struct is put on the stack by __switch_to_asm() and only then __switch_to_asm() stores %sp to task->thread.sp. But we start unwinding from a point in __schedule() (stored in frame->ret_addr by 'call') and not in __switch_to_asm().
So for these example values in __unwind_start():
sp=ffff94b50001fdc8 bp=ffff8e1f41d29340 ip=__schedule+0x1f0
The stack is:
ffff94b50001fdc8: ffff8e1f41578000 # struct inactive_task_frame ffff94b50001fdd0: 0000000000000000 ffff94b50001fdd8: ffff8e1f41d29340 ffff94b50001fde0: ffff8e1f41611d40 # ... ffff94b50001fde8: ffffffff93c41920 # bx ffff94b50001fdf0: ffff8e1f41d29340 # bp ffff94b50001fdf8: ffffffff9376cad0 # ret_addr (and end of the struct)
0xffffffff9376cad0 is __schedule+0x1f0 (after the call to __switch_to_asm). Now follow those 88 bytes from the ORC entry (sp+88). The entry is correct, __schedule() really pushes 48 bytes (8*7) + 32 bytes via subq to store some local values (like 4U below). So to unwind, look at the offset 88-sizeof(long) = 0x50 from here:
ffff94b50001fe00: ffff8e1f41578618 ffff94b50001fe08: 00000cc000000255 ffff94b50001fe10: 0000000500000004 ffff94b50001fe18: 7793fab6956b2d00 # NOTE (see below) ffff94b50001fe20: ffff8e1f41578000 ffff94b50001fe28: ffff8e1f41578000 ffff94b50001fe30: ffff8e1f41578000 ffff94b50001fe38: ffff8e1f41578000 ffff94b50001fe40: ffff94b50001fed8 ffff94b50001fe48: ffff8e1f41577ff0 ffff94b50001fe50: ffffffff9376cf12
Here ^^^^^^^^^^^^^^^^ is the correct ret addr from __schedule(). It translates to schedule+0x42 (insn after a call to __schedule()).
BUT, unwind_next_frame() tries to take the address starting from 0xffff94b50001fdc8. That is exactly from thread.sp+88-sizeof(long) = 0xffff94b50001fdc8+88-8 = 0xffff94b50001fe18, which is garbage marked as NOTE above. So this quits the unwinding as 7793fab6956b2d00 is obviously not a kernel address.
There was a fix to skip 'struct inactive_task_frame' in unwind_get_return_address_ptr in the following commit:
187b96db5ca7 ("x86/unwind/orc: Fix unwind_get_return_address_ptr() for inactive tasks")
But we need to skip the struct already in the unwinder proper. So subtract the size (increase the stack pointer) of the structure in __unwind_start() directly. This allows for removal of the code added by commit 187b96db5ca7 completely, as the address is now at '(unsigned long *)state->sp - 1', the same as in the generic case.
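As a purely illustrative aside, the pointer arithmetic of the fix can be sketched in standalone C; the fake_frame layout below is invented and only stands in for struct inactive_task_frame.

/* Skip the frame that __switch_to_asm() saved on the sleeping task's
 * stack and read the return address just below the resulting stack
 * pointer, exactly as the generic case does. */
#include <stdio.h>

struct fake_frame {		/* stand-in for struct inactive_task_frame */
	unsigned long bp;
	unsigned long ret_addr;	/* last member, directly below the caller's sp */
};

int main(void)
{
	unsigned long stack[8] = { 0 };
	struct fake_frame *frame = (struct fake_frame *)stack;

	frame->ret_addr = 0xffffffff9376cad0UL;	/* "return into __schedule()" */

	/* thread.sp points at the saved frame; unwinding now starts above it. */
	unsigned long sp = (unsigned long)stack + sizeof(*frame);
	unsigned long *ret = (unsigned long *)sp - 1;	/* same as the generic case */

	printf("unwind starts at sp=%p, ret_addr=%#lx\n", (void *)sp, *ret);
	return 0;
}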
[ mingo: Cleaned up the changelog a bit, for better readability. ]
Fixes: ee9f8fce9964 ("x86/unwind: Add the ORC unwinder") Bug: https://bugzilla.suse.com/show_bug.cgi?id=1176907 Signed-off-by: Jiri Slaby jslaby@suse.cz Signed-off-by: Ingo Molnar mingo@kernel.org Link: https://lore.kernel.org/r/20201014053051.24199-1-jslaby@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/unwind_orc.c | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c index ec88bbe08a328..4a96aa3de7d8a 100644 --- a/arch/x86/kernel/unwind_orc.c +++ b/arch/x86/kernel/unwind_orc.c @@ -320,19 +320,12 @@ EXPORT_SYMBOL_GPL(unwind_get_return_address);
unsigned long *unwind_get_return_address_ptr(struct unwind_state *state) { - struct task_struct *task = state->task; - if (unwind_done(state)) return NULL;
if (state->regs) return &state->regs->ip;
- if (task != current && state->sp == task->thread.sp) { - struct inactive_task_frame *frame = (void *)task->thread.sp; - return &frame->ret_addr; - } - if (state->sp) return (unsigned long *)state->sp - 1;
@@ -662,7 +655,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task, } else { struct inactive_task_frame *frame = (void *)task->thread.sp;
- state->sp = task->thread.sp; + state->sp = task->thread.sp + sizeof(*frame); state->bp = READ_ONCE_NOCHECK(frame->bp); state->ip = READ_ONCE_NOCHECK(frame->ret_addr); state->signal = (void *)state->ip == ret_from_fork;
From: Cristian Marussi cristian.marussi@arm.com
[ Upstream commit c7821c2d9c0dda0adf2bcf88e79b02a19a430be4 ]
When a protocol registers its events, the notification core takes care to rescan the hashtable of pending event handlers and activate all the possibly existent handlers referring to any of the events that are just registered by the new protocol. When a pending handler becomes active the core requests and enables the corresponding events in the SCMI firmware.
If, for whatever reason, the enable fails, such an invalid event handler must finally be removed and freed. Let us make sure to use the scmi_put_active_handler() helper, which properly handles the needed additional locking.
Failing to properly acquire all the needed mutexes exposes a race that leads to the following splat being observed:
WARNING: CPU: 0 PID: 388 at lib/refcount.c:28 refcount_warn_saturate+0xf8/0x148 Hardware name: ARM LTD ARM Juno Development Platform/ARM Juno Development Platform, BIOS EDK II Jun 30 2020 pstate: 40000005 (nZcv daif -PAN -UAO BTYPE=--) pc : refcount_warn_saturate+0xf8/0x148 lr : refcount_warn_saturate+0xf8/0x148 Call trace: refcount_warn_saturate+0xf8/0x148 scmi_put_handler_unlocked.isra.10+0x204/0x208 scmi_put_handler+0x50/0xa0 scmi_unregister_notifier+0x1bc/0x240 scmi_notify_tester_remove+0x4c/0x68 [dummy_scmi_consumer] scmi_dev_remove+0x54/0x68 device_release_driver_internal+0x114/0x1e8 driver_detach+0x58/0xe8 bus_remove_driver+0x88/0xe0 driver_unregister+0x38/0x68 scmi_driver_unregister+0x1c/0x28 scmi_drv_exit+0x1c/0xae0 [dummy_scmi_consumer] __arm64_sys_delete_module+0x1a4/0x268 el0_svc_common.constprop.3+0x94/0x178 do_el0_svc+0x2c/0x98 el0_sync_handler+0x148/0x1a8 el0_sync+0x158/0x180
Link: https://lore.kernel.org/r/20201013133109.49821-1-cristian.marussi@arm.com Fixes: e7c215f358a35 ("firmware: arm_scmi: Add notification callbacks-registration") Signed-off-by: Cristian Marussi cristian.marussi@arm.com Signed-off-by: Sudeep Holla sudeep.holla@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/arm_scmi/notify.c | 20 +++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c index 4731daaacd19e..4d9f6de3a7fae 100644 --- a/drivers/firmware/arm_scmi/notify.c +++ b/drivers/firmware/arm_scmi/notify.c @@ -1403,15 +1403,21 @@ static void scmi_protocols_late_init(struct work_struct *work) "finalized PENDING handler - key:%X\n", hndl->key); ret = scmi_event_handler_enable_events(hndl); + if (ret) { + dev_dbg(ni->handle->dev, + "purging INVALID handler - key:%X\n", + hndl->key); + scmi_put_active_handler(ni, hndl); + } } else { ret = scmi_valid_pending_handler(ni, hndl); - } - if (ret) { - dev_dbg(ni->handle->dev, - "purging PENDING handler - key:%X\n", - hndl->key); - /* this hndl can be only a pending one */ - scmi_put_handler_unlocked(ni, hndl); + if (ret) { + dev_dbg(ni->handle->dev, + "purging PENDING handler - key:%X\n", + hndl->key); + /* this hndl can be only a pending one */ + scmi_put_handler_unlocked(ni, hndl); + } } } mutex_unlock(&ni->pending_mtx);
From: Florian Fainelli f.fainelli@gmail.com
[ Upstream commit b9ceca6be43233845be70792be9b5ab315d2e010 ]
When more than a single SCMI device are present in the system, the creation of the notification workqueue with the WQ_SYSFS flag will lead to the following sysfs duplicate node warning:
sysfs: cannot create duplicate filename '/devices/virtual/workqueue/scmi_notify' CPU: 0 PID: 20 Comm: kworker/0:1 Not tainted 5.9.0-gdf4dd84a3f7d #29 Hardware name: Broadcom STB (Flattened Device Tree) Workqueue: events deferred_probe_work_func Backtrace: show_stack + 0x20/0x24 dump_stack + 0xbc/0xe0 sysfs_warn_dup + 0x70/0x80 sysfs_create_dir_ns + 0x15c/0x1a4 kobject_add_internal + 0x140/0x4d0 kobject_add + 0xc8/0x138 device_add + 0x1dc/0xc20 device_register + 0x24/0x28 workqueue_sysfs_register + 0xe4/0x1f0 alloc_workqueue + 0x448/0x6ac scmi_notification_init + 0x78/0x1dc scmi_probe + 0x268/0x4fc platform_drv_probe + 0x70/0xc8 really_probe + 0x184/0x728 driver_probe_device + 0xa4/0x278 __device_attach_driver + 0xe8/0x148 bus_for_each_drv + 0x108/0x158 __device_attach + 0x190/0x234 device_initial_probe + 0x1c/0x20 bus_probe_device + 0xdc/0xec deferred_probe_work_func + 0xd4/0x11c process_one_work + 0x420/0x8f0 worker_thread + 0x4fc/0x91c kthread + 0x21c/0x22c ret_from_fork + 0x14/0x20 kobject_add_internal failed for scmi_notify with -EEXIST, don't try to register things with the same name in the same directory. arm-scmi brcm_scmi@1: SCMI Notifications - Initialization Failed. arm-scmi brcm_scmi@1: SCMI Notifications NOT available. arm-scmi brcm_scmi@1: SCMI Protocol v1.0 'brcm-scmi:' Firmware version 0x1
Fix this by using dev_name(handle->dev) which guarantees that the name is unique and this also helps correlate which notification workqueue corresponds to which SCMI device instance.
Link: https://lore.kernel.org/r/20201014021737.287340-1-f.fainelli@gmail.com Fixes: bd31b249692e ("firmware: arm_scmi: Add notification dispatch and delivery") Signed-off-by: Florian Fainelli f.fainelli@gmail.com [sudeep.holla: trimmed backtrace to remove all unwanted hexcodes and timestamps] Signed-off-by: Sudeep Holla sudeep.holla@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/arm_scmi/notify.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c index 4d9f6de3a7fae..51c5a376fb472 100644 --- a/drivers/firmware/arm_scmi/notify.c +++ b/drivers/firmware/arm_scmi/notify.c @@ -1474,7 +1474,7 @@ int scmi_notification_init(struct scmi_handle *handle) ni->gid = gid; ni->handle = handle;
- ni->notify_wq = alloc_workqueue("scmi_notify", + ni->notify_wq = alloc_workqueue(dev_name(handle->dev), WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS, 0); if (!ni->notify_wq)
From: Juergen Gross jgross@suse.com
[ Upstream commit abee7c494d8c41bb388839bccc47e06247f0d7de ]
When running in lazy TLB mode the currently active page tables might be the ones of a previous process, e.g. when running a kernel thread.
This can be problematic in case kernel code is being modified via text_poke() in a kernel thread, and on another processor exit_mmap() is active for the process which was running on the first cpu before the kernel thread.
As text_poke() is using a temporary address space and the former address space (obtained via cpu_tlbstate.loaded_mm) is restored afterwards, a race is possible in case the cpu on which exit_mmap() is running wants to make sure there are no stale references to that address space on any active cpu (this is e.g. required when running as a Xen PV guest, where this problem has been observed and analyzed).
In order to avoid that, drop off TLB lazy mode before switching to the temporary address space.
Fixes: cefa929c034eb5d ("x86/mm: Introduce temporary mm structs") Signed-off-by: Juergen Gross jgross@suse.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Link: https://lkml.kernel.org/r/20201009144225.12019-1-jgross@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/alternative.c | 9 +++++++++ 1 file changed, 9 insertions(+)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index cdaab30880b91..cd6be6f143e85 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -807,6 +807,15 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm) temp_mm_state_t temp_state;
lockdep_assert_irqs_disabled(); + + /* + * Make sure not to be in TLB lazy mode, as otherwise we'll end up + * with a stale address space WITHOUT being in lazy mode after + * restoring the previous mm. + */ + if (this_cpu_read(cpu_tlbstate.is_lazy)) + leave_mm(smp_processor_id()); + temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm); switch_mm_irqs_off(NULL, mm, current);
[ Upstream commit 43ecf7b46f2688fd37909801aee264f288b3917b ]
Kmemleak pointed out to us that ionic_rx_flush() is sending skbs into napi_gro_XXX with a disabled napi context, and these end up getting lost and leaked. We can safely remove the flush.
Fixes: 0f3154e6bcb3 ("ionic: Add Tx and Rx handling") Signed-off-by: Shannon Nelson snelson@pensando.io Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/pensando/ionic/ionic_lif.c | 1 - drivers/net/ethernet/pensando/ionic/ionic_txrx.c | 13 ------------- drivers/net/ethernet/pensando/ionic/ionic_txrx.h | 1 - 3 files changed, 15 deletions(-)
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c index 26988ad7ec979..8867d4ac871c1 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c @@ -1512,7 +1512,6 @@ static void ionic_txrx_deinit(struct ionic_lif *lif) if (lif->rxqcqs) { for (i = 0; i < lif->nxqs; i++) { ionic_lif_qcq_deinit(lif, lif->rxqcqs[i].qcq); - ionic_rx_flush(&lif->rxqcqs[i].qcq->cq); ionic_rx_empty(&lif->rxqcqs[i].qcq->q); } } diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c index def65fee27b5a..39e85870c15e9 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c @@ -253,19 +253,6 @@ static bool ionic_rx_service(struct ionic_cq *cq, struct ionic_cq_info *cq_info) return true; }
-void ionic_rx_flush(struct ionic_cq *cq) -{ - struct ionic_dev *idev = &cq->lif->ionic->idev; - u32 work_done; - - work_done = ionic_cq_service(cq, cq->num_descs, - ionic_rx_service, NULL, NULL); - - if (work_done) - ionic_intr_credits(idev->intr_ctrl, cq->bound_intr->index, - work_done, IONIC_INTR_CRED_RESET_COALESCE); -} - static struct page *ionic_rx_page_alloc(struct ionic_queue *q, dma_addr_t *dma_addr) { diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.h b/drivers/net/ethernet/pensando/ionic/ionic_txrx.h index a5883be0413f6..7667b72232b8a 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.h +++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.h @@ -4,7 +4,6 @@ #ifndef _IONIC_TXRX_H_ #define _IONIC_TXRX_H_
-void ionic_rx_flush(struct ionic_cq *cq); void ionic_tx_flush(struct ionic_cq *cq);
void ionic_rx_fill(struct ionic_queue *q);
From: Parav Pandit parav@nvidia.com
[ Upstream commit fbdd0049d98d44914fc57d4b91f867f4996c787b ]
When a mlx5 core devlink instance is reloaded in different net namespace, its associated IB device is deleted and recreated.
Example sequence is: $ ip netns add foo $ devlink dev reload pci/0000:00:08.0 netns foo $ ip netns del foo
The mlx5 IB device needs to attach and detach the netdevice to it through the netdev notifier chain during the load and unload sequence. Below is a call graph of the unload flow.
cleanup_net()
  down_read(&pernet_ops_rwsem); <- first sem acquired
    ops_pre_exit_list()
      pre_exit()
        devlink_pernet_pre_exit()
          devlink_reload()
            mlx5_devlink_reload_down()
              mlx5_unload_one()
              [...]
              mlx5_ib_remove()
                mlx5_ib_unbind_slave_port()
                  mlx5_remove_netdev_notifier()
                    unregister_netdevice_notifier()
                      down_write(&pernet_ops_rwsem); <- recursive lock
Hence, when the net namespace is deleted, the mlx5 reload results in a deadlock.
When the deadlock occurs, the devlink mutex is also held. This not only deadlocks the mlx5 device under reload, but also deadlocks all processes that attempt to access unrelated devlink devices.
Hence, fix this by having the mlx5 IB driver register for a per-net netdev notifier instead of the global one, which operates on the net namespace without holding the pernet_ops_rwsem.
Fixes: 4383cfcc65e7 ("net/mlx5: Add devlink reload") Link: https://lore.kernel.org/r/20201026134359.23150-1-parav@nvidia.com Signed-off-by: Parav Pandit parav@nvidia.com Signed-off-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/mlx5/main.c | 6 ++++-- .../net/ethernet/mellanox/mlx5/core/lib/mlx5.h | 5 ----- include/linux/mlx5/driver.h | 18 ++++++++++++++++++ 3 files changed, 22 insertions(+), 7 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c index b805cc8124657..2a7b5ffb2a2ef 100644 --- a/drivers/infiniband/hw/mlx5/main.c +++ b/drivers/infiniband/hw/mlx5/main.c @@ -3318,7 +3318,8 @@ static int mlx5_add_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num) int err;
dev->port[port_num].roce.nb.notifier_call = mlx5_netdev_event; - err = register_netdevice_notifier(&dev->port[port_num].roce.nb); + err = register_netdevice_notifier_net(mlx5_core_net(dev->mdev), + &dev->port[port_num].roce.nb); if (err) { dev->port[port_num].roce.nb.notifier_call = NULL; return err; @@ -3330,7 +3331,8 @@ static int mlx5_add_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num) static void mlx5_remove_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num) { if (dev->port[port_num].roce.nb.notifier_call) { - unregister_netdevice_notifier(&dev->port[port_num].roce.nb); + unregister_netdevice_notifier_net(mlx5_core_net(dev->mdev), + &dev->port[port_num].roce.nb); dev->port[port_num].roce.nb.notifier_call = NULL; } } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h index d046db7bb047d..3a9fa629503f0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h @@ -90,9 +90,4 @@ int mlx5_create_encryption_key(struct mlx5_core_dev *mdev, u32 key_type, u32 *p_key_id); void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id);
-static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev) -{ - return devlink_net(priv_to_devlink(dev)); -} - #endif diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index 372100c755e7f..e30be3dd5be0e 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -1212,4 +1212,22 @@ static inline bool mlx5_is_roce_enabled(struct mlx5_core_dev *dev) return val.vbool; }
+/** + * mlx5_core_net - Provide net namespace of the mlx5_core_dev + * @dev: mlx5 core device + * + * mlx5_core_net() returns the net namespace of mlx5 core device. + * This can be called only in below described limited context. + * (a) When a devlink instance for mlx5_core is registered and + * when devlink reload operation is disabled. + * or + * (b) during devlink reload reload_down() and reload_up callbacks + * where it is ensured that devlink instance's net namespace is + * stable. + */ +static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev) +{ + return devlink_net(priv_to_devlink(dev)); +} + #endif /* MLX5_DRIVER_H */
From: Amit Cohen amcohen@nvidia.com
[ Upstream commit 0daf2bf5a2dcf33d446b76360908f109816e2e21 ]
Each EMAD transaction stores the skb used to issue the EMAD request ('trans->tx_skb') so that the request could be retried in case of a timeout. The skb can be freed when a corresponding response is received or as part of the retry logic (e.g., failed retransmit, exceeded maximum number of retries).
The two tasks (i.e., response processing and retransmits) are synchronized by the atomic 'trans->active' field which ensures that responses to inactive transactions are ignored.
In case of a failed retransmit the transaction is finished and all of its resources are freed. However, the current code does not mark it as inactive. Syzkaller was able to hit a race condition in which a concurrent response is processed while the transaction's resources are being freed, resulting in a use-after-free [1].
Fix the issue by making sure to mark the transaction as inactive after a failed retransmit and free its resources only if a concurrent task did not already do that.
[1] BUG: KASAN: use-after-free in consume_skb+0x30/0x370 net/core/skbuff.c:833 Read of size 4 at addr ffff88804f570494 by task syz-executor.0/1004
CPU: 0 PID: 1004 Comm: syz-executor.0 Not tainted 5.8.0-rc7+ #68 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014 Call Trace: __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0xf6/0x16e lib/dump_stack.c:118 print_address_description.constprop.0+0x1c/0x250 mm/kasan/report.c:383 __kasan_report mm/kasan/report.c:513 [inline] kasan_report.cold+0x1f/0x37 mm/kasan/report.c:530 check_memory_region_inline mm/kasan/generic.c:186 [inline] check_memory_region+0x14e/0x1b0 mm/kasan/generic.c:192 instrument_atomic_read include/linux/instrumented.h:56 [inline] atomic_read include/asm-generic/atomic-instrumented.h:27 [inline] refcount_read include/linux/refcount.h:147 [inline] skb_unref include/linux/skbuff.h:1044 [inline] consume_skb+0x30/0x370 net/core/skbuff.c:833 mlxsw_emad_trans_finish+0x64/0x1c0 drivers/net/ethernet/mellanox/mlxsw/core.c:592 mlxsw_emad_process_response drivers/net/ethernet/mellanox/mlxsw/core.c:651 [inline] mlxsw_emad_rx_listener_func+0x5c9/0xac0 drivers/net/ethernet/mellanox/mlxsw/core.c:672 mlxsw_core_skb_receive+0x4df/0x770 drivers/net/ethernet/mellanox/mlxsw/core.c:2063 mlxsw_pci_cqe_rdq_handle drivers/net/ethernet/mellanox/mlxsw/pci.c:595 [inline] mlxsw_pci_cq_tasklet+0x12a6/0x2520 drivers/net/ethernet/mellanox/mlxsw/pci.c:651 tasklet_action_common.isra.0+0x13f/0x3e0 kernel/softirq.c:550 __do_softirq+0x223/0x964 kernel/softirq.c:292 asm_call_on_stack+0x12/0x20 arch/x86/entry/entry_64.S:711
Allocated by task 1006: save_stack+0x1b/0x40 mm/kasan/common.c:48 set_track mm/kasan/common.c:56 [inline] __kasan_kmalloc mm/kasan/common.c:494 [inline] __kasan_kmalloc.constprop.0+0xc2/0xd0 mm/kasan/common.c:467 slab_post_alloc_hook mm/slab.h:586 [inline] slab_alloc_node mm/slub.c:2824 [inline] slab_alloc mm/slub.c:2832 [inline] kmem_cache_alloc+0xcd/0x2e0 mm/slub.c:2837 __build_skb+0x21/0x60 net/core/skbuff.c:311 __netdev_alloc_skb+0x1e2/0x360 net/core/skbuff.c:464 netdev_alloc_skb include/linux/skbuff.h:2810 [inline] mlxsw_emad_alloc drivers/net/ethernet/mellanox/mlxsw/core.c:756 [inline] mlxsw_emad_reg_access drivers/net/ethernet/mellanox/mlxsw/core.c:787 [inline] mlxsw_core_reg_access_emad+0x1ab/0x1420 drivers/net/ethernet/mellanox/mlxsw/core.c:1817 mlxsw_reg_trans_query+0x39/0x50 drivers/net/ethernet/mellanox/mlxsw/core.c:1831 mlxsw_sp_sb_pm_occ_clear drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c:260 [inline] mlxsw_sp_sb_occ_max_clear+0xbff/0x10a0 drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c:1365 mlxsw_devlink_sb_occ_max_clear+0x76/0xb0 drivers/net/ethernet/mellanox/mlxsw/core.c:1037 devlink_nl_cmd_sb_occ_max_clear_doit+0x1ec/0x280 net/core/devlink.c:1765 genl_family_rcv_msg_doit net/netlink/genetlink.c:669 [inline] genl_family_rcv_msg net/netlink/genetlink.c:714 [inline] genl_rcv_msg+0x617/0x980 net/netlink/genetlink.c:731 netlink_rcv_skb+0x152/0x440 net/netlink/af_netlink.c:2470 genl_rcv+0x24/0x40 net/netlink/genetlink.c:742 netlink_unicast_kernel net/netlink/af_netlink.c:1304 [inline] netlink_unicast+0x53a/0x750 net/netlink/af_netlink.c:1330 netlink_sendmsg+0x850/0xd90 net/netlink/af_netlink.c:1919 sock_sendmsg_nosec net/socket.c:651 [inline] sock_sendmsg+0x150/0x190 net/socket.c:671 ____sys_sendmsg+0x6d8/0x840 net/socket.c:2359 ___sys_sendmsg+0xff/0x170 net/socket.c:2413 __sys_sendmsg+0xe5/0x1b0 net/socket.c:2446 do_syscall_64+0x56/0xa0 arch/x86/entry/common.c:384 entry_SYSCALL_64_after_hwframe+0x44/0xa9
Freed by task 73: save_stack+0x1b/0x40 mm/kasan/common.c:48 set_track mm/kasan/common.c:56 [inline] kasan_set_free_info mm/kasan/common.c:316 [inline] __kasan_slab_free+0x12c/0x170 mm/kasan/common.c:455 slab_free_hook mm/slub.c:1474 [inline] slab_free_freelist_hook mm/slub.c:1507 [inline] slab_free mm/slub.c:3072 [inline] kmem_cache_free+0xbe/0x380 mm/slub.c:3088 kfree_skbmem net/core/skbuff.c:622 [inline] kfree_skbmem+0xef/0x1b0 net/core/skbuff.c:616 __kfree_skb net/core/skbuff.c:679 [inline] consume_skb net/core/skbuff.c:837 [inline] consume_skb+0xe1/0x370 net/core/skbuff.c:831 mlxsw_emad_trans_finish+0x64/0x1c0 drivers/net/ethernet/mellanox/mlxsw/core.c:592 mlxsw_emad_transmit_retry.isra.0+0x9d/0xc0 drivers/net/ethernet/mellanox/mlxsw/core.c:613 mlxsw_emad_trans_timeout_work+0x43/0x50 drivers/net/ethernet/mellanox/mlxsw/core.c:625 process_one_work+0xa3e/0x17a0 kernel/workqueue.c:2269 worker_thread+0x9e/0x1050 kernel/workqueue.c:2415 kthread+0x355/0x470 kernel/kthread.c:291 ret_from_fork+0x22/0x30 arch/x86/entry/entry_64.S:293
The buggy address belongs to the object at ffff88804f5703c0 which belongs to the cache skbuff_head_cache of size 224 The buggy address is located 212 bytes inside of 224-byte region [ffff88804f5703c0, ffff88804f5704a0) The buggy address belongs to the page: page:ffffea00013d5c00 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 flags: 0x100000000000200(slab) raw: 0100000000000200 dead000000000100 dead000000000122 ffff88806c625400 raw: 0000000000000000 00000000000c000c 00000001ffffffff 0000000000000000 page dumped because: kasan: bad access detected
Memory state around the buggy address: ffff88804f570380: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb ffff88804f570400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88804f570480: fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc
^ ffff88804f570500: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ffff88804f570580: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
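As an aside, the claim-before-free idea behind the fix can be sketched in standalone C; the kernel code uses atomic_dec_and_test() on trans->active, while this model uses a compare-and-swap for the same effect, and all names below are invented.

/* Both the response path and the failed-retransmit path try to claim the
 * transaction; only the task that wins the atomic exchange may free the
 * resources, so a concurrent finisher cannot cause a double free. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct trans {
	atomic_int active;	/* 1 while the transaction may still be finished */
	void *skb;		/* stands in for trans->tx_skb */
};

static void trans_finish(struct trans *t, const char *who)
{
	int expected = 1;

	/* Only the task that flips active from 1 to 0 may free. */
	if (!atomic_compare_exchange_strong(&t->active, &expected, 0)) {
		printf("%s: already finished elsewhere, skip free\n", who);
		return;
	}
	printf("%s: freeing transaction resources\n", who);
	free(t->skb);
	t->skb = NULL;
}

int main(void)
{
	struct trans t = { .skb = malloc(32) };

	atomic_init(&t.active, 1);
	trans_finish(&t, "retry-timeout path");	/* e.g. failed retransmit */
	trans_finish(&t, "response path");	/* concurrent response: no double free */
	return 0;
}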
Fixes: caf7297e7ab5f ("mlxsw: core: Introduce support for asynchronous EMAD register access") Signed-off-by: Amit Cohen amcohen@nvidia.com Reviewed-by: Jiri Pirko jiri@nvidia.com Signed-off-by: Ido Schimmel idosch@nvidia.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/mellanox/mlxsw/core.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c index f6aa80fe343f5..05e90ef15871c 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core.c +++ b/drivers/net/ethernet/mellanox/mlxsw/core.c @@ -607,6 +607,9 @@ static void mlxsw_emad_transmit_retry(struct mlxsw_core *mlxsw_core, err = mlxsw_emad_transmit(trans->core, trans); if (err == 0) return; + + if (!atomic_dec_and_test(&trans->active)) + return; } else { err = -EIO; }
[ Upstream commit 761a8c58db6bc884994b28cd6d9707b467d680c1 ]
There was a memory corruption bug happening while running the synthetic event selftests:
kmemleak: Cannot insert 0xffff8c196fa2afe5 into the object search tree (overlaps existing) CPU: 5 PID: 6866 Comm: ftracetest Tainted: G W 5.9.0-rc5-test+ #577 Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v03.03 07/14/2016 Call Trace: dump_stack+0x8d/0xc0 create_object.cold+0x3b/0x60 slab_post_alloc_hook+0x57/0x510 ? tracing_map_init+0x178/0x340 __kmalloc+0x1b1/0x390 tracing_map_init+0x178/0x340 event_hist_trigger_func+0x523/0xa40 trigger_process_regex+0xc5/0x110 event_trigger_write+0x71/0xd0 vfs_write+0xca/0x210 ksys_write+0x70/0xf0 do_syscall_64+0x33/0x40 entry_SYSCALL_64_after_hwframe+0x44/0xa9 RIP: 0033:0x7fef0a63a487 Code: 64 89 02 48 c7 c0 ff ff ff ff eb bb 0f 1f 80 00 00 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24 RSP: 002b:00007fff76f18398 EFLAGS: 00000246 ORIG_RAX: 0000000000000001 RAX: ffffffffffffffda RBX: 0000000000000039 RCX: 00007fef0a63a487 RDX: 0000000000000039 RSI: 000055eb3b26d690 RDI: 0000000000000001 RBP: 000055eb3b26d690 R08: 000000000000000a R09: 0000000000000038 R10: 000055eb3b2cdb80 R11: 0000000000000246 R12: 0000000000000039 R13: 00007fef0a70b500 R14: 0000000000000039 R15: 00007fef0a70b700 kmemleak: Kernel memory leak detector disabled kmemleak: Object 0xffff8c196fa2afe0 (size 8): kmemleak: comm "ftracetest", pid 6866, jiffies 4295082531 kmemleak: min_count = 1 kmemleak: count = 0 kmemleak: flags = 0x1 kmemleak: checksum = 0 kmemleak: backtrace: __kmalloc+0x1b1/0x390 tracing_map_init+0x1be/0x340 event_hist_trigger_func+0x523/0xa40 trigger_process_regex+0xc5/0x110 event_trigger_write+0x71/0xd0 vfs_write+0xca/0x210 ksys_write+0x70/0xf0 do_syscall_64+0x33/0x40 entry_SYSCALL_64_after_hwframe+0x44/0xa9
The cause came down to a use of strcat() that was appending a string that had been shortened, but the strcat() did not take that into account.
strcat() is extremely dangerous as it does not care how big the buffer is. Replace it with seq_buf operations that prevent the buffer from being overwritten if what is being written is bigger than the buffer.
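As a purely illustrative aside, the bounded-append idea can be modelled in standalone C; this is not the kernel seq_buf API, and the struct and function names below are invented.

/* Appends are clamped to the buffer size instead of running past it the
 * way strcat() would, and a trailing ';' can be trimmed by adjusting the
 * tracked length rather than writing past the end. */
#include <stdio.h>
#include <string.h>

struct bounded_buf {
	char *buffer;
	size_t size;	/* capacity, excluding the terminating NUL */
	size_t len;	/* bytes written so far */
};

static void bounded_puts(struct bounded_buf *s, const char *str)
{
	size_t n = strlen(str);
	size_t left = s->size - s->len;

	if (n > left)
		n = left;	/* clamp: never write past the buffer */
	memcpy(s->buffer + s->len, str, n);
	s->len += n;
}

int main(void)
{
	char type[32];
	struct bounded_buf s = { .buffer = type, .size = sizeof(type) - 1 };

	bounded_puts(&s, "unsigned long");
	bounded_puts(&s, "[16];");
	if (s.len && type[s.len - 1] == ';')	/* mimic the trailing-';' trim */
		s.len--;
	type[s.len] = '\0';
	printf("%s (len=%zu)\n", type, s.len);
	return 0;
}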
Fixes: 10819e25799a ("tracing: Handle synthetic event array field type checking correctly") Reviewed-by: Tom Zanussi zanussi@kernel.org Tested-by: Tom Zanussi zanussi@kernel.org Signed-off-by: Steven Rostedt (VMware) rostedt@goodmis.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/trace/trace_events_synth.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c index c8892156db341..65e8c27141c02 100644 --- a/kernel/trace/trace_events_synth.c +++ b/kernel/trace/trace_events_synth.c @@ -465,6 +465,7 @@ static struct synth_field *parse_synth_field(int argc, const char **argv, struct synth_field *field; const char *prefix = NULL, *field_type = argv[0], *field_name, *array; int len, ret = 0; + struct seq_buf s; ssize_t size;
if (field_type[0] == ';') @@ -503,13 +504,9 @@ static struct synth_field *parse_synth_field(int argc, const char **argv, field_type++; len = strlen(field_type) + 1;
- if (array) { - int l = strlen(array); + if (array) + len += strlen(array);
- if (l && array[l - 1] == ';') - l--; - len += l; - } if (prefix) len += strlen(prefix);
@@ -518,14 +515,18 @@ static struct synth_field *parse_synth_field(int argc, const char **argv, ret = -ENOMEM; goto free; } + seq_buf_init(&s, field->type, len); if (prefix) - strcat(field->type, prefix); - strcat(field->type, field_type); + seq_buf_puts(&s, prefix); + seq_buf_puts(&s, field_type); if (array) { - strcat(field->type, array); - if (field->type[len - 1] == ';') - field->type[len - 1] = '\0'; + seq_buf_puts(&s, array); + if (s.buffer[s.len - 1] == ';') + s.len--; } + if (WARN_ON_ONCE(!seq_buf_buffer_left(&s))) + goto free; + s.buffer[s.len] = '\0';
size = synth_field_size(field->type); if (size <= 0) {
From: Dan Carpenter dan.carpenter@oracle.com
[ Upstream commit 248c944e2159de4868bef558feea40214aaf8464 ]
The "op" pointer is freed earlier when we call afs_put_operation().
Fixes: e49c7b2f6de7 ("afs: Build an abstraction around an "operation" concept") Signed-off-by: Dan Carpenter dan.carpenter@oracle.com Signed-off-by: David Howells dhowells@redhat.com cc: Colin Ian King colin.king@canonical.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/xattr.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/afs/xattr.c b/fs/afs/xattr.c index 84f3c4f575318..38884d6c57cdc 100644 --- a/fs/afs/xattr.c +++ b/fs/afs/xattr.c @@ -85,7 +85,7 @@ static int afs_xattr_get_acl(const struct xattr_handler *handler, if (acl->size <= size) memcpy(buffer, acl->data, acl->size); else - op->error = -ERANGE; + ret = -ERANGE; } }
From: David Howells dhowells@redhat.com
[ Upstream commit d383e346f97d6bb0d654bb3d63c44ab106d92d29 ]
Fix afs_launder_page() to not clear PG_writeback on the page it is laundering as the flag isn't set in this case.
Fixes: 4343d00872e1 ("afs: Get rid of the afs_writeback record") Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/internal.h | 1 + fs/afs/write.c | 10 ++++++---- 2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 06e617ee4cd1e..c8acb58ac5d8f 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -811,6 +811,7 @@ struct afs_operation { pgoff_t last; /* last page in mapping to deal with */ unsigned first_offset; /* offset into mapping[first] */ unsigned last_to; /* amount of mapping[last] */ + bool laundering; /* Laundering page, PG_writeback not set */ } store; struct { struct iattr *attr; diff --git a/fs/afs/write.c b/fs/afs/write.c index da12abd6db213..b937ec047ec98 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -396,7 +396,8 @@ static void afs_store_data_success(struct afs_operation *op) op->ctime = op->file[0].scb.status.mtime_client; afs_vnode_commit_status(op, &op->file[0]); if (op->error == 0) { - afs_pages_written_back(vnode, op->store.first, op->store.last); + if (!op->store.laundering) + afs_pages_written_back(vnode, op->store.first, op->store.last); afs_stat_v(vnode, n_stores); atomic_long_add((op->store.last * PAGE_SIZE + op->store.last_to) - (op->store.first * PAGE_SIZE + op->store.first_offset), @@ -415,7 +416,7 @@ static const struct afs_operation_ops afs_store_data_operation = { */ static int afs_store_data(struct address_space *mapping, pgoff_t first, pgoff_t last, - unsigned offset, unsigned to) + unsigned offset, unsigned to, bool laundering) { struct afs_vnode *vnode = AFS_FS_I(mapping->host); struct afs_operation *op; @@ -448,6 +449,7 @@ static int afs_store_data(struct address_space *mapping, op->store.last = last; op->store.first_offset = offset; op->store.last_to = to; + op->store.laundering = laundering; op->mtime = vnode->vfs_inode.i_mtime; op->flags |= AFS_OPERATION_UNINTR; op->ops = &afs_store_data_operation; @@ -601,7 +603,7 @@ no_more: if (end > i_size) to = i_size & ~PAGE_MASK;
- ret = afs_store_data(mapping, first, last, offset, to); + ret = afs_store_data(mapping, first, last, offset, to, false); switch (ret) { case 0: ret = count; @@ -921,7 +923,7 @@ int afs_launder_page(struct page *page)
trace_afs_page_dirty(vnode, tracepoint_string("launder"), page->index, priv); - ret = afs_store_data(mapping, page->index, page->index, t, f); + ret = afs_store_data(mapping, page->index, page->index, t, f, true); }
trace_afs_page_dirty(vnode, tracepoint_string("laundered"),
From: Alok Prasad palok@marvell.com
[ Upstream commit a2267f8a52eea9096861affd463f691be0f0e8c9 ]
Fixes a memory leak in the iWARP CM.
Fixes: e411e0587e0d ("RDMA/qedr: Add iWARP connection management functions") Link: https://lore.kernel.org/r/20201021115008.28138-1-palok@marvell.com Signed-off-by: Michal Kalderon michal.kalderon@marvell.com Signed-off-by: Igor Russkikh irusskikh@marvell.com Signed-off-by: Alok Prasad palok@marvell.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/hw/qedr/qedr_iw_cm.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c b/drivers/infiniband/hw/qedr/qedr_iw_cm.c index c7169d2c69e5b..c4bc58736e489 100644 --- a/drivers/infiniband/hw/qedr/qedr_iw_cm.c +++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c @@ -727,6 +727,7 @@ int qedr_iw_destroy_listen(struct iw_cm_id *cm_id) listener->qed_handle);
cm_id->rem_ref(cm_id); + kfree(listener); return rc; }
From: Sascha Hauer s.hauer@pengutronix.de
[ Upstream commit 8e4c309f9f33b76c09daa02b796ef87918eee494 ]
ata_qc_complete_multiple() has to be called with the tags physically active, that is, with the hw tag at bit 0. ap->qc_active has the same tag at bit ATA_TAG_INTERNAL instead, so call ata_qc_get_active() to fix that up. This is done in the vein of 8385d756e114 ("libata: Fix retrieving of active qcs").
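As an aside, a from-memory sketch of the tag fix-up follows; the ATA_TAG_INTERNAL value and the helper body are assumptions for illustration, so check include/linux/libata.h for the real ata_qc_get_active().

/* If the internal command tag is set in the qc_active mask, move it down
 * to bit 0 so the mask matches the hardware's numbering. */
#include <stdint.h>
#include <stdio.h>

#define ATA_TAG_INTERNAL 32	/* assumed value, for the example only */

static uint64_t qc_get_active_sketch(uint64_t qc_active)
{
	if (qc_active & (1ULL << ATA_TAG_INTERNAL)) {
		qc_active &= ~(1ULL << ATA_TAG_INTERNAL);
		qc_active |= 1ULL;	/* hardware sees the internal tag at bit 0 */
	}
	return qc_active;
}

int main(void)
{
	uint64_t active = (1ULL << ATA_TAG_INTERNAL) | (1ULL << 5);

	printf("driver view: %#llx, hw view: %#llx\n",
	       (unsigned long long)active,
	       (unsigned long long)qc_get_active_sketch(active));
	return 0;
}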
Fixes: 28361c403683 ("libata: add extra internal command") Tested-by: Pali Rohár pali@kernel.org Signed-off-by: Sascha Hauer s.hauer@pengutronix.de Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/ata/sata_nv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/ata/sata_nv.c b/drivers/ata/sata_nv.c index eb9dc14e5147a..20190f66ced98 100644 --- a/drivers/ata/sata_nv.c +++ b/drivers/ata/sata_nv.c @@ -2100,7 +2100,7 @@ static int nv_swncq_sdbfis(struct ata_port *ap) pp->dhfis_bits &= ~done_mask; pp->dmafis_bits &= ~done_mask; pp->sdbfis_bits |= done_mask; - ata_qc_complete_multiple(ap, ap->qc_active ^ done_mask); + ata_qc_complete_multiple(ap, ata_qc_get_active(ap) ^ done_mask);
if (!ap->qc_active) { DPRINTK("over\n");
From: Ard Biesheuvel ardb@kernel.org
[ Upstream commit a2d50c1c77aa879af24f9f67b33186737b3d4885 ]
Commit 76085aff29f5 ("efi/libstub/arm64: align PE/COFF sections to segment alignment") increased the PE/COFF section alignment to match the minimum segment alignment of the kernel image, which ensures that the kernel does not need to be moved around in memory by the EFI stub if it was built as relocatable.
However, the first PE/COFF section starts at _stext, which is only 4 KB aligned, and so the section layout is inconsistent. Existing EFI loaders seem to care little about this, but it is better to clean this up.
So let's pad the header to 64 KB to match the PE/COFF section alignment.
Fixes: 76085aff29f5 ("efi/libstub/arm64: align PE/COFF sections to segment alignment") Signed-off-by: Ard Biesheuvel ardb@kernel.org Link: https://lore.kernel.org/r/20201027073209.2897-2-ardb@kernel.org Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/kernel/efi-header.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/efi-header.S b/arch/arm64/kernel/efi-header.S index df67c0f2a077e..a71844fb923ee 100644 --- a/arch/arm64/kernel/efi-header.S +++ b/arch/arm64/kernel/efi-header.S @@ -147,6 +147,6 @@ efi_debug_entry: * correctly at this alignment, we must ensure that .text is * placed at a 4k boundary in the Image to begin with. */ - .align 12 + .balign SEGMENT_ALIGN efi_header_end: .endm
From: David Howells dhowells@redhat.com
[ Upstream commit fa04a40b169fcee615afbae97f71a09332993f64 ]
Fix afs to take a ref on a page when it sets PG_private on it and to drop the ref when removing the flag.
Note that in afs_write_begin(), a lot of the time, PG_private is already set on a page to which we're going to add some data. In such a case, we leave the bit set and mustn't increment the page count.
As suggested by Matthew Wilcox, use attach/detach_page_private() where possible.
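As a purely illustrative aside, the reference pairing these helpers enforce can be modelled in standalone C; struct fake_page and the helpers below are invented and only mimic the behaviour described above.

/* Setting the private flag takes a reference on the page, clearing it
 * drops that reference, and re-attaching to an already-private page must
 * not take another one, so the counts stay balanced. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	int refcount;
	bool page_private;
	unsigned long private_data;
};

static void attach_private(struct fake_page *p, unsigned long data)
{
	p->refcount++;			/* pin the page while it carries private data */
	p->private_data = data;
	p->page_private = true;
}

static unsigned long detach_private(struct fake_page *p)
{
	unsigned long data;

	if (!p->page_private)
		return 0;
	data = p->private_data;
	p->page_private = false;
	p->private_data = 0;
	p->refcount--;			/* drop the pin taken by attach */
	return data;
}

int main(void)
{
	struct fake_page page = { .refcount = 1 };

	attach_private(&page, 0x1);
	/* A later write to an already-private page only updates the data. */
	if (page.page_private)
		page.private_data = 0x2;
	else
		attach_private(&page, 0x2);

	detach_private(&page);
	assert(page.refcount == 1);	/* references stay balanced */
	printf("refcount back to %d\n", page.refcount);
	return 0;
}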
Fixes: 31143d5d515e ("AFS: implement basic file write support") Reported-by: Matthew Wilcox (Oracle) willy@infradead.org Signed-off-by: David Howells dhowells@redhat.com Reviewed-by: Matthew Wilcox (Oracle) willy@infradead.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/dir.c | 12 ++++-------- fs/afs/dir_edit.c | 6 ++---- fs/afs/file.c | 8 ++------ fs/afs/write.c | 18 ++++++++++-------- 4 files changed, 18 insertions(+), 26 deletions(-)
diff --git a/fs/afs/dir.c b/fs/afs/dir.c index 1d2e61e0ab047..1bb5b9d7f0a2c 100644 --- a/fs/afs/dir.c +++ b/fs/afs/dir.c @@ -281,8 +281,7 @@ retry: if (ret < 0) goto error;
- set_page_private(req->pages[i], 1); - SetPagePrivate(req->pages[i]); + attach_page_private(req->pages[i], (void *)1); unlock_page(req->pages[i]); i++; } else { @@ -1975,8 +1974,7 @@ static int afs_dir_releasepage(struct page *page, gfp_t gfp_flags)
_enter("{{%llx:%llu}[%lu]}", dvnode->fid.vid, dvnode->fid.vnode, page->index);
- set_page_private(page, 0); - ClearPagePrivate(page); + detach_page_private(page);
/* The directory will need reloading. */ if (test_and_clear_bit(AFS_VNODE_DIR_VALID, &dvnode->flags)) @@ -2003,8 +2001,6 @@ static void afs_dir_invalidatepage(struct page *page, unsigned int offset, afs_stat_v(dvnode, n_inval);
/* we clean up only if the entire page is being invalidated */ - if (offset == 0 && length == PAGE_SIZE) { - set_page_private(page, 0); - ClearPagePrivate(page); - } + if (offset == 0 && length == PAGE_SIZE) + detach_page_private(page); } diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c index b108528bf010d..2ffe09abae7fc 100644 --- a/fs/afs/dir_edit.c +++ b/fs/afs/dir_edit.c @@ -243,10 +243,8 @@ void afs_edit_dir_add(struct afs_vnode *vnode, index, gfp); if (!page) goto error; - if (!PagePrivate(page)) { - set_page_private(page, 1); - SetPagePrivate(page); - } + if (!PagePrivate(page)) + attach_page_private(page, (void *)1); dir_page = kmap(page); }
diff --git a/fs/afs/file.c b/fs/afs/file.c index 371d1488cc549..bdcf418e4a5c0 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -626,11 +626,9 @@ static void afs_invalidatepage(struct page *page, unsigned int offset, #endif
if (PagePrivate(page)) { - priv = page_private(page); + priv = (unsigned long)detach_page_private(page); trace_afs_page_dirty(vnode, tracepoint_string("inval"), page->index, priv); - set_page_private(page, 0); - ClearPagePrivate(page); } }
@@ -660,11 +658,9 @@ static int afs_releasepage(struct page *page, gfp_t gfp_flags) #endif
if (PagePrivate(page)) { - priv = page_private(page); + priv = (unsigned long)detach_page_private(page); trace_afs_page_dirty(vnode, tracepoint_string("rel"), page->index, priv); - set_page_private(page, 0); - ClearPagePrivate(page); }
/* indicate that the page can be released */ diff --git a/fs/afs/write.c b/fs/afs/write.c index b937ec047ec98..02facb19a0f1d 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -151,8 +151,10 @@ try_again: priv |= f; trace_afs_page_dirty(vnode, tracepoint_string("begin"), page->index, priv); - SetPagePrivate(page); - set_page_private(page, priv); + if (PagePrivate(page)) + set_page_private(page, priv); + else + attach_page_private(page, (void *)priv); _leave(" = 0"); return 0;
@@ -334,10 +336,9 @@ static void afs_pages_written_back(struct afs_vnode *vnode, ASSERTCMP(pv.nr, ==, count);
for (loop = 0; loop < count; loop++) { - priv = page_private(pv.pages[loop]); + priv = (unsigned long)detach_page_private(pv.pages[loop]); trace_afs_page_dirty(vnode, tracepoint_string("clear"), pv.pages[loop]->index, priv); - set_page_private(pv.pages[loop], 0); end_page_writeback(pv.pages[loop]); } first += count; @@ -863,8 +864,10 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf) priv |= 0; /* From */ trace_afs_page_dirty(vnode, tracepoint_string("mkwrite"), vmf->page->index, priv); - SetPagePrivate(vmf->page); - set_page_private(vmf->page, priv); + if (PagePrivate(vmf->page)) + set_page_private(vmf->page, priv); + else + attach_page_private(vmf->page, (void *)priv); file_update_time(file);
sb_end_pagefault(inode->i_sb); @@ -926,10 +929,9 @@ int afs_launder_page(struct page *page) ret = afs_store_data(mapping, page->index, page->index, t, f, true); }
+ priv = (unsigned long)detach_page_private(page); trace_afs_page_dirty(vnode, tracepoint_string("laundered"), page->index, priv); - set_page_private(page, 0); - ClearPagePrivate(page);
#ifdef CONFIG_AFS_FSCACHE if (PageFsCache(page)) {
From: David Howells dhowells@redhat.com
[ Upstream commit 21db2cdc667f744691a407105b7712bc18d74023 ]
Fix the leak of the target page in afs_write_begin() when it fails.
Fixes: 15b4650e55e0 ("afs: convert to new aops") Signed-off-by: David Howells dhowells@redhat.com cc: Nick Piggin npiggin@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/write.c | 23 +++++++++++------------ 1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/fs/afs/write.c b/fs/afs/write.c index 02facb19a0f1d..7fae9f8b38eb3 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -76,7 +76,7 @@ static int afs_fill_page(struct afs_vnode *vnode, struct key *key, */ int afs_write_begin(struct file *file, struct address_space *mapping, loff_t pos, unsigned len, unsigned flags, - struct page **pagep, void **fsdata) + struct page **_page, void **fsdata) { struct afs_vnode *vnode = AFS_FS_I(file_inode(file)); struct page *page; @@ -110,9 +110,6 @@ int afs_write_begin(struct file *file, struct address_space *mapping, SetPageUptodate(page); }
- /* page won't leak in error case: it eventually gets cleaned off LRU */ - *pagep = page; - try_again: /* See if this page is already partially written in a way that we can * merge the new write with. @@ -155,6 +152,7 @@ try_again: set_page_private(page, priv); else attach_page_private(page, (void *)priv); + *_page = page; _leave(" = 0"); return 0;
@@ -164,17 +162,18 @@ try_again: flush_conflicting_write: _debug("flush conflict"); ret = write_one_page(page); - if (ret < 0) { - _leave(" = %d", ret); - return ret; - } + if (ret < 0) + goto error;
ret = lock_page_killable(page); - if (ret < 0) { - _leave(" = %d", ret); - return ret; - } + if (ret < 0) + goto error; goto try_again; + +error: + put_page(page); + _leave(" = %d", ret); + return ret; }
/*
From: David Howells dhowells@redhat.com
[ Upstream commit f792e3ac82fe2c6c863e93187eb7ddfccab68fa7 ]
In afs, page->private is set to indicate the dirty region of a page. This is done in afs_write_begin(), but that can't take account of whether the copy into the page actually worked.
Fix this by moving the change of page->private into afs_write_end().
Fixes: 4343d00872e1 ("afs: Get rid of the afs_writeback record") Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/write.c | 41 ++++++++++++++++++++++++++--------------- 1 file changed, 26 insertions(+), 15 deletions(-)
diff --git a/fs/afs/write.c b/fs/afs/write.c index 7fae9f8b38eb3..f28d85c38cd89 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -135,23 +135,8 @@ try_again: if (!test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags) && (to < f || from > t)) goto flush_conflicting_write; - if (from < f) - f = from; - if (to > t) - t = to; - } else { - f = from; - t = to; }
- priv = (unsigned long)t << AFS_PRIV_SHIFT; - priv |= f; - trace_afs_page_dirty(vnode, tracepoint_string("begin"), - page->index, priv); - if (PagePrivate(page)) - set_page_private(page, priv); - else - attach_page_private(page, (void *)priv); *_page = page; _leave(" = 0"); return 0; @@ -185,6 +170,9 @@ int afs_write_end(struct file *file, struct address_space *mapping, { struct afs_vnode *vnode = AFS_FS_I(file_inode(file)); struct key *key = afs_file_key(file); + unsigned long priv; + unsigned int f, from = pos & (PAGE_SIZE - 1); + unsigned int t, to = from + copied; loff_t i_size, maybe_i_size; int ret;
@@ -216,6 +204,29 @@ int afs_write_end(struct file *file, struct address_space *mapping, SetPageUptodate(page); }
+ if (PagePrivate(page)) { + priv = page_private(page); + f = priv & AFS_PRIV_MAX; + t = priv >> AFS_PRIV_SHIFT; + if (from < f) + f = from; + if (to > t) + t = to; + priv = (unsigned long)t << AFS_PRIV_SHIFT; + priv |= f; + set_page_private(page, priv); + trace_afs_page_dirty(vnode, tracepoint_string("dirty+"), + page->index, priv); + } else { + f = from; + t = to; + priv = (unsigned long)t << AFS_PRIV_SHIFT; + priv |= f; + attach_page_private(page, (void *)priv); + trace_afs_page_dirty(vnode, tracepoint_string("dirty"), + page->index, priv); + } + set_page_dirty(page); if (PageDirty(page)) _debug("dirtied");
From: David Howells dhowells@redhat.com
[ Upstream commit 185f0c7073bd5c78f86265f703f5daf1306ab5a7 ]
The afs filesystem uses page->private to store the dirty range within a page such that in the event of a conflicting 3rd-party write to the server, we write back just the bits that got changed locally.
However, there are a couple of problems with this:
(1) I need a bit to note if the page might be mapped so that partial invalidation doesn't shrink the range.
(2) There aren't necessarily sufficient bits to store the entire range of data altered (say it's a 32-bit system with 64KiB pages or transparent huge pages are in use).
So wrap the accesses in inline functions so that future commits can change how this works.
Also move them out of the tracing header into the in-directory header. There's not really any need for them to be in the tracing header.
Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/internal.h | 28 ++++++++++++++++++++++++++++ fs/afs/write.c | 31 +++++++++++++------------------ include/trace/events/afs.h | 19 +++---------------- 3 files changed, 44 insertions(+), 34 deletions(-)
diff --git a/fs/afs/internal.h b/fs/afs/internal.h index c8acb58ac5d8f..523bf9698ecdc 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -857,6 +857,34 @@ struct afs_vnode_cache_aux { u64 data_version; } __packed;
+/* + * We use page->private to hold the amount of the page that we've written to, + * splitting the field into two parts. However, we need to represent a range + * 0...PAGE_SIZE inclusive, so we can't support 64K pages on a 32-bit system. + */ +#if PAGE_SIZE > 32768 +#define __AFS_PAGE_PRIV_MASK 0xffffffffUL +#define __AFS_PAGE_PRIV_SHIFT 32 +#else +#define __AFS_PAGE_PRIV_MASK 0xffffUL +#define __AFS_PAGE_PRIV_SHIFT 16 +#endif + +static inline size_t afs_page_dirty_from(unsigned long priv) +{ + return priv & __AFS_PAGE_PRIV_MASK; +} + +static inline size_t afs_page_dirty_to(unsigned long priv) +{ + return (priv >> __AFS_PAGE_PRIV_SHIFT) & __AFS_PAGE_PRIV_MASK; +} + +static inline unsigned long afs_page_dirty(size_t from, size_t to) +{ + return ((unsigned long)to << __AFS_PAGE_PRIV_SHIFT) | from; +} + #include <trace/events/afs.h>
/*****************************************************************************/ diff --git a/fs/afs/write.c b/fs/afs/write.c index f28d85c38cd89..ea1768b3c0b56 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -117,8 +117,8 @@ try_again: t = f = 0; if (PagePrivate(page)) { priv = page_private(page); - f = priv & AFS_PRIV_MAX; - t = priv >> AFS_PRIV_SHIFT; + f = afs_page_dirty_from(priv); + t = afs_page_dirty_to(priv); ASSERTCMP(f, <=, t); }
@@ -206,22 +206,18 @@ int afs_write_end(struct file *file, struct address_space *mapping,
if (PagePrivate(page)) { priv = page_private(page); - f = priv & AFS_PRIV_MAX; - t = priv >> AFS_PRIV_SHIFT; + f = afs_page_dirty_from(priv); + t = afs_page_dirty_to(priv); if (from < f) f = from; if (to > t) t = to; - priv = (unsigned long)t << AFS_PRIV_SHIFT; - priv |= f; + priv = afs_page_dirty(f, t); set_page_private(page, priv); trace_afs_page_dirty(vnode, tracepoint_string("dirty+"), page->index, priv); } else { - f = from; - t = to; - priv = (unsigned long)t << AFS_PRIV_SHIFT; - priv |= f; + priv = afs_page_dirty(from, to); attach_page_private(page, (void *)priv); trace_afs_page_dirty(vnode, tracepoint_string("dirty"), page->index, priv); @@ -522,8 +518,8 @@ static int afs_write_back_from_locked_page(struct address_space *mapping, */ start = primary_page->index; priv = page_private(primary_page); - offset = priv & AFS_PRIV_MAX; - to = priv >> AFS_PRIV_SHIFT; + offset = afs_page_dirty_from(priv); + to = afs_page_dirty_to(priv); trace_afs_page_dirty(vnode, tracepoint_string("store"), primary_page->index, priv);
@@ -568,8 +564,8 @@ static int afs_write_back_from_locked_page(struct address_space *mapping, }
priv = page_private(page); - f = priv & AFS_PRIV_MAX; - t = priv >> AFS_PRIV_SHIFT; + f = afs_page_dirty_from(priv); + t = afs_page_dirty_to(priv); if (f != 0 && !test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags)) { unlock_page(page); @@ -870,8 +866,7 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf) */ wait_on_page_writeback(vmf->page);
- priv = (unsigned long)PAGE_SIZE << AFS_PRIV_SHIFT; /* To */ - priv |= 0; /* From */ + priv = afs_page_dirty(0, PAGE_SIZE); trace_afs_page_dirty(vnode, tracepoint_string("mkwrite"), vmf->page->index, priv); if (PagePrivate(vmf->page)) @@ -930,8 +925,8 @@ int afs_launder_page(struct page *page) f = 0; t = PAGE_SIZE; if (PagePrivate(page)) { - f = priv & AFS_PRIV_MAX; - t = priv >> AFS_PRIV_SHIFT; + f = afs_page_dirty_from(priv); + t = afs_page_dirty_to(priv); }
trace_afs_page_dirty(vnode, tracepoint_string("launder"), diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h index 5f0c1cf1ea130..05b5506cd24ca 100644 --- a/include/trace/events/afs.h +++ b/include/trace/events/afs.h @@ -884,19 +884,6 @@ TRACE_EVENT(afs_dir_check_failed, __entry->vnode, __entry->off, __entry->i_size) );
-/* - * We use page->private to hold the amount of the page that we've written to, - * splitting the field into two parts. However, we need to represent a range - * 0...PAGE_SIZE inclusive, so we can't support 64K pages on a 32-bit system. - */ -#if PAGE_SIZE > 32768 -#define AFS_PRIV_MAX 0xffffffff -#define AFS_PRIV_SHIFT 32 -#else -#define AFS_PRIV_MAX 0xffff -#define AFS_PRIV_SHIFT 16 -#endif - TRACE_EVENT(afs_page_dirty, TP_PROTO(struct afs_vnode *vnode, const char *where, pgoff_t page, unsigned long priv), @@ -917,10 +904,10 @@ TRACE_EVENT(afs_page_dirty, __entry->priv = priv; ),
- TP_printk("vn=%p %lx %s %lu-%lu", + TP_printk("vn=%p %lx %s %zx-%zx", __entry->vnode, __entry->page, __entry->where, - __entry->priv & AFS_PRIV_MAX, - __entry->priv >> AFS_PRIV_SHIFT) + afs_page_dirty_from(__entry->priv), + afs_page_dirty_to(__entry->priv)) );
TRACE_EVENT(afs_call_state,
From: David Howells dhowells@redhat.com
[ Upstream commit 65dd2d6072d393a3aa14ded8afa9a12f27d9c8ad ]
Currently, page->private on an afs page is used to store the range of dirtied data within the page, where the range includes the lower bound, but excludes the upper bound (e.g. 0-1 is a range covering a single byte).
This, however, requires a superfluous bit for the last-byte bound so that on a 4KiB page, it can say 0-4096 to indicate the whole page, the idea being that having both numbers the same would indicate an empty range. This is unnecessary as the PG_private bit is clear if it's an empty range (as is PG_dirty).
Alter the way the dirty range is encoded in page->private such that the upper bound is reduced by 1 (e.g. 0-0 then specifies the same single-byte range mentioned above).
Applying this to both bounds frees up two bits, one of which can be used in a future commit.
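As a worked example of the new encoding (a standalone sketch, assuming the 16-bit halves used on a 32-bit box with 64KiB pages; not part of the patch):

    #include <assert.h>

    #define PRIV_SHIFT 16
    #define PRIV_MASK  0xffffUL

    /* Store the last byte covered, not the exclusive end. */
    static unsigned long encode(unsigned long from, unsigned long to)
    {
            return ((to - 1) << PRIV_SHIFT) | from;
    }

    static unsigned long decode_to(unsigned long priv)
    {
            return ((priv >> PRIV_SHIFT) & PRIV_MASK) + 1;
    }

    int main(void)
    {
            /* Whole 64KiB page dirty: "to" = 65536 no longer needs a 17th bit. */
            unsigned long priv = encode(0, 65536);

            assert((priv & PRIV_MASK) == 0);        /* from */
            assert(decode_to(priv) == 65536);       /* 65535 stored, 65536 recovered */
            return 0;
    }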
This allows the afs filesystem to be compiled on ppc32 with 64K pages; without this, the following warnings are seen:
../fs/afs/internal.h: In function 'afs_page_dirty_to':
../fs/afs/internal.h:881:15: warning: right shift count >= width of type [-Wshift-count-overflow]
  881 |  return (priv >> __AFS_PAGE_PRIV_SHIFT) & __AFS_PAGE_PRIV_MASK;
      |               ^~
../fs/afs/internal.h: In function 'afs_page_dirty':
../fs/afs/internal.h:886:28: warning: left shift count >= width of type [-Wshift-count-overflow]
  886 |  return ((unsigned long)to << __AFS_PAGE_PRIV_SHIFT) | from;
      |                            ^~
Fixes: 4343d00872e1 ("afs: Get rid of the afs_writeback record") Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/internal.h | 6 +++--- fs/afs/write.c | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/afs/internal.h b/fs/afs/internal.h index 523bf9698ecdc..fc96c090893f7 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -862,7 +862,7 @@ struct afs_vnode_cache_aux { * splitting the field into two parts. However, we need to represent a range * 0...PAGE_SIZE inclusive, so we can't support 64K pages on a 32-bit system. */ -#if PAGE_SIZE > 32768 +#ifdef CONFIG_64BIT #define __AFS_PAGE_PRIV_MASK 0xffffffffUL #define __AFS_PAGE_PRIV_SHIFT 32 #else @@ -877,12 +877,12 @@ static inline size_t afs_page_dirty_from(unsigned long priv)
static inline size_t afs_page_dirty_to(unsigned long priv) { - return (priv >> __AFS_PAGE_PRIV_SHIFT) & __AFS_PAGE_PRIV_MASK; + return ((priv >> __AFS_PAGE_PRIV_SHIFT) & __AFS_PAGE_PRIV_MASK) + 1; }
static inline unsigned long afs_page_dirty(size_t from, size_t to) { - return ((unsigned long)to << __AFS_PAGE_PRIV_SHIFT) | from; + return ((unsigned long)(to - 1) << __AFS_PAGE_PRIV_SHIFT) | from; }
#include <trace/events/afs.h> diff --git a/fs/afs/write.c b/fs/afs/write.c index ea1768b3c0b56..1a49f5c89342e 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -93,7 +93,7 @@ int afs_write_begin(struct file *file, struct address_space *mapping, /* We want to store information about how much of a page is altered in * page->private. */ - BUILD_BUG_ON(PAGE_SIZE > 32768 && sizeof(page->private) < 8); + BUILD_BUG_ON(PAGE_SIZE - 1 > __AFS_PAGE_PRIV_MASK && sizeof(page->private) < 8);
page = grab_cache_page_write_begin(mapping, index, flags); if (!page)
From: David Howells dhowells@redhat.com
[ Upstream commit f86726a69dec5df6ba051baf9265584419478b64 ]
Fix afs_invalidatepage() to adjust the dirty region recorded in page->private when truncating a page. If the dirty region is entirely removed, then the private data is cleared and the page dirty state is cleared.
Without this, if the page is truncated and then expanded again by truncate, zeros from the expanded but no-longer-dirty region may get written back to the server if the page gets laundered due to a conflicting 3rd-party write.
It mustn't, however, shorten the dirty region of the page if that page is still mmapped and has been marked dirty by afs_page_mkwrite(), so a flag is stored in page->private to record this.
Fixes: 4343d00872e1 ("afs: Get rid of the afs_writeback record") Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/file.c | 71 ++++++++++++++++++++++++++++++++------ fs/afs/internal.h | 16 +++++++-- fs/afs/write.c | 1 + include/trace/events/afs.h | 5 +-- 4 files changed, 79 insertions(+), 14 deletions(-)
diff --git a/fs/afs/file.c b/fs/afs/file.c index bdcf418e4a5c0..5015f2b107824 100644 --- a/fs/afs/file.c +++ b/fs/afs/file.c @@ -600,6 +600,63 @@ static int afs_readpages(struct file *file, struct address_space *mapping, return ret; }
+/* + * Adjust the dirty region of the page on truncation or full invalidation, + * getting rid of the markers altogether if the region is entirely invalidated. + */ +static void afs_invalidate_dirty(struct page *page, unsigned int offset, + unsigned int length) +{ + struct afs_vnode *vnode = AFS_FS_I(page->mapping->host); + unsigned long priv; + unsigned int f, t, end = offset + length; + + priv = page_private(page); + + /* we clean up only if the entire page is being invalidated */ + if (offset == 0 && length == thp_size(page)) + goto full_invalidate; + + /* If the page was dirtied by page_mkwrite(), the PTE stays writable + * and we don't get another notification to tell us to expand it + * again. + */ + if (afs_is_page_dirty_mmapped(priv)) + return; + + /* We may need to shorten the dirty region */ + f = afs_page_dirty_from(priv); + t = afs_page_dirty_to(priv); + + if (t <= offset || f >= end) + return; /* Doesn't overlap */ + + if (f < offset && t > end) + return; /* Splits the dirty region - just absorb it */ + + if (f >= offset && t <= end) + goto undirty; + + if (f < offset) + t = offset; + else + f = end; + if (f == t) + goto undirty; + + priv = afs_page_dirty(f, t); + set_page_private(page, priv); + trace_afs_page_dirty(vnode, tracepoint_string("trunc"), page->index, priv); + return; + +undirty: + trace_afs_page_dirty(vnode, tracepoint_string("undirty"), page->index, priv); + clear_page_dirty_for_io(page); +full_invalidate: + priv = (unsigned long)detach_page_private(page); + trace_afs_page_dirty(vnode, tracepoint_string("inval"), page->index, priv); +} + /* * invalidate part or all of a page * - release a page and clean up its private data if offset is 0 (indicating @@ -608,29 +665,23 @@ static int afs_readpages(struct file *file, struct address_space *mapping, static void afs_invalidatepage(struct page *page, unsigned int offset, unsigned int length) { - struct afs_vnode *vnode = AFS_FS_I(page->mapping->host); - unsigned long priv; - _enter("{%lu},%u,%u", page->index, offset, length);
BUG_ON(!PageLocked(page));
+#ifdef CONFIG_AFS_FSCACHE /* we clean up only if the entire page is being invalidated */ if (offset == 0 && length == PAGE_SIZE) { -#ifdef CONFIG_AFS_FSCACHE if (PageFsCache(page)) { struct afs_vnode *vnode = AFS_FS_I(page->mapping->host); fscache_wait_on_page_write(vnode->cache, page); fscache_uncache_page(vnode->cache, page); } + } #endif
- if (PagePrivate(page)) { - priv = (unsigned long)detach_page_private(page); - trace_afs_page_dirty(vnode, tracepoint_string("inval"), - page->index, priv); - } - } + if (PagePrivate(page)) + afs_invalidate_dirty(page, offset, length);
_leave(""); } diff --git a/fs/afs/internal.h b/fs/afs/internal.h index fc96c090893f7..dbe4120e9a422 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -863,11 +863,13 @@ struct afs_vnode_cache_aux { * 0...PAGE_SIZE inclusive, so we can't support 64K pages on a 32-bit system. */ #ifdef CONFIG_64BIT -#define __AFS_PAGE_PRIV_MASK 0xffffffffUL +#define __AFS_PAGE_PRIV_MASK 0x7fffffffUL #define __AFS_PAGE_PRIV_SHIFT 32 +#define __AFS_PAGE_PRIV_MMAPPED 0x80000000UL #else -#define __AFS_PAGE_PRIV_MASK 0xffffUL +#define __AFS_PAGE_PRIV_MASK 0x7fffUL #define __AFS_PAGE_PRIV_SHIFT 16 +#define __AFS_PAGE_PRIV_MMAPPED 0x8000UL #endif
static inline size_t afs_page_dirty_from(unsigned long priv) @@ -885,6 +887,16 @@ static inline unsigned long afs_page_dirty(size_t from, size_t to) return ((unsigned long)(to - 1) << __AFS_PAGE_PRIV_SHIFT) | from; }
+static inline unsigned long afs_page_dirty_mmapped(unsigned long priv) +{ + return priv | __AFS_PAGE_PRIV_MMAPPED; +} + +static inline bool afs_is_page_dirty_mmapped(unsigned long priv) +{ + return priv & __AFS_PAGE_PRIV_MMAPPED; +} + #include <trace/events/afs.h>
/*****************************************************************************/ diff --git a/fs/afs/write.c b/fs/afs/write.c index 1a49f5c89342e..a2511e3ad2cca 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -867,6 +867,7 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf) wait_on_page_writeback(vmf->page);
priv = afs_page_dirty(0, PAGE_SIZE); + priv = afs_page_dirty_mmapped(priv); trace_afs_page_dirty(vnode, tracepoint_string("mkwrite"), vmf->page->index, priv); if (PagePrivate(vmf->page)) diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h index 05b5506cd24ca..13c05e28c0b6c 100644 --- a/include/trace/events/afs.h +++ b/include/trace/events/afs.h @@ -904,10 +904,11 @@ TRACE_EVENT(afs_page_dirty, __entry->priv = priv; ),
- TP_printk("vn=%p %lx %s %zx-%zx", + TP_printk("vn=%p %lx %s %zx-%zx%s", __entry->vnode, __entry->page, __entry->where, afs_page_dirty_from(__entry->priv), - afs_page_dirty_to(__entry->priv)) + afs_page_dirty_to(__entry->priv), + afs_is_page_dirty_mmapped(__entry->priv) ? " M" : "") );
TRACE_EVENT(afs_call_state,
From: David Howells dhowells@redhat.com
[ Upstream commit 2d9900f26ad61e63a34f239bc76c80d2f8a6ff41 ]
The dirty region bounds stored in page->private on an afs page are 15 bits on a 32-bit box and can, at most, represent a range of up to 32K within a 32K page with a resolution of 1 byte. This is a problem for powerpc32 with 64K pages enabled.
Further, transparent huge pages may get up to 2M, which will be a problem for the afs filesystem on all 32-bit arches in the future.
Fix this by decreasing the resolution. For the moment, a 64K page will have a resolution determined from PAGE_SIZE. In the future, the page will need to be passed in to the helper functions so that the page size can be assessed and the resolution determined dynamically.
Note that this might not be the ideal way to handle this, since it may allow some leakage of undirtied zero bytes to the server's copy in the case of a 3rd-party conflict. Fixing that would require a separately allocated record and is a more complicated fix.
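A standalone sketch of the reduced-resolution encoding (illustrative only; the constants assume a 64KiB page with 15-bit halves, mirroring afs_page_dirty_resolution() in the diff below), showing how the decoded bounds round outwards to the granule and may take in a few undirtied bytes:

    #include <stdio.h>

    #define PRIV_SHIFT  16          /* bit position of the "to" half */
    #define PRIV_MASK   0x7fffUL    /* 15 usable bits per half */
    #define RESOLUTION  1           /* 64KiB page: PAGE_SHIFT(16) - (PRIV_SHIFT - 1) */

    static unsigned long encode(unsigned long from, unsigned long to)
    {
            return (((to - 1) >> RESOLUTION) << PRIV_SHIFT) | (from >> RESOLUTION);
    }

    int main(void)
    {
            unsigned long priv = encode(3, 9);      /* bytes 3..8 were dirtied */
            unsigned long from = (priv & PRIV_MASK) << RESOLUTION;
            unsigned long to = (((priv >> PRIV_SHIFT) & PRIV_MASK) + 1) << RESOLUTION;

            /* Prints 2..10: both bounds are rounded outwards to the 2-byte granule. */
            printf("decoded dirty range: %lu..%lu\n", from, to);
            return 0;
    }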
Fixes: 4343d00872e1 ("afs: Get rid of the afs_writeback record") Reported-by: kernel test robot lkp@intel.com Signed-off-by: David Howells dhowells@redhat.com Reviewed-by: Matthew Wilcox (Oracle) willy@infradead.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/internal.h | 24 ++++++++++++++++++++---- fs/afs/write.c | 5 ----- 2 files changed, 20 insertions(+), 9 deletions(-)
diff --git a/fs/afs/internal.h b/fs/afs/internal.h index dbe4120e9a422..17336cbb8419f 100644 --- a/fs/afs/internal.h +++ b/fs/afs/internal.h @@ -860,7 +860,8 @@ struct afs_vnode_cache_aux { /* * We use page->private to hold the amount of the page that we've written to, * splitting the field into two parts. However, we need to represent a range - * 0...PAGE_SIZE inclusive, so we can't support 64K pages on a 32-bit system. + * 0...PAGE_SIZE, so we reduce the resolution if the size of the page + * exceeds what we can encode. */ #ifdef CONFIG_64BIT #define __AFS_PAGE_PRIV_MASK 0x7fffffffUL @@ -872,19 +873,34 @@ struct afs_vnode_cache_aux { #define __AFS_PAGE_PRIV_MMAPPED 0x8000UL #endif
+static inline unsigned int afs_page_dirty_resolution(void) +{ + int shift = PAGE_SHIFT - (__AFS_PAGE_PRIV_SHIFT - 1); + return (shift > 0) ? shift : 0; +} + static inline size_t afs_page_dirty_from(unsigned long priv) { - return priv & __AFS_PAGE_PRIV_MASK; + unsigned long x = priv & __AFS_PAGE_PRIV_MASK; + + /* The lower bound is inclusive */ + return x << afs_page_dirty_resolution(); }
static inline size_t afs_page_dirty_to(unsigned long priv) { - return ((priv >> __AFS_PAGE_PRIV_SHIFT) & __AFS_PAGE_PRIV_MASK) + 1; + unsigned long x = (priv >> __AFS_PAGE_PRIV_SHIFT) & __AFS_PAGE_PRIV_MASK; + + /* The upper bound is immediately beyond the region */ + return (x + 1) << afs_page_dirty_resolution(); }
static inline unsigned long afs_page_dirty(size_t from, size_t to) { - return ((unsigned long)(to - 1) << __AFS_PAGE_PRIV_SHIFT) | from; + unsigned int res = afs_page_dirty_resolution(); + from >>= res; + to = (to - 1) >> res; + return (to << __AFS_PAGE_PRIV_SHIFT) | from; }
static inline unsigned long afs_page_dirty_mmapped(unsigned long priv) diff --git a/fs/afs/write.c b/fs/afs/write.c index a2511e3ad2cca..50371207f3273 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -90,11 +90,6 @@ int afs_write_begin(struct file *file, struct address_space *mapping, _enter("{%llx:%llu},{%lx},%u,%u", vnode->fid.vid, vnode->fid.vnode, index, from, to);
- /* We want to store information about how much of a page is altered in - * page->private. - */ - BUILD_BUG_ON(PAGE_SIZE - 1 > __AFS_PAGE_PRIV_MASK && sizeof(page->private) < 8); - page = grab_cache_page_write_begin(mapping, index, flags); if (!page) return -ENOMEM;
From: Laurent Vivier lvivier@redhat.com
[ Upstream commit 4a6a42db53aae049a8a64d4b273761bc80c46ebf ]
vdpa_sim generates a random MAC address, but it is never used by upper layers because the VIRTIO_NET_F_MAC bit is not set in the features list.
Because of that, virtio-net always regenerates a random MAC address each time it is loaded whereas the address should only change on vdpa_sim load/unload.
Fix that by adding VIRTIO_NET_F_MAC in the features list of vdpa_sim.
Fixes: 2c53d0f64c06 ("vdpasim: vDPA device simulator") Cc: jasowang@redhat.com Signed-off-by: Laurent Vivier lvivier@redhat.com Link: https://lore.kernel.org/r/20201029122050.776445-2-lvivier@redhat.com Signed-off-by: Michael S. Tsirkin mst@redhat.com Acked-by: Jason Wang jasowang@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/vdpa/vdpa_sim/vdpa_sim.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index 62d6403271450..01ef22d03a8c1 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -60,7 +60,8 @@ struct vdpasim_virtqueue {
static u64 vdpasim_features = (1ULL << VIRTIO_F_ANY_LAYOUT) | (1ULL << VIRTIO_F_VERSION_1) | - (1ULL << VIRTIO_F_ACCESS_PLATFORM); + (1ULL << VIRTIO_F_ACCESS_PLATFORM) | + (1ULL << VIRTIO_NET_F_MAC);
/* State of each vdpasim device */ struct vdpasim {
From: Georgi Djakov georgi.djakov@linaro.org
[ Upstream commit 5be1805dc3961ce0465bcb0beab85fe8580af08d ]
After enabling interconnect scaling for display on the db845c board, in certain configurations the board hangs, while the following errors are observed on the console:
Error sending AMC RPMH requests (-110)
qcom_rpmh TCS Busy, retrying RPMH message send: addr=0x50000
qcom_rpmh TCS Busy, retrying RPMH message send: addr=0x50000
qcom_rpmh TCS Busy, retrying RPMH message send: addr=0x50000
...
In this specific case, the above is related to one of the sequencers being stuck, while client drivers are returning from probe and trying to disable the currently unused clock and interconnect resources. Generally we want to keep the multimedia NoC enabled like the rest of the NoCs, so let's set the keepalive flag on it too.
Fixes: aae57773fbe0 ("interconnect: qcom: sdm845: Split qnodes into their respective NoCs") Reported-by: Amit Pundir amit.pundir@linaro.org Reviewed-by: Mike Tipton mdtipton@codeaurora.org Tested-by: John Stultz john.stultz@linaro.org Link: https://lore.kernel.org/r/20201012194034.26944-1-georgi.djakov@linaro.org Signed-off-by: Georgi Djakov georgi.djakov@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/interconnect/qcom/sdm845.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/interconnect/qcom/sdm845.c b/drivers/interconnect/qcom/sdm845.c index f6c7b969520d0..86f08c0f4c41b 100644 --- a/drivers/interconnect/qcom/sdm845.c +++ b/drivers/interconnect/qcom/sdm845.c @@ -151,7 +151,7 @@ DEFINE_QBCM(bcm_mc0, "MC0", true, &ebi); DEFINE_QBCM(bcm_sh0, "SH0", true, &qns_llcc); DEFINE_QBCM(bcm_mm0, "MM0", false, &qns_mem_noc_hf); DEFINE_QBCM(bcm_sh1, "SH1", false, &qns_apps_io); -DEFINE_QBCM(bcm_mm1, "MM1", false, &qxm_camnoc_hf0_uncomp, &qxm_camnoc_hf1_uncomp, &qxm_camnoc_sf_uncomp, &qxm_camnoc_hf0, &qxm_camnoc_hf1, &qxm_mdp0, &qxm_mdp1); +DEFINE_QBCM(bcm_mm1, "MM1", true, &qxm_camnoc_hf0_uncomp, &qxm_camnoc_hf1_uncomp, &qxm_camnoc_sf_uncomp, &qxm_camnoc_hf0, &qxm_camnoc_hf1, &qxm_mdp0, &qxm_mdp1); DEFINE_QBCM(bcm_sh2, "SH2", false, &qns_memnoc_snoc); DEFINE_QBCM(bcm_mm2, "MM2", false, &qns2_mem_noc); DEFINE_QBCM(bcm_sh3, "SH3", false, &acm_tcu);
From: Peter Zijlstra peterz@infradead.org
[ Upstream commit f8e48a3dca060e80f672d398d181db1298fbc86c ]
It is valid (albeit uncommon) to call local_irq_enable() without first having called local_irq_disable(). In this case we enter lockdep_hardirqs_on*() with IRQs enabled and trip a preemption warning for using __this_cpu_read().
Use this_cpu_read() instead to avoid the warning.
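Roughly, the difference between the two accessors (an illustrative sketch with a made-up per-cpu variable, not the patched lockdep code):

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned int, demo_counter);

    static void demo(void)
    {
            /*
             * __this_cpu_read() assumes the caller already disabled
             * preemption; with CONFIG_DEBUG_PREEMPT it complains when
             * called from preemptible context, which is exactly what
             * happens when local_irq_enable() runs with IRQs already on.
             */
            unsigned int a = __this_cpu_read(demo_counter);

            /*
             * this_cpu_read() is safe from any context (a single insn on
             * x86, internal protection elsewhere), so no warning fires.
             */
            unsigned int b = this_cpu_read(demo_counter);

            (void)a;
            (void)b;
    }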
Fixes: 4d004099a6 ("lockdep: Fix lockdep recursion") Reported-by: syzbot+53f8ce8bbc07924b6417@syzkaller.appspotmail.com Reported-by: kernel test robot lkp@intel.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/locking/lockdep.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index 85d15f0362dc5..3eb35ad1b5241 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -3681,7 +3681,7 @@ void lockdep_hardirqs_on_prepare(unsigned long ip) if (unlikely(in_nmi())) return;
- if (unlikely(__this_cpu_read(lockdep_recursion))) + if (unlikely(this_cpu_read(lockdep_recursion))) return;
if (unlikely(lockdep_hardirqs_enabled())) { @@ -3750,7 +3750,7 @@ void noinstr lockdep_hardirqs_on(unsigned long ip) goto skip_checks; }
- if (unlikely(__this_cpu_read(lockdep_recursion))) + if (unlikely(this_cpu_read(lockdep_recursion))) return;
if (lockdep_hardirqs_enabled()) {
From: Tang Bin tangbin@cmss.chinamobile.com
[ Upstream commit 32d174d2d5eb318c34ff36771adefabdf227c186 ]
If platform_get_irq() fails, the negative value it returns is not detected here, so fix the error handling in tegra_ehci_probe().
Fixes: 79ad3b5add4a ("usb: host: Add EHCI driver for NVIDIA Tegra SoCs") Acked-by: Alan Stern stern@rowland.harvard.edu Acked-by: Thierry Reding treding@nvidia.com Signed-off-by: Tang Bin tangbin@cmss.chinamobile.com Link: https://lore.kernel.org/r/20201026090657.49988-1-tangbin@cmss.chinamobile.co... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/ehci-tegra.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/host/ehci-tegra.c b/drivers/usb/host/ehci-tegra.c index e077b2ca53c51..869d9c4de5fcd 100644 --- a/drivers/usb/host/ehci-tegra.c +++ b/drivers/usb/host/ehci-tegra.c @@ -479,8 +479,8 @@ static int tegra_ehci_probe(struct platform_device *pdev) u_phy->otg->host = hcd_to_bus(hcd);
irq = platform_get_irq(pdev, 0); - if (!irq) { - err = -ENODEV; + if (irq < 0) { + err = irq; goto cleanup_phy; }
From: Mateusz Nosek mateusznosek0@gmail.com
[ Upstream commit 921c7ebd1337d1a46783d7e15a850e12aed2eaa0 ]
If should_fail_futex() returns true in wake_futex_pi(), then the 'ret' variable is set to -EFAULT and then immediately overwritten. So the failure injection is non-functional.
Fix it by actually leaving the function and returning -EFAULT.
The Fixes tag is somewhat blurry because the initial commit which introduced failure injection was already sloppy, but the commit mentioned below broke it completely.
[ tglx: Massaged changelog ]
Fixes: 6b4f4bc9cb22 ("locking/futex: Allow low-level atomic operations to return -EAGAIN") Signed-off-by: Mateusz Nosek mateusznosek0@gmail.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Link: https://lore.kernel.org/r/20200927000858.24219-1-mateusznosek0@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/futex.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/futex.c b/kernel/futex.c index a5876694a60eb..39681bf8b06ca 100644 --- a/kernel/futex.c +++ b/kernel/futex.c @@ -1502,8 +1502,10 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_ */ newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
- if (unlikely(should_fail_futex(true))) + if (unlikely(should_fail_futex(true))) { ret = -EFAULT; + goto out_unlock; + }
ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval); if (!ret && (curval != uval)) {
From: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com
[ Upstream commit ccaea15296f9773abd43aaa17ee4b88848e4a505 ]
If we fail to allocate a vmemmap list entry, we don't keep track of the allocated vmemmap block buffer. Hence on section deactivation we skip freeing the vmemmap block buffer, which results in a memory leak.
Signed-off-by: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200731113500.248306-1-aneesh.kumar@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/powerpc/mm/init_64.c | 35 ++++++++++++++++++++++++++++------- 1 file changed, 28 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c index 8459056cce671..2ae42c2a5cf04 100644 --- a/arch/powerpc/mm/init_64.c +++ b/arch/powerpc/mm/init_64.c @@ -162,16 +162,16 @@ static __meminit struct vmemmap_backing * vmemmap_list_alloc(int node) return next++; }
-static __meminit void vmemmap_list_populate(unsigned long phys, - unsigned long start, - int node) +static __meminit int vmemmap_list_populate(unsigned long phys, + unsigned long start, + int node) { struct vmemmap_backing *vmem_back;
vmem_back = vmemmap_list_alloc(node); if (unlikely(!vmem_back)) { - WARN_ON(1); - return; + pr_debug("vmemap list allocation failed\n"); + return -ENOMEM; }
vmem_back->phys = phys; @@ -179,6 +179,7 @@ static __meminit void vmemmap_list_populate(unsigned long phys, vmem_back->list = vmemmap_list;
vmemmap_list = vmem_back; + return 0; }
static bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long start, @@ -199,6 +200,7 @@ static bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long star int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node, struct vmem_altmap *altmap) { + bool altmap_alloc; unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
/* Align to the page size of the linear mapping. */ @@ -228,13 +230,32 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node, p = vmemmap_alloc_block_buf(page_size, node, altmap); if (!p) pr_debug("altmap block allocation failed, falling back to system memory"); + else + altmap_alloc = true; } - if (!p) + if (!p) { p = vmemmap_alloc_block_buf(page_size, node, NULL); + altmap_alloc = false; + } if (!p) return -ENOMEM;
- vmemmap_list_populate(__pa(p), start, node); + if (vmemmap_list_populate(__pa(p), start, node)) { + /* + * If we don't populate vmemap list, we don't have + * the ability to free the allocated vmemmap + * pages in section_deactivate. Hence free them + * here. + */ + int nr_pfns = page_size >> PAGE_SHIFT; + unsigned long page_order = get_order(page_size); + + if (altmap_alloc) + vmem_altmap_free(altmap, nr_pfns); + else + free_pages((unsigned long)p, page_order); + return -ENOMEM; + }
pr_debug(" * %016lx..%016lx allocated at %p\n", start, start + page_size, p);
From: Oliver O'Halloran oohall@gmail.com
[ Upstream commit f6bac19cf65c5be21d14a0c9684c8f560f2096dd ]
When building with W=1 we get the following warning:
arch/powerpc/platforms/powernv/smp.c: In function ‘pnv_smp_cpu_kill_self’:
arch/powerpc/platforms/powernv/smp.c:276:16: error: suggest braces around empty body in an ‘if’ statement [-Werror=empty-body]
  276 |                     cpu, srr1);
      |                ^
cc1: all warnings being treated as errors
The full context is this block:
        if (srr1 && !generic_check_cpu_restart(cpu))
                DBG("CPU%d Unexpected exit while offline srr1=%lx!\n",
                    cpu, srr1);
When building with DEBUG undefined, DBG() expands to nothing, and GCC emits the warning due to the lack of braces around an empty statement.
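A minimal illustration of why the empty expansion trips -Wempty-body while the do { } while (0) form does not (hypothetical macros, not the powernv code):

    #define DBG_EMPTY(fmt...)
    #define DBG_FIXED(fmt...)   do { } while (0)

    void example(int cpu, unsigned long srr1)
    {
            if (srr1)
                    DBG_EMPTY("CPU%d srr1=%lx\n", cpu, srr1);   /* expands to "if (srr1) ;" -> warning */

            if (srr1)
                    DBG_FIXED("CPU%d srr1=%lx\n", cpu, srr1);   /* a real (empty) statement, no warning */
    }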
Signed-off-by: Oliver O'Halloran oohall@gmail.com Reviewed-by: Joel Stanley joel@jms.id.au Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200804005410.146094-2-oohall@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/powerpc/platforms/powernv/smp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c index b2ba3e95bda73..bbf361f23ae86 100644 --- a/arch/powerpc/platforms/powernv/smp.c +++ b/arch/powerpc/platforms/powernv/smp.c @@ -43,7 +43,7 @@ #include <asm/udbg.h> #define DBG(fmt...) udbg_printf(fmt) #else -#define DBG(fmt...) +#define DBG(fmt...) do { } while (0) #endif
static void pnv_smp_setup_cpu(int cpu)
From: Jason Gunthorpe jgg@nvidia.com
[ Upstream commit f553246f7f794675da1794ae7ee07d1f35e561ae ]
Currently, a driver destroy that fails during uobject abort triggers a WARN_ON, and the core then goes ahead and destroys the uobject anyhow, leaking any driver memory.
The only place that leaks driver memory should be during FD close() in uverbs_destroy_ufile_hw().
Drivers are only allowed to fail destroying uobjects if they guarantee that destroy will eventually succeed. uverbs_destroy_ufile_hw() provides the loop to give the driver that chance.
Link: https://lore.kernel.org/r/20200902081708.746631-1-leon@kernel.org Signed-off-by: Leon Romanovsky leonro@nvidia.com Signed-off-by: Jason Gunthorpe jgg@nvidia.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/infiniband/core/rdma_core.c | 30 ++++++++++++++--------------- include/rdma/ib_verbs.h | 5 ----- 2 files changed, 15 insertions(+), 20 deletions(-)
diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c index 6d3ed7c6e19eb..3962da54ffbf4 100644 --- a/drivers/infiniband/core/rdma_core.c +++ b/drivers/infiniband/core/rdma_core.c @@ -130,17 +130,6 @@ static int uverbs_destroy_uobject(struct ib_uobject *uobj, lockdep_assert_held(&ufile->hw_destroy_rwsem); assert_uverbs_usecnt(uobj, UVERBS_LOOKUP_WRITE);
- if (reason == RDMA_REMOVE_ABORT_HWOBJ) { - reason = RDMA_REMOVE_ABORT; - ret = uobj->uapi_object->type_class->destroy_hw(uobj, reason, - attrs); - /* - * Drivers are not permitted to ignore RDMA_REMOVE_ABORT, see - * ib_is_destroy_retryable, cleanup_retryable == false here. - */ - WARN_ON(ret); - } - if (reason == RDMA_REMOVE_ABORT) { WARN_ON(!list_empty(&uobj->list)); WARN_ON(!uobj->context); @@ -674,11 +663,22 @@ void rdma_alloc_abort_uobject(struct ib_uobject *uobj, bool hw_obj_valid) { struct ib_uverbs_file *ufile = uobj->ufile; + int ret; + + if (hw_obj_valid) { + ret = uobj->uapi_object->type_class->destroy_hw( + uobj, RDMA_REMOVE_ABORT, attrs); + /* + * If the driver couldn't destroy the object then go ahead and + * commit it. Leaking objects that can't be destroyed is only + * done during FD close after the driver has a few more tries to + * destroy it. + */ + if (WARN_ON(ret)) + return rdma_alloc_commit_uobject(uobj, attrs); + }
- uverbs_destroy_uobject(uobj, - hw_obj_valid ? RDMA_REMOVE_ABORT_HWOBJ : - RDMA_REMOVE_ABORT, - attrs); + uverbs_destroy_uobject(uobj, RDMA_REMOVE_ABORT, attrs);
/* Matches the down_read in rdma_alloc_begin_uobject */ up_read(&ufile->hw_destroy_rwsem); diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 5b4f0efc4241f..ef7b786b8675c 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -1463,11 +1463,6 @@ enum rdma_remove_reason { RDMA_REMOVE_DRIVER_REMOVE, /* uobj is being cleaned-up before being committed */ RDMA_REMOVE_ABORT, - /* - * uobj has been fully created, with the uobj->object set, but is being - * cleaned up before being comitted - */ - RDMA_REMOVE_ABORT_HWOBJ, };
struct ib_rdmacg_object {
From: Chao Yu yuchao0@huawei.com
[ Upstream commit 0e2b7385cb59e566520cfd0a04b4b53bc9461e98 ]
As 5kft 5kft@5kft.org reported:
kworker/u9:3: page allocation failure: order:9, mode:0x40c40(GFP_NOFS|__GFP_COMP), nodemask=(null),cpuset=/,mems_allowed=0
CPU: 3 PID: 8168 Comm: kworker/u9:3 Tainted: G C 5.8.3-sunxi #trunk
Hardware name: Allwinner sun8i Family
Workqueue: f2fs_post_read_wq f2fs_post_read_work
[<c010d6d5>] (unwind_backtrace) from [<c0109a55>] (show_stack+0x11/0x14)
[<c0109a55>] (show_stack) from [<c056d489>] (dump_stack+0x75/0x84)
[<c056d489>] (dump_stack) from [<c0243b53>] (warn_alloc+0xa3/0x104)
[<c0243b53>] (warn_alloc) from [<c024473b>] (__alloc_pages_nodemask+0xb87/0xc40)
[<c024473b>] (__alloc_pages_nodemask) from [<c02267c5>] (kmalloc_order+0x19/0x38)
[<c02267c5>] (kmalloc_order) from [<c02267fd>] (kmalloc_order_trace+0x19/0x90)
[<c02267fd>] (kmalloc_order_trace) from [<c047c665>] (zstd_init_decompress_ctx+0x21/0x88)
[<c047c665>] (zstd_init_decompress_ctx) from [<c047e9cf>] (f2fs_decompress_pages+0x97/0x228)
[<c047e9cf>] (f2fs_decompress_pages) from [<c045d0ab>] (__read_end_io+0xfb/0x130)
[<c045d0ab>] (__read_end_io) from [<c045d141>] (f2fs_post_read_work+0x61/0x84)
[<c045d141>] (f2fs_post_read_work) from [<c0130b2f>] (process_one_work+0x15f/0x3b0)
[<c0130b2f>] (process_one_work) from [<c0130e7b>] (worker_thread+0xfb/0x3e0)
[<c0130e7b>] (worker_thread) from [<c0135c3b>] (kthread+0xeb/0x10c)
[<c0135c3b>] (kthread) from [<c0100159>]
zstd may allocate a large amount of memory for {,de}compression, which may cause file copy failures on low-end devices that have very little memory.
For decompression, let's just allocate a properly sized workspace based on the current file's cluster size instead of the maximum cluster size.
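As a rough illustration of the sizing change (assuming 4KiB pages; the exact workspace bound depends on the zstd library, but it scales with the window size, so small clusters no longer force the order:9 allocation seen in the report above):

    /* Old: the decompression window was always sized for the maximum cluster. */
    unsigned long old_window = 4096UL << 8;     /* MAX_COMPRESS_LOG_SIZE -> 1 MiB  */

    /* New: sized for this file's cluster, e.g. the minimum log size of 2. */
    unsigned long new_window = 4096UL << 2;     /* dic->log_cluster_size -> 16 KiB */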
Reported-by: 5kft 5kft@5kft.org Signed-off-by: Chao Yu yuchao0@huawei.com Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/compress.c | 7 ++++--- fs/f2fs/f2fs.h | 2 +- 2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c index 1dfb126a0cb20..1cd4b3f9c9f8c 100644 --- a/fs/f2fs/compress.c +++ b/fs/f2fs/compress.c @@ -382,16 +382,17 @@ static int zstd_init_decompress_ctx(struct decompress_io_ctx *dic) ZSTD_DStream *stream; void *workspace; unsigned int workspace_size; + unsigned int max_window_size = + MAX_COMPRESS_WINDOW_SIZE(dic->log_cluster_size);
- workspace_size = ZSTD_DStreamWorkspaceBound(MAX_COMPRESS_WINDOW_SIZE); + workspace_size = ZSTD_DStreamWorkspaceBound(max_window_size);
workspace = f2fs_kvmalloc(F2FS_I_SB(dic->inode), workspace_size, GFP_NOFS); if (!workspace) return -ENOMEM;
- stream = ZSTD_initDStream(MAX_COMPRESS_WINDOW_SIZE, - workspace, workspace_size); + stream = ZSTD_initDStream(max_window_size, workspace, workspace_size); if (!stream) { printk_ratelimited("%sF2FS-fs (%s): %s ZSTD_initDStream failed\n", KERN_ERR, F2FS_I_SB(dic->inode)->sb->s_id, diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index d9e52a7f3702f..98c4b166f192b 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -1394,7 +1394,7 @@ struct decompress_io_ctx { #define NULL_CLUSTER ((unsigned int)(~0)) #define MIN_COMPRESS_LOG_SIZE 2 #define MAX_COMPRESS_LOG_SIZE 8 -#define MAX_COMPRESS_WINDOW_SIZE ((PAGE_SIZE) << MAX_COMPRESS_LOG_SIZE) +#define MAX_COMPRESS_WINDOW_SIZE(log_size) ((PAGE_SIZE) << (log_size))
struct f2fs_sb_info { struct super_block *sb; /* pointer to VFS super block */
From: Ravi Bangoria ravi.bangoria@linux.ibm.com
[ Upstream commit 9b6b7c680cc20971444d9f836e49fc98848bcd0a ]
When the kernel is compiled with CONFIG_HAVE_HW_BREAKPOINT=N, the user can still create a watchpoint using PPC_PTRACE_SETHWDEBUG, with limited functionality. But such watchpoints never fire because of the missing privilege settings. Fix that.
It's safe to set HW_BRK_TYPE_PRIV_ALL because we don't really leak any kernel addresses in the signal info. Setting HW_BRK_TYPE_PRIV_ALL will also help find scenarios where the kernel accesses user memory.
Reported-by: Pedro Miraglia Franco de Carvalho pedromfc@linux.ibm.com Suggested-by: Pedro Miraglia Franco de Carvalho pedromfc@linux.ibm.com Signed-off-by: Ravi Bangoria ravi.bangoria@linux.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200902042945.129369-4-ravi.bangoria@linux.ibm.co... Signed-off-by: Sasha Levin sashal@kernel.org --- arch/powerpc/kernel/ptrace/ptrace-noadv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/ptrace/ptrace-noadv.c b/arch/powerpc/kernel/ptrace/ptrace-noadv.c index 8bd8d8de5c40b..a570782e954be 100644 --- a/arch/powerpc/kernel/ptrace/ptrace-noadv.c +++ b/arch/powerpc/kernel/ptrace/ptrace-noadv.c @@ -217,7 +217,7 @@ long ppc_set_hwdebug(struct task_struct *child, struct ppc_hw_breakpoint *bp_inf return -EIO;
brk.address = ALIGN_DOWN(bp_info->addr, HW_BREAKPOINT_SIZE); - brk.type = HW_BRK_TYPE_TRANSLATE; + brk.type = HW_BRK_TYPE_TRANSLATE | HW_BRK_TYPE_PRIV_ALL; brk.len = DABR_MAX_LEN; brk.hw_len = DABR_MAX_LEN; if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_READ)
From: Nicholas Piggin npiggin@gmail.com
[ Upstream commit d53c3dfb23c45f7d4f910c3a3ca84bf0a99c6143 ]
Reading and modifying current->mm and current->active_mm and switching mm should be done with irqs off, to prevent races seeing an intermediate state.
This is similar to commit 38cf307c1f20 ("mm: fix kthread_use_mm() vs TLB invalidate"). At exec-time when the new mm is activated, the old one should usually be single-threaded and no longer used, unless something else is holding an mm_users reference (which may be possible).
Absent other mm_users, there is also a race with preemption and lazy tlb switching. Consider the kernel_execve case where the current thread is using a lazy tlb active mm:
    call_usermodehelper()
      kernel_execve()
        old_mm = current->mm;
        active_mm = current->active_mm;
    *** preempt *** --------------------> schedule()
                                            prev->active_mm = NULL;
                                            mmdrop(prev active_mm);
                                            ...
                    <-------------------- schedule()
        current->mm = mm;
        current->active_mm = mm;
        if (!old_mm)
            mmdrop(active_mm);
If we switch back to the kernel thread from a different mm, there is a double free of the old active_mm, and a missing free of the new one.
Closing this race only requires interrupts to be disabled while ->mm and ->active_mm are being switched, but the TLB problem requires also holding interrupts off over activate_mm. Unfortunately not all archs can do that yet, e.g., arm defers the switch if irqs are disabled and expects finish_arch_post_lock_switch() to be called to complete the flush; um takes a blocking lock in activate_mm().
So as a first step, disable interrupts across the mm/active_mm updates to close the lazy tlb preempt race, and provide an arch option to extend that to activate_mm which allows architectures doing IPI based TLB shootdowns to close the second race.
This is a bit ugly, but in the interest of fixing the bug and backporting before all architectures are converted this is a compromise.
Signed-off-by: Nicholas Piggin npiggin@gmail.com Acked-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200914045219.3736466-2-npiggin@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/Kconfig | 7 +++++++ fs/exec.c | 17 +++++++++++++++-- 2 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig index af14a567b493f..94821e3f94d16 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -414,6 +414,13 @@ config MMU_GATHER_NO_GATHER bool depends on MMU_GATHER_TABLE_FREE
+config ARCH_WANT_IRQS_OFF_ACTIVATE_MM + bool + help + Temporary select until all architectures can be converted to have + irqs disabled over activate_mm. Architectures that do IPI based TLB + shootdowns should enable this. + config ARCH_HAVE_NMI_SAFE_CMPXCHG bool
diff --git a/fs/exec.c b/fs/exec.c index 07910f5032e74..3622681489864 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -1131,11 +1131,24 @@ static int exec_mmap(struct mm_struct *mm) }
task_lock(tsk); - active_mm = tsk->active_mm; membarrier_exec_mmap(mm); - tsk->mm = mm; + + local_irq_disable(); + active_mm = tsk->active_mm; tsk->active_mm = mm; + tsk->mm = mm; + /* + * This prevents preemption while active_mm is being loaded and + * it and mm are being updated, which could cause problems for + * lazy tlb mm refcounting when these are updated by context + * switches. Not all architectures can handle irqs off over + * activate_mm yet. + */ + if (!IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM)) + local_irq_enable(); activate_mm(active_mm, mm); + if (IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM)) + local_irq_enable(); tsk->mm->vmacache_seqnum = 0; vmacache_flush(tsk); task_unlock(tsk);
From: Nicholas Piggin npiggin@gmail.com
[ Upstream commit 66acd46080bd9e5ad2be4b0eb1d498d5145d058e ]
powerpc uses IPIs in some situations to switch a kernel thread away from a lazy tlb mm, which is subject to the TLB flushing race described in the changelog introducing ARCH_WANT_IRQS_OFF_ACTIVATE_MM.
Signed-off-by: Nicholas Piggin npiggin@gmail.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200914045219.3736466-3-npiggin@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/powerpc/Kconfig | 1 + arch/powerpc/include/asm/mmu_context.h | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 2b15b4870565d..857b258de8aa5 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -148,6 +148,7 @@ config PPC select ARCH_USE_QUEUED_RWLOCKS if PPC_QUEUED_SPINLOCKS select ARCH_USE_QUEUED_SPINLOCKS if PPC_QUEUED_SPINLOCKS select ARCH_WANT_IPC_PARSE_VERSION + select ARCH_WANT_IRQS_OFF_ACTIVATE_MM select ARCH_WEAK_RELEASE_ACQUIRE select BINFMT_ELF select BUILDTIME_TABLE_SORT diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h index 7f3658a973846..e02aa793420b8 100644 --- a/arch/powerpc/include/asm/mmu_context.h +++ b/arch/powerpc/include/asm/mmu_context.h @@ -244,7 +244,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, */ static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next) { - switch_mm(prev, next, current); + switch_mm_irqs_off(prev, next, current); }
/* We don't currently use enter_lazy_tlb() for anything */
From: Nicholas Piggin npiggin@gmail.com
[ Upstream commit bafb056ce27940c9994ea905336aa8f27b4f7275 ]
The de facto (and apparently uncommented) standard for using an mm had, thanks to this code in sparc if nothing else, been that you must have a reference on mm_users *and that reference must have been obtained with mmget()*, i.e., from a thread with a reference to mm_users that had used the mm.
The introduction of mmget_not_zero() in commit d2005e3f41d4 ("userfaultfd: don't pin the user memory in userfaultfd_file_create()") allowed mm_count holders to operate on user mappings asynchronously from the actual threads using the mm, but they were not to load those mappings into their TLB (i.e., walking vmas and page tables is okay, kthread_use_mm() is not).
io_uring 2b188cc1bb857 ("Add io_uring IO interface") added code which does a kthread_use_mm() from a mmget_not_zero() refcount.
The problem with this is code which previously assumed mm == current->mm and mm->mm_users == 1 implies the mm will remain single-threaded at least until this thread creates another mm_users reference, has now broken.
arch/sparc/kernel/smp_64.c:
        if (atomic_read(&mm->mm_users) == 1) {
                cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
                goto local_flush_and_out;
        }
vs fs/io_uring.c
        if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL) ||
                     !mmget_not_zero(ctx->sqo_mm)))
                return -EFAULT;
        kthread_use_mm(ctx->sqo_mm);
mmget_not_zero() could come in right after the mm_users == 1 test, then kthread_use_mm() which sets its CPU in the mm_cpumask. That update could be lost if cpumask_copy() occurs afterward.
I propose we fix this by allowing mmget_not_zero() to be a first-class reference, and not have this obscure undocumented and unchecked restriction.
The basic fix for sparc64 is to remove its mm_cpumask clearing code. The optimisation could be effectively restored by sending IPIs to mm_cpumask members and having them remove themselves from mm_cpumask. This is more tricky so I leave it as an exercise for someone with a sparc64 SMP. powerpc has a (currently similarly broken) example.
Signed-off-by: Nicholas Piggin npiggin@gmail.com Acked-by: David S. Miller davem@davemloft.net Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200914045219.3736466-4-npiggin@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/sparc/kernel/smp_64.c | 65 ++++++++------------------------------ 1 file changed, 14 insertions(+), 51 deletions(-)
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c index e286e2badc8a4..e38d8bf454e86 100644 --- a/arch/sparc/kernel/smp_64.c +++ b/arch/sparc/kernel/smp_64.c @@ -1039,38 +1039,9 @@ void smp_fetch_global_pmu(void) * are flush_tlb_*() routines, and these run after flush_cache_*() * which performs the flushw. * - * The SMP TLB coherency scheme we use works as follows: - * - * 1) mm->cpu_vm_mask is a bit mask of which cpus an address - * space has (potentially) executed on, this is the heuristic - * we use to avoid doing cross calls. - * - * Also, for flushing from kswapd and also for clones, we - * use cpu_vm_mask as the list of cpus to make run the TLB. - * - * 2) TLB context numbers are shared globally across all processors - * in the system, this allows us to play several games to avoid - * cross calls. - * - * One invariant is that when a cpu switches to a process, and - * that processes tsk->active_mm->cpu_vm_mask does not have the - * current cpu's bit set, that tlb context is flushed locally. - * - * If the address space is non-shared (ie. mm->count == 1) we avoid - * cross calls when we want to flush the currently running process's - * tlb state. This is done by clearing all cpu bits except the current - * processor's in current->mm->cpu_vm_mask and performing the - * flush locally only. This will force any subsequent cpus which run - * this task to flush the context from the local tlb if the process - * migrates to another cpu (again). - * - * 3) For shared address spaces (threads) and swapping we bite the - * bullet for most cases and perform the cross call (but only to - * the cpus listed in cpu_vm_mask). - * - * The performance gain from "optimizing" away the cross call for threads is - * questionable (in theory the big win for threads is the massive sharing of - * address space state across processors). + * mm->cpu_vm_mask is a bit mask of which cpus an address + * space has (potentially) executed on, this is the heuristic + * we use to limit cross calls. */
/* This currently is only used by the hugetlb arch pre-fault @@ -1080,18 +1051,13 @@ void smp_fetch_global_pmu(void) void smp_flush_tlb_mm(struct mm_struct *mm) { u32 ctx = CTX_HWBITS(mm->context); - int cpu = get_cpu();
- if (atomic_read(&mm->mm_users) == 1) { - cpumask_copy(mm_cpumask(mm), cpumask_of(cpu)); - goto local_flush_and_out; - } + get_cpu();
smp_cross_call_masked(&xcall_flush_tlb_mm, ctx, 0, 0, mm_cpumask(mm));
-local_flush_and_out: __flush_tlb_mm(ctx, SECONDARY_CONTEXT);
put_cpu(); @@ -1114,17 +1080,15 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long { u32 ctx = CTX_HWBITS(mm->context); struct tlb_pending_info info; - int cpu = get_cpu(); + + get_cpu();
info.ctx = ctx; info.nr = nr; info.vaddrs = vaddrs;
- if (mm == current->mm && atomic_read(&mm->mm_users) == 1) - cpumask_copy(mm_cpumask(mm), cpumask_of(cpu)); - else - smp_call_function_many(mm_cpumask(mm), tlb_pending_func, - &info, 1); + smp_call_function_many(mm_cpumask(mm), tlb_pending_func, + &info, 1);
__flush_tlb_pending(ctx, nr, vaddrs);
@@ -1134,14 +1098,13 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr) { unsigned long context = CTX_HWBITS(mm->context); - int cpu = get_cpu();
- if (mm == current->mm && atomic_read(&mm->mm_users) == 1) - cpumask_copy(mm_cpumask(mm), cpumask_of(cpu)); - else - smp_cross_call_masked(&xcall_flush_tlb_page, - context, vaddr, 0, - mm_cpumask(mm)); + get_cpu(); + + smp_cross_call_masked(&xcall_flush_tlb_page, + context, vaddr, 0, + mm_cpumask(mm)); + __flush_tlb_page(context, vaddr);
put_cpu();
From: Zhang Qilong zhangqilong3@huawei.com
[ Upstream commit 9b66482282888d02832b7d90239e1cdb18e4b431 ]
Add the missing trace exit in f2fs_sync_dirty_inodes().
Signed-off-by: Zhang Qilong zhangqilong3@huawei.com Reviewed-by: Chao Yu yuchao0@huawei.com Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/checkpoint.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c index ff807e14c8911..bf190a718ca6c 100644 --- a/fs/f2fs/checkpoint.c +++ b/fs/f2fs/checkpoint.c @@ -1047,8 +1047,12 @@ int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type) get_pages(sbi, is_dir ? F2FS_DIRTY_DENTS : F2FS_DIRTY_DATA)); retry: - if (unlikely(f2fs_cp_error(sbi))) + if (unlikely(f2fs_cp_error(sbi))) { + trace_f2fs_sync_dirty_inodes_exit(sbi->sb, is_dir, + get_pages(sbi, is_dir ? + F2FS_DIRTY_DENTS : F2FS_DIRTY_DATA)); return -EIO; + }
spin_lock(&sbi->inode_lock[type]);
From: Chao Yu yuchao0@huawei.com
[ Upstream commit 07eb1d699452de04e9d389ff17fb8fc9e975d7bf ]
sbi->devs is initialized only if the image enables the multiple device feature or the blkzoned feature. If the blkzoned feature flag is set by a fuzzer on a non-blkzoned device, we suffer the panic below:
get_zone_idx fs/f2fs/segment.c:4892 [inline] f2fs_usable_zone_blks_in_seg fs/f2fs/segment.c:4943 [inline] f2fs_usable_blks_in_seg+0x39b/0xa00 fs/f2fs/segment.c:4999 Call Trace: check_block_count+0x69/0x4e0 fs/f2fs/segment.h:704 build_sit_entries fs/f2fs/segment.c:4403 [inline] f2fs_build_segment_manager+0x51da/0xa370 fs/f2fs/segment.c:5100 f2fs_fill_super+0x3880/0x6ff0 fs/f2fs/super.c:3684 mount_bdev+0x32e/0x3f0 fs/super.c:1417 legacy_get_tree+0x105/0x220 fs/fs_context.c:592 vfs_get_tree+0x89/0x2f0 fs/super.c:1547 do_new_mount fs/namespace.c:2896 [inline] path_mount+0x12ae/0x1e70 fs/namespace.c:3216 do_mount fs/namespace.c:3229 [inline] __do_sys_mount fs/namespace.c:3437 [inline] __se_sys_mount fs/namespace.c:3414 [inline] __x64_sys_mount+0x27f/0x300 fs/namespace.c:3414 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
Add a sanity check for inconsistency among these factors: the blkzoned flag, the device path and the device characteristics, to avoid the above panic.
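For illustration, a minimal userspace sketch of the same consistency rule (hypothetical names, not the actual f2fs code): an image whose feature flags claim a zoned layout must be rejected when the backing device is not zoned.

#include <stdbool.h>
#include <stdio.h>

#define FEATURE_BLKZONED 0x0040   /* hypothetical flag value */

struct fake_super {
	unsigned int feature_flags;
};

/* Returns 0 if consistent, -1 if the image lies about being zoned. */
static int check_zoned_consistency(const struct fake_super *sb, bool bdev_is_zoned)
{
	if ((sb->feature_flags & FEATURE_BLKZONED) && !bdev_is_zoned) {
		fprintf(stderr, "zoned feature set but device is not zoned\n");
		return -1;
	}
	return 0;
}

int main(void)
{
	struct fake_super sb = { .feature_flags = FEATURE_BLKZONED };

	/* A fuzzed image on a plain (non-zoned) device must be rejected. */
	return check_zoned_consistency(&sb, false) ? 1 : 0;
}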
Signed-off-by: Chao Yu yuchao0@huawei.com Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/super.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index dfa072fa80815..be5050292caa5 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -2832,6 +2832,12 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi, segment_count, dev_seg_count); return -EFSCORRUPTED; } + } else { + if (__F2FS_HAS_FEATURE(raw_super, F2FS_FEATURE_BLKZONED) && + !bdev_is_zoned(sbi->sb->s_bdev)) { + f2fs_info(sbi, "Zoned block device path is missing"); + return -EFSCORRUPTED; + } }
if (secs_per_zone > total_sections || !secs_per_zone) {
From: Chao Yu yuchao0@huawei.com
[ Upstream commit 6d7ab88a98c1b7a47c228f8ffb4f44d631eaf284 ]
As syzbot reported:
Call Trace: __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0x21c/0x280 lib/dump_stack.c:118 kmsan_report+0xf7/0x1e0 mm/kmsan/kmsan_report.c:122 __msan_warning+0x58/0xa0 mm/kmsan/kmsan_instr.c:219 f2fs_lookup+0xe05/0x1a80 fs/f2fs/namei.c:503 lookup_open fs/namei.c:3082 [inline] open_last_lookups fs/namei.c:3177 [inline] path_openat+0x2729/0x6a90 fs/namei.c:3365 do_filp_open+0x2b8/0x710 fs/namei.c:3395 do_sys_openat2+0xa88/0x1140 fs/open.c:1168 do_sys_open fs/open.c:1184 [inline] __do_compat_sys_openat fs/open.c:1242 [inline] __se_compat_sys_openat+0x2a4/0x310 fs/open.c:1240 __ia32_compat_sys_openat+0x56/0x70 fs/open.c:1240 do_syscall_32_irqs_on arch/x86/entry/common.c:80 [inline] __do_fast_syscall_32+0x129/0x180 arch/x86/entry/common.c:139 do_fast_syscall_32+0x6a/0xc0 arch/x86/entry/common.c:162 do_SYSENTER_32+0x73/0x90 arch/x86/entry/common.c:205 entry_SYSENTER_compat_after_hwframe+0x4d/0x5c
In f2fs_lookup(), @res_page could be used before being initialized, because in __f2fs_find_entry(), once F2FS_I(dir)->i_current_depth has been fuzzed to zero, @res_page is never initialized, causing this KMSAN warning. Relocate the @res_page initialization to fix this bug.
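The shape of the fix is the common out-parameter pattern: initialize the output pointer once at function entry so every early-return path leaves it in a defined state. A minimal sketch with hypothetical names, not the f2fs code:

#include <stddef.h>
#include <stdio.h>

struct entry { int id; };

static struct entry table[4] = { {1}, {2}, {3}, {4} };

/* Hypothetical lookup: *res is set exactly once at entry, so callers can
 * safely inspect it on every path, including early returns. */
static struct entry *find_entry(int id, struct entry **res)
{
	*res = NULL;            /* defined on all exit paths */

	if (id < 0)
		return NULL;    /* early exit: *res already NULL */

	for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
		if (table[i].id == id) {
			*res = &table[i];
			break;
		}
	}
	return *res;
}

int main(void)
{
	struct entry *res;

	find_entry(-1, &res);
	printf("res=%p\n", (void *)res);  /* always initialized */
	return 0;
}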
Reported-by: syzbot+0eac6f0bbd558fd866d7@syzkaller.appspotmail.com Signed-off-by: Chao Yu yuchao0@huawei.com Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/dir.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c index 069f498af1e38..ceb4431b56690 100644 --- a/fs/f2fs/dir.c +++ b/fs/f2fs/dir.c @@ -357,16 +357,15 @@ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir, unsigned int max_depth; unsigned int level;
+ *res_page = NULL; + if (f2fs_has_inline_dentry(dir)) { - *res_page = NULL; de = f2fs_find_in_inline_dir(dir, fname, res_page); goto out; }
- if (npages == 0) { - *res_page = NULL; + if (npages == 0) goto out; - }
max_depth = F2FS_I(dir)->i_current_depth; if (unlikely(max_depth > MAX_DIR_HASH_DEPTH)) { @@ -377,7 +376,6 @@ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir, }
for (level = 0; level < max_depth; level++) { - *res_page = NULL; de = find_in_level(dir, level, fname, res_page); if (de || IS_ERR(*res_page)) break;
From: Chao Yu yuchao0@huawei.com
[ Upstream commit 6a257471fa42c8c9c04a875cd3a2a22db148e0f0 ]
As syzbot reported:
kernel BUG at fs/f2fs/segment.h:657! invalid opcode: 0000 [#1] PREEMPT SMP KASAN CPU: 1 PID: 16220 Comm: syz-executor.0 Not tainted 5.9.0-rc5-syzkaller #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 RIP: 0010:f2fs_ra_meta_pages+0xa51/0xdc0 fs/f2fs/segment.h:657 Call Trace: build_sit_entries fs/f2fs/segment.c:4195 [inline] f2fs_build_segment_manager+0x4b8a/0xa3c0 fs/f2fs/segment.c:4779 f2fs_fill_super+0x377d/0x6b80 fs/f2fs/super.c:3633 mount_bdev+0x32e/0x3f0 fs/super.c:1417 legacy_get_tree+0x105/0x220 fs/fs_context.c:592 vfs_get_tree+0x89/0x2f0 fs/super.c:1547 do_new_mount fs/namespace.c:2875 [inline] path_mount+0x1387/0x2070 fs/namespace.c:3192 do_mount fs/namespace.c:3205 [inline] __do_sys_mount fs/namespace.c:3413 [inline] __se_sys_mount fs/namespace.c:3390 [inline] __x64_sys_mount+0x27f/0x300 fs/namespace.c:3390 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46 entry_SYSCALL_64_after_hwframe+0x44/0xa9
@blkno in f2fs_ra_meta_pages() could exceed the max segment count, causing a panic in the following sanity check in current_sit_addr(). Add a check to avoid this issue.
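The fix follows the usual bounds-check-before-translate pattern. A minimal userspace sketch with hypothetical names (not the f2fs code):

#include <stdio.h>

#define TOTAL_SEGS 64          /* hypothetical segment count */

static unsigned long seg_addr[TOTAL_SEGS];

/* Hypothetical readahead step: validate the block number against the
 * segment count before using it to compute an address, instead of
 * tripping an assertion deeper in the lookup. */
static int prefetch_segment(unsigned int blkno)
{
	if (blkno >= TOTAL_SEGS)
		return -1;              /* out of range: stop readahead */

	return (int)seg_addr[blkno];    /* safe table access */
}

int main(void)
{
	printf("%d\n", prefetch_segment(1000));  /* rejected, no crash */
	return 0;
}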
Reported-by: syzbot+3698081bcf0bb2d12174@syzkaller.appspotmail.com Signed-off-by: Chao Yu yuchao0@huawei.com Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/checkpoint.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c index bf190a718ca6c..0b7aec059f112 100644 --- a/fs/f2fs/checkpoint.c +++ b/fs/f2fs/checkpoint.c @@ -243,6 +243,8 @@ int f2fs_ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages, blkno * NAT_ENTRY_PER_BLOCK); break; case META_SIT: + if (unlikely(blkno >= TOTAL_SEGS(sbi))) + goto out; /* get sit block addr */ fio.new_blkaddr = current_sit_addr(sbi, blkno * SIT_ENTRY_PER_BLOCK);
From: Vasily Gorbik gor@linux.ibm.com
[ Upstream commit 2835c2ea95d50625108e47a459e1a47f6be836ce ]
Currently we overflow save_area_sync and write over save_area_async. Although this is not a real problem, make startup_pgm_check_handler consistent with the late pgm check handler and store [%r0,%r7] directly into gpregs_save_area.
Reviewed-by: Sven Schnelle svens@linux.ibm.com Signed-off-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/s390/boot/head.S | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/arch/s390/boot/head.S b/arch/s390/boot/head.S index dae10961d0724..1a2c2b1ed9649 100644 --- a/arch/s390/boot/head.S +++ b/arch/s390/boot/head.S @@ -360,22 +360,23 @@ ENTRY(startup_kdump) # the save area and does disabled wait with a faulty address. # ENTRY(startup_pgm_check_handler) - stmg %r0,%r15,__LC_SAVE_AREA_SYNC - la %r1,4095 - stctg %c0,%c15,__LC_CREGS_SAVE_AREA-4095(%r1) - mvc __LC_GPREGS_SAVE_AREA-4095(128,%r1),__LC_SAVE_AREA_SYNC - mvc __LC_PSW_SAVE_AREA-4095(16,%r1),__LC_PGM_OLD_PSW + stmg %r8,%r15,__LC_SAVE_AREA_SYNC + la %r8,4095 + stctg %c0,%c15,__LC_CREGS_SAVE_AREA-4095(%r8) + stmg %r0,%r7,__LC_GPREGS_SAVE_AREA-4095(%r8) + mvc __LC_GPREGS_SAVE_AREA-4095+64(64,%r8),__LC_SAVE_AREA_SYNC + mvc __LC_PSW_SAVE_AREA-4095(16,%r8),__LC_PGM_OLD_PSW mvc __LC_RETURN_PSW(16),__LC_PGM_OLD_PSW ni __LC_RETURN_PSW,0xfc # remove IO and EX bits ni __LC_RETURN_PSW+1,0xfb # remove MCHK bit oi __LC_RETURN_PSW+1,0x2 # set wait state bit - larl %r2,.Lold_psw_disabled_wait - stg %r2,__LC_PGM_NEW_PSW+8 - l %r15,.Ldump_info_stack-.Lold_psw_disabled_wait(%r2) + larl %r9,.Lold_psw_disabled_wait + stg %r9,__LC_PGM_NEW_PSW+8 + l %r15,.Ldump_info_stack-.Lold_psw_disabled_wait(%r9) brasl %r14,print_pgm_check_info .Lold_psw_disabled_wait: - la %r1,4095 - lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1) + la %r8,4095 + lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r8) lpswe __LC_RETURN_PSW # disabled wait .Ldump_info_stack: .long 0x5000 + PAGE_SIZE - STACK_FRAME_OVERHEAD
From: Chao Yu yuchao0@huawei.com
[ Upstream commit 519a5a2f37b850f4eb86674a10d143088670a390 ]
A compressed inode and a normal inode have different layouts, so we should disallow enabling compression on a non-empty file to avoid a race condition while the inode's .i_addr array is being parsed and updated.
Signed-off-by: Chao Yu yuchao0@huawei.com [Jaegeuk Kim: Fix missing condition] Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/file.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index 8a422400e824d..4ec10256dc67f 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -1836,6 +1836,8 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask) if (iflags & F2FS_COMPR_FL) { if (!f2fs_may_compress(inode)) return -EINVAL; + if (S_ISREG(inode->i_mode) && inode->i_size) + return -EINVAL;
set_compress_context(inode); }
From: Harald Freudenberger freude@linux.ibm.com
[ Upstream commit e0332629e33d1926c93348d918aaaf451ef9a16b ]
Revisit the ap queue error handling: based on discussions and evaluations with the firmware folks, here is a rework of the response code handling for all the AP instructions. The idea is to distinguish between failures caused by some kind of invalid request, where a retry makes no sense, and failures where another attempt to send the very same request may succeed. The first case is handled by returning EINVAL to the userspace application. The second case results in retries within the zcrypt API, controlled by a per-message retry counter.
Revisit the zcrypt error handling: similarly, based on discussions with the firmware people, here comes a rework of the handling of all the reply codes. The main point is that there are only very few cases left where a zcrypt device queue is switched offline. It should never be the case that an AP reply message is 'unknown' to the device driver, as that indicates a total mismatch between the device driver and the crypto card firmware. In all other cases, the code distinguishes between failures caused by an invalid message (see above - EINVAL) and failures of the infrastructure (see above - EAGAIN).
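A minimal sketch of that retry policy in userspace C, with hypothetical names rather than the zcrypt API: permanent failures (EINVAL) are returned immediately, while transient ones (EAGAIN) are retried a bounded number of times.

#include <errno.h>
#include <stdio.h>

#define MAX_RETRIES 3

/* Hypothetical request submission: returns 0, -EINVAL (malformed, never
 * retry) or -EAGAIN (infrastructure hiccup, worth retrying). */
static int submit_request(int attempt)
{
	return (attempt < 2) ? -EAGAIN : 0;   /* succeed on the third try */
}

static int send_with_retries(void)
{
	for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
		int rc = submit_request(attempt);

		if (rc == -EAGAIN)
			continue;     /* transient: retry the same request */
		return rc;            /* success or permanent -EINVAL */
	}
	return -EAGAIN;               /* retries exhausted */
}

int main(void)
{
	printf("rc=%d\n", send_with_retries());
	return 0;
}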
Signed-off-by: Harald Freudenberger freude@linux.ibm.com Signed-off-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/s390/crypto/ap_bus.h | 1 + drivers/s390/crypto/ap_queue.c | 8 +-- drivers/s390/crypto/zcrypt_debug.h | 8 +++ drivers/s390/crypto/zcrypt_error.h | 88 +++++++++--------------- drivers/s390/crypto/zcrypt_msgtype50.c | 50 +++++++------- drivers/s390/crypto/zcrypt_msgtype6.c | 92 +++++++++++++------------- 6 files changed, 116 insertions(+), 131 deletions(-)
diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h index 1ea046324e8f6..c4afca0d773c6 100644 --- a/drivers/s390/crypto/ap_bus.h +++ b/drivers/s390/crypto/ap_bus.h @@ -50,6 +50,7 @@ static inline int ap_test_bit(unsigned int *ptr, unsigned int nr) #define AP_RESPONSE_NO_FIRST_PART 0x13 #define AP_RESPONSE_MESSAGE_TOO_BIG 0x15 #define AP_RESPONSE_REQ_FAC_NOT_INST 0x16 +#define AP_RESPONSE_INVALID_DOMAIN 0x42
/* * Known device types diff --git a/drivers/s390/crypto/ap_queue.c b/drivers/s390/crypto/ap_queue.c index 688ebebbf98cb..99f73bbb1c751 100644 --- a/drivers/s390/crypto/ap_queue.c +++ b/drivers/s390/crypto/ap_queue.c @@ -237,6 +237,9 @@ static enum ap_sm_wait ap_sm_write(struct ap_queue *aq) case AP_RESPONSE_RESET_IN_PROGRESS: aq->sm_state = AP_SM_STATE_RESET_WAIT; return AP_SM_WAIT_TIMEOUT; + case AP_RESPONSE_INVALID_DOMAIN: + AP_DBF(DBF_WARN, "AP_RESPONSE_INVALID_DOMAIN on NQAP\n"); + fallthrough; case AP_RESPONSE_MESSAGE_TOO_BIG: case AP_RESPONSE_REQ_FAC_NOT_INST: list_del_init(&ap_msg->list); @@ -278,11 +281,6 @@ static enum ap_sm_wait ap_sm_reset(struct ap_queue *aq) aq->sm_state = AP_SM_STATE_RESET_WAIT; aq->interrupt = AP_INTR_DISABLED; return AP_SM_WAIT_TIMEOUT; - case AP_RESPONSE_BUSY: - return AP_SM_WAIT_TIMEOUT; - case AP_RESPONSE_Q_NOT_AVAIL: - case AP_RESPONSE_DECONFIGURED: - case AP_RESPONSE_CHECKSTOPPED: default: aq->sm_state = AP_SM_STATE_BORKED; return AP_SM_WAIT_NONE; diff --git a/drivers/s390/crypto/zcrypt_debug.h b/drivers/s390/crypto/zcrypt_debug.h index 241dbb5f75bf3..3225489a1c411 100644 --- a/drivers/s390/crypto/zcrypt_debug.h +++ b/drivers/s390/crypto/zcrypt_debug.h @@ -21,6 +21,14 @@
#define ZCRYPT_DBF(...) \ debug_sprintf_event(zcrypt_dbf_info, ##__VA_ARGS__) +#define ZCRYPT_DBF_ERR(...) \ + debug_sprintf_event(zcrypt_dbf_info, DBF_ERR, ##__VA_ARGS__) +#define ZCRYPT_DBF_WARN(...) \ + debug_sprintf_event(zcrypt_dbf_info, DBF_WARN, ##__VA_ARGS__) +#define ZCRYPT_DBF_INFO(...) \ + debug_sprintf_event(zcrypt_dbf_info, DBF_INFO, ##__VA_ARGS__) +#define ZCRYPT_DBF_DBG(...) \ + debug_sprintf_event(zcrypt_dbf_info, DBF_DEBUG, ##__VA_ARGS__)
extern debug_info_t *zcrypt_dbf_info;
diff --git a/drivers/s390/crypto/zcrypt_error.h b/drivers/s390/crypto/zcrypt_error.h index 54a04f8c38ef9..39e626e3a3794 100644 --- a/drivers/s390/crypto/zcrypt_error.h +++ b/drivers/s390/crypto/zcrypt_error.h @@ -52,7 +52,6 @@ struct error_hdr { #define REP82_ERROR_INVALID_COMMAND 0x30 #define REP82_ERROR_MALFORMED_MSG 0x40 #define REP82_ERROR_INVALID_SPECIAL_CMD 0x41 -#define REP82_ERROR_INVALID_DOMAIN_PRECHECK 0x42 #define REP82_ERROR_RESERVED_FIELDO 0x50 /* old value */ #define REP82_ERROR_WORD_ALIGNMENT 0x60 #define REP82_ERROR_MESSAGE_LENGTH 0x80 @@ -67,7 +66,6 @@ struct error_hdr { #define REP82_ERROR_ZERO_BUFFER_LEN 0xB0
#define REP88_ERROR_MODULE_FAILURE 0x10 - #define REP88_ERROR_MESSAGE_TYPE 0x20 #define REP88_ERROR_MESSAGE_MALFORMD 0x22 #define REP88_ERROR_MESSAGE_LENGTH 0x23 @@ -85,78 +83,56 @@ static inline int convert_error(struct zcrypt_queue *zq, int queue = AP_QID_QUEUE(zq->queue->qid);
switch (ehdr->reply_code) { - case REP82_ERROR_OPERAND_INVALID: - case REP82_ERROR_OPERAND_SIZE: - case REP82_ERROR_EVEN_MOD_IN_OPND: - case REP88_ERROR_MESSAGE_MALFORMD: - case REP82_ERROR_INVALID_DOMAIN_PRECHECK: - case REP82_ERROR_INVALID_DOMAIN_PENDING: - case REP82_ERROR_INVALID_SPECIAL_CMD: - case REP82_ERROR_FILTERED_BY_HYPERVISOR: - // REP88_ERROR_INVALID_KEY // '82' CEX2A - // REP88_ERROR_OPERAND // '84' CEX2A - // REP88_ERROR_OPERAND_EVEN_MOD // '85' CEX2A - /* Invalid input data. */ + case REP82_ERROR_INVALID_MSG_LEN: /* 0x23 */ + case REP82_ERROR_RESERVD_FIELD: /* 0x24 */ + case REP82_ERROR_FORMAT_FIELD: /* 0x29 */ + case REP82_ERROR_MALFORMED_MSG: /* 0x40 */ + case REP82_ERROR_INVALID_SPECIAL_CMD: /* 0x41 */ + case REP82_ERROR_MESSAGE_LENGTH: /* 0x80 */ + case REP82_ERROR_OPERAND_INVALID: /* 0x82 */ + case REP82_ERROR_OPERAND_SIZE: /* 0x84 */ + case REP82_ERROR_EVEN_MOD_IN_OPND: /* 0x85 */ + case REP82_ERROR_INVALID_DOMAIN_PENDING: /* 0x8A */ + case REP82_ERROR_FILTERED_BY_HYPERVISOR: /* 0x8B */ + case REP82_ERROR_PACKET_TRUNCATED: /* 0xA0 */ + case REP88_ERROR_MESSAGE_MALFORMD: /* 0x22 */ + case REP88_ERROR_KEY_TYPE: /* 0x34 */ + /* RY indicates malformed request */ ZCRYPT_DBF(DBF_WARN, - "device=%02x.%04x reply=0x%02x => rc=EINVAL\n", + "dev=%02x.%04x RY=0x%02x => rc=EINVAL\n", card, queue, ehdr->reply_code); return -EINVAL; - case REP82_ERROR_MESSAGE_TYPE: - // REP88_ERROR_MESSAGE_TYPE // '20' CEX2A + case REP82_ERROR_MACHINE_FAILURE: /* 0x10 */ + case REP82_ERROR_MESSAGE_TYPE: /* 0x20 */ + case REP82_ERROR_TRANSPORT_FAIL: /* 0x90 */ /* - * To sent a message of the wrong type is a bug in the - * device driver. Send error msg, disable the device - * and then repeat the request. + * Msg to wrong type or card/infrastructure failure. + * Trigger rescan of the ap bus, trigger retry request. */ atomic_set(&zcrypt_rescan_req, 1); - zq->online = 0; - pr_err("Cryptographic device %02x.%04x failed and was set offline\n", - card, queue); - ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x reply=0x%02x => online=0 rc=EAGAIN\n", - card, queue, ehdr->reply_code); - return -EAGAIN; - case REP82_ERROR_TRANSPORT_FAIL: - /* Card or infrastructure failure, disable card */ - atomic_set(&zcrypt_rescan_req, 1); - zq->online = 0; - pr_err("Cryptographic device %02x.%04x failed and was set offline\n", - card, queue); /* For type 86 response show the apfs value (failure reason) */ - if (ehdr->type == TYPE86_RSP_CODE) { + if (ehdr->reply_code == REP82_ERROR_TRANSPORT_FAIL && + ehdr->type == TYPE86_RSP_CODE) { struct { struct type86_hdr hdr; struct type86_fmt2_ext fmt2; } __packed * head = reply->msg; unsigned int apfs = *((u32 *)head->fmt2.apfs);
- ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x reply=0x%02x apfs=0x%x => online=0 rc=EAGAIN\n", - card, queue, apfs, ehdr->reply_code); + ZCRYPT_DBF(DBF_WARN, + "dev=%02x.%04x RY=0x%02x apfs=0x%x => bus rescan, rc=EAGAIN\n", + card, queue, ehdr->reply_code, apfs); } else - ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x reply=0x%02x => online=0 rc=EAGAIN\n", + ZCRYPT_DBF(DBF_WARN, + "dev=%02x.%04x RY=0x%02x => bus rescan, rc=EAGAIN\n", card, queue, ehdr->reply_code); return -EAGAIN; - case REP82_ERROR_MACHINE_FAILURE: - // REP88_ERROR_MODULE_FAILURE // '10' CEX2A - /* If a card fails disable it and repeat the request. */ - atomic_set(&zcrypt_rescan_req, 1); - zq->online = 0; - pr_err("Cryptographic device %02x.%04x failed and was set offline\n", - card, queue); - ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x reply=0x%02x => online=0 rc=EAGAIN\n", - card, queue, ehdr->reply_code); - return -EAGAIN; default: - zq->online = 0; - pr_err("Cryptographic device %02x.%04x failed and was set offline\n", - card, queue); - ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x reply=0x%02x => online=0 rc=EAGAIN\n", + /* Assume request is valid and a retry will be worth it */ + ZCRYPT_DBF(DBF_WARN, + "dev=%02x.%04x RY=0x%02x => rc=EAGAIN\n", card, queue, ehdr->reply_code); - return -EAGAIN; /* repeat the request on a different device. */ + return -EAGAIN; } }
diff --git a/drivers/s390/crypto/zcrypt_msgtype50.c b/drivers/s390/crypto/zcrypt_msgtype50.c index 7aedc338b4459..88916addd513e 100644 --- a/drivers/s390/crypto/zcrypt_msgtype50.c +++ b/drivers/s390/crypto/zcrypt_msgtype50.c @@ -356,15 +356,15 @@ static int convert_type80(struct zcrypt_queue *zq, if (t80h->len < sizeof(*t80h) + outputdatalength) { /* The result is too short, the CEXxA card may not do that.. */ zq->online = 0; - pr_err("Cryptographic device %02x.%04x failed and was set offline\n", + pr_err("Crypto dev=%02x.%04x code=0x%02x => online=0 rc=EAGAIN\n", AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid)); - ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x code=0x%02x => online=0 rc=EAGAIN\n", - AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid), - t80h->code); - return -EAGAIN; /* repeat the request on a different device. */ + AP_QID_QUEUE(zq->queue->qid), + t80h->code); + ZCRYPT_DBF_ERR("dev=%02x.%04x code=0x%02x => online=0 rc=EAGAIN\n", + AP_QID_CARD(zq->queue->qid), + AP_QID_QUEUE(zq->queue->qid), + t80h->code); + return -EAGAIN; } if (zq->zcard->user_space_type == ZCRYPT_CEX2A) BUG_ON(t80h->len > CEX2A_MAX_RESPONSE_SIZE); @@ -376,10 +376,10 @@ static int convert_type80(struct zcrypt_queue *zq, return 0; }
-static int convert_response(struct zcrypt_queue *zq, - struct ap_message *reply, - char __user *outputdata, - unsigned int outputdatalength) +static int convert_response_cex2a(struct zcrypt_queue *zq, + struct ap_message *reply, + char __user *outputdata, + unsigned int outputdatalength) { /* Response type byte is the second byte in the response. */ unsigned char rtype = ((unsigned char *) reply->msg)[1]; @@ -393,15 +393,15 @@ static int convert_response(struct zcrypt_queue *zq, outputdata, outputdatalength); default: /* Unknown response type, this should NEVER EVER happen */ zq->online = 0; - pr_err("Cryptographic device %02x.%04x failed and was set offline\n", + pr_err("Crypto dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n", AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid)); - ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x rtype=0x%02x => online=0 rc=EAGAIN\n", - AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid), - (unsigned int) rtype); - return -EAGAIN; /* repeat the request on a different device. */ + AP_QID_QUEUE(zq->queue->qid), + (int) rtype); + ZCRYPT_DBF_ERR("dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n", + AP_QID_CARD(zq->queue->qid), + AP_QID_QUEUE(zq->queue->qid), + (int) rtype); + return -EAGAIN; } }
@@ -476,8 +476,9 @@ static long zcrypt_cex2a_modexpo(struct zcrypt_queue *zq, if (rc == 0) { rc = ap_msg.rc; if (rc == 0) - rc = convert_response(zq, &ap_msg, mex->outputdata, - mex->outputdatalength); + rc = convert_response_cex2a(zq, &ap_msg, + mex->outputdata, + mex->outputdatalength); } else /* Signal pending. */ ap_cancel_message(zq->queue, &ap_msg); @@ -520,8 +521,9 @@ static long zcrypt_cex2a_modexpo_crt(struct zcrypt_queue *zq, if (rc == 0) { rc = ap_msg.rc; if (rc == 0) - rc = convert_response(zq, &ap_msg, crt->outputdata, - crt->outputdatalength); + rc = convert_response_cex2a(zq, &ap_msg, + crt->outputdata, + crt->outputdatalength); } else /* Signal pending. */ ap_cancel_message(zq->queue, &ap_msg); diff --git a/drivers/s390/crypto/zcrypt_msgtype6.c b/drivers/s390/crypto/zcrypt_msgtype6.c index d77991c74c252..21ea3b73c8674 100644 --- a/drivers/s390/crypto/zcrypt_msgtype6.c +++ b/drivers/s390/crypto/zcrypt_msgtype6.c @@ -650,23 +650,22 @@ static int convert_type86_ica(struct zcrypt_queue *zq, (service_rc == 8 && service_rs == 72) || (service_rc == 8 && service_rs == 770) || (service_rc == 12 && service_rs == 769)) { - ZCRYPT_DBF(DBF_DEBUG, - "device=%02x.%04x rc/rs=%d/%d => rc=EINVAL\n", - AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid), - (int) service_rc, (int) service_rs); + ZCRYPT_DBF_WARN("dev=%02x.%04x rc/rs=%d/%d => rc=EINVAL\n", + AP_QID_CARD(zq->queue->qid), + AP_QID_QUEUE(zq->queue->qid), + (int) service_rc, (int) service_rs); return -EINVAL; } zq->online = 0; - pr_err("Cryptographic device %02x.%04x failed and was set offline\n", + pr_err("Crypto dev=%02x.%04x rc/rs=%d/%d online=0 rc=EAGAIN\n", AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid)); - ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x rc/rs=%d/%d => online=0 rc=EAGAIN\n", - AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid), - (int) service_rc, (int) service_rs); - return -EAGAIN; /* repeat the request on a different device. */ + AP_QID_QUEUE(zq->queue->qid), + (int) service_rc, (int) service_rs); + ZCRYPT_DBF_ERR("dev=%02x.%04x rc/rs=%d/%d => online=0 rc=EAGAIN\n", + AP_QID_CARD(zq->queue->qid), + AP_QID_QUEUE(zq->queue->qid), + (int) service_rc, (int) service_rs); + return -EAGAIN; } data = msg->text; reply_len = msg->length - 2; @@ -800,17 +799,18 @@ static int convert_response_ica(struct zcrypt_queue *zq, return convert_type86_ica(zq, reply, outputdata, outputdatalength); fallthrough; /* wrong cprb version is an unknown response */ - default: /* Unknown response type, this should NEVER EVER happen */ + default: + /* Unknown response type, this should NEVER EVER happen */ zq->online = 0; - pr_err("Cryptographic device %02x.%04x failed and was set offline\n", + pr_err("Crypto dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n", AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid)); - ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x rtype=0x%02x => online=0 rc=EAGAIN\n", - AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid), - (int) msg->hdr.type); - return -EAGAIN; /* repeat the request on a different device. */ + AP_QID_QUEUE(zq->queue->qid), + (int) msg->hdr.type); + ZCRYPT_DBF_ERR("dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n", + AP_QID_CARD(zq->queue->qid), + AP_QID_QUEUE(zq->queue->qid), + (int) msg->hdr.type); + return -EAGAIN; } }
@@ -836,15 +836,15 @@ static int convert_response_xcrb(struct zcrypt_queue *zq, default: /* Unknown response type, this should NEVER EVER happen */ xcRB->status = 0x0008044DL; /* HDD_InvalidParm */ zq->online = 0; - pr_err("Cryptographic device %02x.%04x failed and was set offline\n", + pr_err("Crypto dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n", AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid)); - ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x rtype=0x%02x => online=0 rc=EAGAIN\n", - AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid), - (int) msg->hdr.type); - return -EAGAIN; /* repeat the request on a different device. */ + AP_QID_QUEUE(zq->queue->qid), + (int) msg->hdr.type); + ZCRYPT_DBF_ERR("dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n", + AP_QID_CARD(zq->queue->qid), + AP_QID_QUEUE(zq->queue->qid), + (int) msg->hdr.type); + return -EAGAIN; } }
@@ -865,15 +865,15 @@ static int convert_response_ep11_xcrb(struct zcrypt_queue *zq, fallthrough; /* wrong cprb version is an unknown resp */ default: /* Unknown response type, this should NEVER EVER happen */ zq->online = 0; - pr_err("Cryptographic device %02x.%04x failed and was set offline\n", + pr_err("Crypto dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n", AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid)); - ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x rtype=0x%02x => online=0 rc=EAGAIN\n", - AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid), - (int) msg->hdr.type); - return -EAGAIN; /* repeat the request on a different device. */ + AP_QID_QUEUE(zq->queue->qid), + (int) msg->hdr.type); + ZCRYPT_DBF_ERR("dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n", + AP_QID_CARD(zq->queue->qid), + AP_QID_QUEUE(zq->queue->qid), + (int) msg->hdr.type); + return -EAGAIN; } }
@@ -895,15 +895,15 @@ static int convert_response_rng(struct zcrypt_queue *zq, fallthrough; /* wrong cprb version is an unknown response */ default: /* Unknown response type, this should NEVER EVER happen */ zq->online = 0; - pr_err("Cryptographic device %02x.%04x failed and was set offline\n", + pr_err("Crypto dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n", AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid)); - ZCRYPT_DBF(DBF_ERR, - "device=%02x.%04x rtype=0x%02x => online=0 rc=EAGAIN\n", - AP_QID_CARD(zq->queue->qid), - AP_QID_QUEUE(zq->queue->qid), - (int) msg->hdr.type); - return -EAGAIN; /* repeat the request on a different device. */ + AP_QID_QUEUE(zq->queue->qid), + (int) msg->hdr.type); + ZCRYPT_DBF_ERR("dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n", + AP_QID_CARD(zq->queue->qid), + AP_QID_QUEUE(zq->queue->qid), + (int) msg->hdr.type); + return -EAGAIN; } }
From: Johannes Berg johannes.berg@intel.com
[ Upstream commit f2d05059e15af3f70502074f4e3a504530af504a ]
Lockdep complains at boot:
============================= [ BUG: Invalid wait context ] 5.7.0-05093-g46d91ecd597b #98 Not tainted ----------------------------- swapper/1 is trying to lock: 0000000060931b98 (&desc[i].request_mutex){+.+.}-{3:3}, at: __setup_irq+0x11d/0x623 other info that might help us debug this: context-{4:4} 1 lock held by swapper/1: #0: 000000006074fed8 (sigio_spinlock){+.+.}-{2:2}, at: sigio_lock+0x1a/0x1c stack backtrace: CPU: 0 PID: 1 Comm: swapper Not tainted 5.7.0-05093-g46d91ecd597b #98 Stack: 7fa4fab0 6028dfd1 0000002a 6008bea5 7fa50700 7fa50040 7fa4fac0 6028e016 7fa4fb50 6007f6da 60959c18 00000000 Call Trace: [<60023a0e>] show_stack+0x13b/0x155 [<6028e016>] dump_stack+0x2a/0x2c [<6007f6da>] __lock_acquire+0x515/0x15f2 [<6007eb50>] lock_acquire+0x245/0x273 [<6050d9f1>] __mutex_lock+0xbd/0x325 [<6050dc76>] mutex_lock_nested+0x1d/0x1f [<6008e27e>] __setup_irq+0x11d/0x623 [<6008e8ed>] request_threaded_irq+0x169/0x1a6 [<60021eb0>] um_request_irq+0x1ee/0x24b [<600234ee>] write_sigio_irq+0x3b/0x76 [<600383ca>] sigio_broken+0x146/0x2e4 [<60020bd8>] do_one_initcall+0xde/0x281
This happens because we hold sigio_spinlock and then go on to request an interrupt, which acquires a mutex.
Change the spinlock to a mutex to avoid that.
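Because the lock is only ever taken through the sigio_lock()/sigio_unlock() wrappers, the primitive can be swapped in one place. A rough userspace analogue using a pthread mutex (an assumption for illustration, not the UML code):

#include <pthread.h>
#include <stdio.h>

/* Sleepable lock hidden behind wrappers, mirroring how sigio_lock()/
 * sigio_unlock() let the underlying primitive change in one spot. */
static pthread_mutex_t sigio_mutex = PTHREAD_MUTEX_INITIALIZER;

static void sigio_lock(void)
{
	pthread_mutex_lock(&sigio_mutex);
}

static void sigio_unlock(void)
{
	pthread_mutex_unlock(&sigio_mutex);
}

int main(void)
{
	sigio_lock();
	/* Work here may block or sleep, which a spinning lock held in
	 * atomic context would forbid. */
	printf("critical section\n");
	sigio_unlock();
	return 0;
}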
Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Richard Weinberger richard@nod.at Signed-off-by: Sasha Levin sashal@kernel.org --- arch/um/kernel/sigio.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/um/kernel/sigio.c b/arch/um/kernel/sigio.c index 10c99e058fcae..d1cffc2a7f212 100644 --- a/arch/um/kernel/sigio.c +++ b/arch/um/kernel/sigio.c @@ -35,14 +35,14 @@ int write_sigio_irq(int fd) }
/* These are called from os-Linux/sigio.c to protect its pollfds arrays. */ -static DEFINE_SPINLOCK(sigio_spinlock); +static DEFINE_MUTEX(sigio_mutex);
void sigio_lock(void) { - spin_lock(&sigio_spinlock); + mutex_lock(&sigio_mutex); }
void sigio_unlock(void) { - spin_unlock(&sigio_spinlock); + mutex_unlock(&sigio_mutex); }
From: Jaegeuk Kim jaegeuk@kernel.org
[ Upstream commit 86f33603f8c51537265ff7ac0320638fd2cbdb1b ]
The first problem is that we hit BUG_ON() in f2fs_get_sum_page() when f2fs_get_meta_page_nofail() returns EIO.
The quick fix was to avoid returning any error by retrying in an infinite loop, but syzbot caught a case where a fuzzed image ends up in that loop. It turned out we abused f2fs_get_meta_page_nofail() as in the call stack below.
- f2fs_fill_super - f2fs_build_segment_manager - build_sit_entries - get_current_sit_page
INFO: task syz-executor178:6870 can't die for more than 143 seconds. task:syz-executor178 state:R stack:26960 pid: 6870 ppid: 6869 flags:0x00004006 Call Trace:
Showing all locks held in the system: 1 lock held by khungtaskd/1179: #0: ffffffff8a554da0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6242 1 lock held by systemd-journal/3920: 1 lock held by in:imklog/6769: #0: ffff88809eebc130 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:930 1 lock held by syz-executor178/6870: #0: ffff8880925120e0 (&type->s_umount_key#47/1){+.+.}-{3:3}, at: alloc_super+0x201/0xaf0 fs/super.c:229
Actually, we didn't have to use _nofail in this case, since we could already return an error to mount(2) via the error handler.
As a result, this patch tries to 1) remove _nofail callers as much as possible, and 2) deal with the error case in the last remaining caller, f2fs_get_sum_page().
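A minimal sketch of the difference between a _nofail-style helper and a bounded retry that bails out once a fatal condition is seen, using hypothetical names rather than the f2fs helpers:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

static bool fatal_error;     /* stands in for a checkpoint-error flag */

static int try_read_page(void)
{
	return -EIO;             /* hypothetical always-failing backend */
}

/* Bounded retry: give up and propagate the error once the filesystem is
 * known to be broken, rather than spinning forever as a _nofail helper
 * would. */
static int read_page_retry(int max_tries)
{
	for (int i = 0; i < max_tries; i++) {
		int rc = try_read_page();

		if (rc == 0)
			return 0;
		if (fatal_error)
			return -EIO;   /* let mount(2) fail cleanly */
	}
	return -EIO;
}

int main(void)
{
	fatal_error = true;
	printf("rc=%d\n", read_page_retry(5));
	return 0;
}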
Reported-by: syzbot+ee250ac8137be41d7b13@syzkaller.appspotmail.com Reviewed-by: Chao Yu yuchao0@huawei.com Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/checkpoint.c | 2 +- fs/f2fs/f2fs.h | 2 +- fs/f2fs/node.c | 2 +- fs/f2fs/segment.c | 12 +++++++++--- 4 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c index 0b7aec059f112..4a97fe4ddf789 100644 --- a/fs/f2fs/checkpoint.c +++ b/fs/f2fs/checkpoint.c @@ -107,7 +107,7 @@ struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index) return __get_meta_page(sbi, index, true); }
-struct page *f2fs_get_meta_page_nofail(struct f2fs_sb_info *sbi, pgoff_t index) +struct page *f2fs_get_meta_page_retry(struct f2fs_sb_info *sbi, pgoff_t index) { struct page *page; int count = 0; diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 98c4b166f192b..d44c6c36de678 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -3385,7 +3385,7 @@ enum rw_hint f2fs_io_type_to_rw_hint(struct f2fs_sb_info *sbi, void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io); struct page *f2fs_grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index); struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index); -struct page *f2fs_get_meta_page_nofail(struct f2fs_sb_info *sbi, pgoff_t index); +struct page *f2fs_get_meta_page_retry(struct f2fs_sb_info *sbi, pgoff_t index); struct page *f2fs_get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index); bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr, int type); diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c index cb1b5b61a1dab..cc4700f6240db 100644 --- a/fs/f2fs/node.c +++ b/fs/f2fs/node.c @@ -109,7 +109,7 @@ static void clear_node_page_dirty(struct page *page)
static struct page *get_current_nat_page(struct f2fs_sb_info *sbi, nid_t nid) { - return f2fs_get_meta_page_nofail(sbi, current_nat_addr(sbi, nid)); + return f2fs_get_meta_page(sbi, current_nat_addr(sbi, nid)); }
static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid) diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index e247a5ef3713f..2628406f43f64 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -2344,7 +2344,9 @@ int f2fs_npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra) */ struct page *f2fs_get_sum_page(struct f2fs_sb_info *sbi, unsigned int segno) { - return f2fs_get_meta_page_nofail(sbi, GET_SUM_BLOCK(sbi, segno)); + if (unlikely(f2fs_cp_error(sbi))) + return ERR_PTR(-EIO); + return f2fs_get_meta_page_retry(sbi, GET_SUM_BLOCK(sbi, segno)); }
void f2fs_update_meta_page(struct f2fs_sb_info *sbi, @@ -2616,7 +2618,11 @@ static void change_curseg(struct f2fs_sb_info *sbi, int type) __next_free_blkoff(sbi, curseg, 0);
sum_page = f2fs_get_sum_page(sbi, new_segno); - f2fs_bug_on(sbi, IS_ERR(sum_page)); + if (IS_ERR(sum_page)) { + /* GC won't be able to use stale summary pages by cp_error */ + memset(curseg->sum_blk, 0, SUM_ENTRY_SIZE); + return; + } sum_node = (struct f2fs_summary_block *)page_address(sum_page); memcpy(curseg->sum_blk, sum_node, SUM_ENTRY_SIZE); f2fs_put_page(sum_page, 1); @@ -3781,7 +3787,7 @@ int f2fs_lookup_journal_in_cursum(struct f2fs_journal *journal, int type, static struct page *get_current_sit_page(struct f2fs_sb_info *sbi, unsigned int segno) { - return f2fs_get_meta_page_nofail(sbi, current_sit_addr(sbi, segno)); + return f2fs_get_meta_page(sbi, current_sit_addr(sbi, segno)); }
static struct page *get_next_sit_page(struct f2fs_sb_info *sbi,
From: David Howells dhowells@redhat.com
[ Upstream commit 7530d3eb3dcf1a30750e8e7f1f88b782b96b72b8 ]
Don't give an assertion failure on unpurgeable afs_server records - which kills the thread - but rather emit a trace line when we are purging a record (which only happens during network namespace removal or rmmod) and print a notice of the problem.
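A minimal sketch of the pattern (hypothetical names, not the afs code): log a notice and skip the busy record instead of asserting, so the managing thread keeps running.

#include <stdio.h>

struct server_rec {
	unsigned int id;
	int active;      /* outstanding users */
};

/* During teardown, an unexpectedly busy record is reported and skipped
 * instead of triggering an assertion that would kill the worker. */
static void purge_record(struct server_rec *rec)
{
	if (rec->active != 0) {
		fprintf(stderr, "can't purge record %08x (active=%d)\n",
			rec->id, rec->active);
		return;          /* leave it for a later pass */
	}
	/* ... actually free the record here ... */
}

int main(void)
{
	struct server_rec rec = { .id = 0x1234, .active = 1 };

	purge_record(&rec);      /* prints a notice, does not abort */
	return 0;
}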
Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/afs/server.c | 7 ++++++- include/trace/events/afs.h | 2 ++ 2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/fs/afs/server.c b/fs/afs/server.c index e82e452e26124..684a2b02b9ff7 100644 --- a/fs/afs/server.c +++ b/fs/afs/server.c @@ -550,7 +550,12 @@ void afs_manage_servers(struct work_struct *work)
_debug("manage %pU %u", &server->uuid, active);
- ASSERTIFCMP(purging, active, ==, 0); + if (purging) { + trace_afs_server(server, atomic_read(&server->ref), + active, afs_server_trace_purging); + if (active != 0) + pr_notice("Can't purge s=%08x\n", server->debug_id); + }
if (active == 0) { time64_t expire_at = server->unuse_time; diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h index 13c05e28c0b6c..342b35fc33c59 100644 --- a/include/trace/events/afs.h +++ b/include/trace/events/afs.h @@ -40,6 +40,7 @@ enum afs_server_trace { afs_server_trace_get_new_cbi, afs_server_trace_get_probe, afs_server_trace_give_up_cb, + afs_server_trace_purging, afs_server_trace_put_call, afs_server_trace_put_cbi, afs_server_trace_put_find_rsq, @@ -270,6 +271,7 @@ enum afs_cb_break_reason { EM(afs_server_trace_get_new_cbi, "GET cbi ") \ EM(afs_server_trace_get_probe, "GET probe") \ EM(afs_server_trace_give_up_cb, "giveup-cb") \ + EM(afs_server_trace_purging, "PURGE ") \ EM(afs_server_trace_put_call, "PUT call ") \ EM(afs_server_trace_put_cbi, "PUT cbi ") \ EM(afs_server_trace_put_find_rsq, "PUT f-rsq") \
From: Nicholas Piggin npiggin@gmail.com
[ Upstream commit dc462267d2d7aacffc3c1d99b02d7a7c59db7c66 ]
In ISA v3.1, the copy-paste facility has new memory move functionality which allows the copy buffer to be pasted to domestic memory (RAM) as opposed to foreign memory (an accelerator).
This means the POWER9 trick of avoiding the cp_abort on context switch if the process had not mapped foreign memory does not work on POWER10. Do the cp_abort unconditionally there.
KVM must also cp_abort on guest exit to prevent copy buffer state leaking between contexts.
Signed-off-by: Nicholas Piggin npiggin@gmail.com Acked-by: Paul Mackerras paulus@ozlabs.org Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200825075535.224536-1-npiggin@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/powerpc/kernel/process.c | 16 +++++++++------- arch/powerpc/kvm/book3s_hv.c | 7 +++++++ arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8 ++++++++ 3 files changed, 24 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 73a57043ee662..3f2dc0675ea7a 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -1256,15 +1256,17 @@ struct task_struct *__switch_to(struct task_struct *prev, restore_math(current->thread.regs);
/* - * The copy-paste buffer can only store into foreign real - * addresses, so unprivileged processes can not see the - * data or use it in any way unless they have foreign real - * mappings. If the new process has the foreign real address - * mappings, we must issue a cp_abort to clear any state and - * prevent snooping, corruption or a covert channel. + * On POWER9 the copy-paste buffer can only paste into + * foreign real addresses, so unprivileged processes can not + * see the data or use it in any way unless they have + * foreign real mappings. If the new process has the foreign + * real address mappings, we must issue a cp_abort to clear + * any state and prevent snooping, corruption or a covert + * channel. ISA v3.1 supports paste into local memory. */ if (current->mm && - atomic_read(¤t->mm->context.vas_windows)) + (cpu_has_feature(CPU_FTR_ARCH_31) || + atomic_read(¤t->mm->context.vas_windows))) asm volatile(PPC_CP_ABORT); } #endif /* CONFIG_PPC_BOOK3S_64 */ diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 4ba06a2a306cf..3bd3118c76330 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -3530,6 +3530,13 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit, */ asm volatile("eieio; tlbsync; ptesync");
+ /* + * cp_abort is required if the processor supports local copy-paste + * to clear the copy buffer that was under control of the guest. + */ + if (cpu_has_feature(CPU_FTR_ARCH_31)) + asm volatile(PPC_CP_ABORT); + mtspr(SPRN_LPID, vcpu->kvm->arch.host_lpid); /* restore host LPID */ isync();
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S index 799d6d0f4eade..cd9995ee84419 100644 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S @@ -1830,6 +1830,14 @@ END_FTR_SECTION_IFSET(CPU_FTR_P9_RADIX_PREFETCH_BUG) 2: #endif /* CONFIG_PPC_RADIX_MMU */
+ /* + * cp_abort is required if the processor supports local copy-paste + * to clear the copy buffer that was under control of the guest. + */ +BEGIN_FTR_SECTION + PPC_CP_ABORT +END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31) + /* * POWER7/POWER8 guest -> host partition switch code. * We don't have to lock against tlbies but we do
From: Douglas Anderson dianders@chromium.org
[ Upstream commit 22c9e58299e5f18274788ce54c03d4fb761e3c5d ]
This is commit fdfeff0f9e3d ("arm64: hw_breakpoint: Handle inexact watchpoint addresses") but ported to arm32, which has the same problem.
This problem was found by Android CTS tests, notably the "watchpoint_imprecise" test [1]. I tested locally against a copycat (simplified) version of the test though.
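A rough userspace sketch of the closest-match heuristic the ported code uses (hypothetical types, not the arm32 implementation): compute each watchpoint's distance from the reported address and attribute the hit to the nearest range when there is no exact match.

#include <limits.h>
#include <stdio.h>

struct watchpoint {
	unsigned long base;   /* start of watched range */
	unsigned long len;    /* number of watched bytes */
};

/* 0 means the address lies inside the watched range; otherwise the
 * distance in bytes to the nearest watched byte. */
static unsigned long wp_distance(unsigned long addr, const struct watchpoint *wp)
{
	unsigned long lo = wp->base;
	unsigned long hi = wp->base + wp->len - 1;

	if (addr < lo)
		return lo - addr;
	if (addr > hi)
		return addr - hi;
	return 0;
}

static int closest_watchpoint(unsigned long addr, const struct watchpoint *wps, int n)
{
	int best = 0;
	unsigned long best_dist = ULONG_MAX;

	for (int i = 0; i < n; i++) {
		unsigned long d = wp_distance(addr, &wps[i]);

		if (d < best_dist) {
			best_dist = d;
			best = i;
		}
	}
	return best;
}

int main(void)
{
	struct watchpoint wps[] = { { 0x1000, 8 }, { 0x2000, 4 } };

	/* Reported address is near, but not inside, the first range. */
	printf("closest=%d\n", closest_watchpoint(0x100c, wps, 2));
	return 0;
}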
[1] https://android.googlesource.com/platform/bionic/+/master/tests/sys_ptrace_t...
Link: https://lkml.kernel.org/r/20191019111216.1.I82eae759ca6dc28a245b043f485ca490...
Signed-off-by: Douglas Anderson dianders@chromium.org Reviewed-by: Matthias Kaehlcke mka@chromium.org Acked-by: Will Deacon will@kernel.org Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/kernel/hw_breakpoint.c | 100 +++++++++++++++++++++++--------- 1 file changed, 72 insertions(+), 28 deletions(-)
diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c index 7a4853b1213a8..08660ae9dcbce 100644 --- a/arch/arm/kernel/hw_breakpoint.c +++ b/arch/arm/kernel/hw_breakpoint.c @@ -683,6 +683,40 @@ static void disable_single_step(struct perf_event *bp) arch_install_hw_breakpoint(bp); }
+/* + * Arm32 hardware does not always report a watchpoint hit address that matches + * one of the watchpoints set. It can also report an address "near" the + * watchpoint if a single instruction access both watched and unwatched + * addresses. There is no straight-forward way, short of disassembling the + * offending instruction, to map that address back to the watchpoint. This + * function computes the distance of the memory access from the watchpoint as a + * heuristic for the likelyhood that a given access triggered the watchpoint. + * + * See this same function in the arm64 platform code, which has the same + * problem. + * + * The function returns the distance of the address from the bytes watched by + * the watchpoint. In case of an exact match, it returns 0. + */ +static u32 get_distance_from_watchpoint(unsigned long addr, u32 val, + struct arch_hw_breakpoint_ctrl *ctrl) +{ + u32 wp_low, wp_high; + u32 lens, lene; + + lens = __ffs(ctrl->len); + lene = __fls(ctrl->len); + + wp_low = val + lens; + wp_high = val + lene; + if (addr < wp_low) + return wp_low - addr; + else if (addr > wp_high) + return addr - wp_high; + else + return 0; +} + static int watchpoint_fault_on_uaccess(struct pt_regs *regs, struct arch_hw_breakpoint *info) { @@ -692,23 +726,25 @@ static int watchpoint_fault_on_uaccess(struct pt_regs *regs, static void watchpoint_handler(unsigned long addr, unsigned int fsr, struct pt_regs *regs) { - int i, access; - u32 val, ctrl_reg, alignment_mask; + int i, access, closest_match = 0; + u32 min_dist = -1, dist; + u32 val, ctrl_reg; struct perf_event *wp, **slots; struct arch_hw_breakpoint *info; struct arch_hw_breakpoint_ctrl ctrl;
slots = this_cpu_ptr(wp_on_reg);
+ /* + * Find all watchpoints that match the reported address. If no exact + * match is found. Attribute the hit to the closest watchpoint. + */ + rcu_read_lock(); for (i = 0; i < core_num_wrps; ++i) { - rcu_read_lock(); - wp = slots[i]; - if (wp == NULL) - goto unlock; + continue;
- info = counter_arch_bp(wp); /* * The DFAR is an unknown value on debug architectures prior * to 7.1. Since we only allow a single watchpoint on these @@ -717,33 +753,31 @@ static void watchpoint_handler(unsigned long addr, unsigned int fsr, */ if (debug_arch < ARM_DEBUG_ARCH_V7_1) { BUG_ON(i > 0); + info = counter_arch_bp(wp); info->trigger = wp->attr.bp_addr; } else { - if (info->ctrl.len == ARM_BREAKPOINT_LEN_8) - alignment_mask = 0x7; - else - alignment_mask = 0x3; - - /* Check if the watchpoint value matches. */ - val = read_wb_reg(ARM_BASE_WVR + i); - if (val != (addr & ~alignment_mask)) - goto unlock; - - /* Possible match, check the byte address select. */ - ctrl_reg = read_wb_reg(ARM_BASE_WCR + i); - decode_ctrl_reg(ctrl_reg, &ctrl); - if (!((1 << (addr & alignment_mask)) & ctrl.len)) - goto unlock; - /* Check that the access type matches. */ if (debug_exception_updates_fsr()) { access = (fsr & ARM_FSR_ACCESS_MASK) ? HW_BREAKPOINT_W : HW_BREAKPOINT_R; if (!(access & hw_breakpoint_type(wp))) - goto unlock; + continue; }
+ val = read_wb_reg(ARM_BASE_WVR + i); + ctrl_reg = read_wb_reg(ARM_BASE_WCR + i); + decode_ctrl_reg(ctrl_reg, &ctrl); + dist = get_distance_from_watchpoint(addr, val, &ctrl); + if (dist < min_dist) { + min_dist = dist; + closest_match = i; + } + /* Is this an exact match? */ + if (dist != 0) + continue; + /* We have a winner. */ + info = counter_arch_bp(wp); info->trigger = addr; }
@@ -765,13 +799,23 @@ static void watchpoint_handler(unsigned long addr, unsigned int fsr, * we can single-step over the watchpoint trigger. */ if (!is_default_overflow_handler(wp)) - goto unlock; - + continue; step: enable_single_step(wp, instruction_pointer(regs)); -unlock: - rcu_read_unlock(); } + + if (min_dist > 0 && min_dist != -1) { + /* No exact match found. */ + wp = slots[closest_match]; + info = counter_arch_bp(wp); + info->trigger = addr; + pr_debug("watchpoint fired: address = 0x%x\n", info->trigger); + perf_bp_event(wp, regs); + if (is_default_overflow_handler(wp)) + enable_single_step(wp, instruction_pointer(regs)); + } + + rcu_read_unlock(); }
static void watchpoint_single_step_handler(unsigned long pc)
From: Dave Wysochanski dwysocha@redhat.com
[ Upstream commit d8a6ad913c286d4763ae20b14c02fe6f39d7cd9f ]
The following oops is seen during xfstest/565 when the 'test' (source of the copy) is NFS4.0 and 'scratch' (destination) is NFS4.2 [ 59.692458] run fstests generic/565 at 2020-08-01 05:50:35 [ 60.613588] BUG: kernel NULL pointer dereference, address: 0000000000000008 [ 60.624970] #PF: supervisor read access in kernel mode [ 60.627671] #PF: error_code(0x0000) - not-present page [ 60.630347] PGD 0 P4D 0 [ 60.631853] Oops: 0000 [#1] SMP PTI [ 60.634086] CPU: 6 PID: 2828 Comm: xfs_io Kdump: loaded Not tainted 5.8.0-rc3 #1 [ 60.637676] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 [ 60.639901] RIP: 0010:nfs4_check_serverowner_major_id+0x5/0x30 [nfsv4] [ 60.642719] Code: 89 ff e8 3e b3 b8 e1 e9 71 fe ff ff 41 bc da d8 ff ff e9 c3 fe ff ff e8 e9 9d 08 e2 66 0f 1f 84 00 00 00 00 00 66 66 66 66 90 <8b> 57 08 31 c0 3b 56 08 75 12 48 83 c6 0c 48 83 c7 0c e8 c4 97 bb [ 60.652629] RSP: 0018:ffffc265417f7e10 EFLAGS: 00010287 [ 60.655379] RAX: ffffa0664b066400 RBX: 0000000000000000 RCX: 0000000000000001 [ 60.658754] RDX: ffffa066725fb000 RSI: ffffa066725fd000 RDI: 0000000000000000 [ 60.662292] RBP: 0000000000020000 R08: 0000000000020000 R09: 0000000000000000 [ 60.666189] R10: 0000000000000003 R11: 0000000000000000 R12: ffffa06648258d00 [ 60.669914] R13: 0000000000000000 R14: 0000000000000000 R15: ffffa06648258100 [ 60.673645] FS: 00007faa9fb35800(0000) GS:ffffa06677d80000(0000) knlGS:0000000000000000 [ 60.677698] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 60.680773] CR2: 0000000000000008 CR3: 0000000203f14000 CR4: 00000000000406e0 [ 60.684476] Call Trace: [ 60.685809] nfs4_copy_file_range+0xfc/0x230 [nfsv4] [ 60.688704] vfs_copy_file_range+0x2ee/0x310 [ 60.691104] __x64_sys_copy_file_range+0xd6/0x210 [ 60.693527] do_syscall_64+0x4d/0x90 [ 60.695512] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [ 60.698006] RIP: 0033:0x7faa9febc1bd
Signed-off-by: Dave Wysochanski dwysocha@redhat.com Signed-off-by: Anna Schumaker Anna.Schumaker@Netapp.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfs/nfs4file.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c index 984938024011b..9d354de613dae 100644 --- a/fs/nfs/nfs4file.c +++ b/fs/nfs/nfs4file.c @@ -146,7 +146,8 @@ static ssize_t __nfs4_copy_file_range(struct file *file_in, loff_t pos_in, /* Only offload copy if superblock is the same */ if (file_in->f_op != &nfs4_file_operations) return -EXDEV; - if (!nfs_server_capable(file_inode(file_out), NFS_CAP_COPY)) + if (!nfs_server_capable(file_inode(file_out), NFS_CAP_COPY) || + !nfs_server_capable(file_inode(file_in), NFS_CAP_COPY)) return -EOPNOTSUPP; if (file_inode(file_in) == file_inode(file_out)) return -EOPNOTSUPP;
From: Chandan Babu R chandanrlinux@gmail.com
[ Upstream commit 72cc95132a93293dcd0b6f68353f4741591c9aeb ]
The following sequence of commands,
mkfs.xfs -f -m reflink=0 -r rtdev=/dev/loop1,size=10M /dev/loop0 mount -o rtdev=/dev/loop1 /dev/loop0 /mnt xfs_growfs /mnt
... causes the following call trace to be printed on the console,
XFS: Assertion failed: (bip->bli_flags & XFS_BLI_STALE) || (xfs_blft_from_flags(&bip->__bli_format) > XFS_BLFT_UNKNOWN_BUF && xfs_blft_from_flags(&bip->__bli_format) < XFS_BLFT_MAX_BUF), file: fs/xfs/xfs_buf_item.c, line: 331 Call Trace: xfs_buf_item_format+0x632/0x680 ? kmem_alloc_large+0x29/0x90 ? kmem_alloc+0x70/0x120 ? xfs_log_commit_cil+0x132/0x940 xfs_log_commit_cil+0x26f/0x940 ? xfs_buf_item_init+0x1ad/0x240 ? xfs_growfs_rt_alloc+0x1fc/0x280 __xfs_trans_commit+0xac/0x370 xfs_growfs_rt_alloc+0x1fc/0x280 xfs_growfs_rt+0x1a0/0x5e0 xfs_file_ioctl+0x3fd/0xc70 ? selinux_file_ioctl+0x174/0x220 ksys_ioctl+0x87/0xc0 __x64_sys_ioctl+0x16/0x20 do_syscall_64+0x3e/0x70 entry_SYSCALL_64_after_hwframe+0x44/0xa9
This occurs because the buffer being formatted has the value of XFS_BLFT_UNKNOWN_BUF assigned to the 'type' subfield of bip->bli_formats->blf_flags.
This commit fixes the issue by assigning one of XFS_BLFT_RTSUMMARY_BUF and XFS_BLFT_RTBITMAP_BUF to the 'type' subfield of bip->bli_formats->blf_flags before committing the corresponding transaction.
Reviewed-by: Darrick J. Wong darrick.wong@oracle.com Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Chandan Babu R chandanrlinux@gmail.com Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/xfs/xfs_rtalloc.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c index 86994d7f7cba3..912f96a248f25 100644 --- a/fs/xfs/xfs_rtalloc.c +++ b/fs/xfs/xfs_rtalloc.c @@ -778,8 +778,14 @@ xfs_growfs_rt_alloc( struct xfs_bmbt_irec map; /* block map output */ int nmap; /* number of block maps */ int resblks; /* space reservation */ + enum xfs_blft buf_type; struct xfs_trans *tp;
+ if (ip == mp->m_rsumip) + buf_type = XFS_BLFT_RTSUMMARY_BUF; + else + buf_type = XFS_BLFT_RTBITMAP_BUF; + /* * Allocate space to the file, as necessary. */ @@ -841,6 +847,8 @@ xfs_growfs_rt_alloc( mp->m_bsize, 0, &bp); if (error) goto out_trans_cancel; + + xfs_trans_buf_set_type(tp, bp, buf_type); memset(bp->b_addr, 0, mp->m_sb.sb_blocksize); xfs_trans_log_buf(tp, bp, 0, mp->m_sb.sb_blocksize - 1); /*
From: Chandan Babu R chandanrlinux@gmail.com
[ Upstream commit c54e14d155f5fdbac73a8cd4bd2678cb252149dc ]
In xfs_growfs_rt(), we enlarge bitmap and summary files by allocating new blocks for both files. For each of the new blocks allocated, we allocate an xfs_buf, zero the payload, log the contents and commit the transaction. Hence these buffers will eventually find themselves appended to list at xfs_ail->ail_buf_list.
Later, xfs_growfs_rt() loops across all of the new blocks belonging to the bitmap inode to set the bitmap values to 1. In doing so, it allocates a new transaction and invokes the following sequence of functions, - xfs_rtfree_range() - xfs_rtmodify_range() - xfs_rtbuf_get() We pass '&xfs_rtbuf_ops' as the ops pointer to xfs_trans_read_buf(). - xfs_trans_read_buf() We find the xfs_buf of interest in per-ag hash table, invoke xfs_buf_reverify() which ends up assigning '&xfs_rtbuf_ops' to xfs_buf->b_ops.
On the other hand, if xfs_growfs_rt_alloc() had allocated a few blocks for the bitmap inode and returned with an error, all the xfs_bufs corresponding to the new bitmap blocks that have been allocated would continue to be on xfs_ail->ail_buf_list list without ever having a non-NULL value assigned to their b_ops members. An AIL flush operation would then trigger the following warning message to be printed on the console,
XFS (loop0): _xfs_buf_ioapply: no buf ops on daddr 0x58 len 8 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00000030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00000040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00000050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00000060: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00000070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ CPU: 3 PID: 449 Comm: xfsaild/loop0 Not tainted 5.8.0-rc4-chandan-00038-g4d8c2b9de9ab-dirty #37 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014 Call Trace: dump_stack+0x57/0x70 _xfs_buf_ioapply+0x37c/0x3b0 ? xfs_rw_bdev+0x1e0/0x1e0 ? xfs_buf_delwri_submit_buffers+0xd4/0x210 __xfs_buf_submit+0x6d/0x1f0 xfs_buf_delwri_submit_buffers+0xd4/0x210 xfsaild+0x2c8/0x9e0 ? __switch_to_asm+0x42/0x70 ? xfs_trans_ail_cursor_first+0x80/0x80 kthread+0xfe/0x140 ? kthread_park+0x90/0x90 ret_from_fork+0x22/0x30
This message indicates that the xfs_buf had its b_ops member set to NULL.
This commit fixes the issue by assigning "&xfs_rtbuf_ops" to b_ops member of each of the xfs_bufs logged by xfs_growfs_rt_alloc().
Signed-off-by: Chandan Babu R chandanrlinux@gmail.com Reviewed-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/xfs/xfs_rtalloc.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c index 912f96a248f25..0753b1dfd0750 100644 --- a/fs/xfs/xfs_rtalloc.c +++ b/fs/xfs/xfs_rtalloc.c @@ -849,6 +849,7 @@ xfs_growfs_rt_alloc( goto out_trans_cancel;
xfs_trans_buf_set_type(tp, bp, buf_type); + bp->b_ops = &xfs_rtbuf_ops; memset(bp->b_addr, 0, mp->m_sb.sb_blocksize); xfs_trans_log_buf(tp, bp, 0, mp->m_sb.sb_blocksize - 1); /*
From: Darrick J. Wong darrick.wong@oracle.com
[ Upstream commit 93293bcbde93567efaf4e6bcd58cad270e1fcbf5 ]
During a code inspection, I found a serious bug in the log intent item recovery code when an intent item cannot complete all the work and decides to requeue itself to get that done. When this happens, the item recovery creates a new incore deferred op representing the remaining work and attaches it to the transaction that it allocated. At the end of _item_recover, it moves the entire chain of deferred ops to the dummy parent_tp that xlog_recover_process_intents passed to it, but fails to log a new intent item for the remaining work before committing the transaction for the single unit of work.
xlog_finish_defer_ops logs those new intent items once recovery has finished dealing with the intent items that it recovered, but this isn't sufficient. If the log is forced to disk after a recovered log item decides to requeue itself and the system goes down before we call xlog_finish_defer_ops, the second log recovery will never see the new intent item and therefore has no idea that there was more work to do. It will finish recovery leaving the filesystem in a corrupted state.
The same logic applies to /any/ deferred ops added during intent item recovery, not just the one handling the remaining work.
Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Dave Chinner dchinner@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/xfs/libxfs/xfs_defer.c | 26 ++++++++++++++++++++++++-- fs/xfs/libxfs/xfs_defer.h | 6 ++++++ fs/xfs/xfs_bmap_item.c | 2 +- fs/xfs/xfs_refcount_item.c | 2 +- 4 files changed, 32 insertions(+), 4 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c index d8f586256add7..29e9762f3b77c 100644 --- a/fs/xfs/libxfs/xfs_defer.c +++ b/fs/xfs/libxfs/xfs_defer.c @@ -186,8 +186,9 @@ xfs_defer_create_intent( { const struct xfs_defer_op_type *ops = defer_op_types[dfp->dfp_type];
- dfp->dfp_intent = ops->create_intent(tp, &dfp->dfp_work, - dfp->dfp_count, sort); + if (!dfp->dfp_intent) + dfp->dfp_intent = ops->create_intent(tp, &dfp->dfp_work, + dfp->dfp_count, sort); }
/* @@ -390,6 +391,7 @@ xfs_defer_finish_one( list_add(li, &dfp->dfp_work); dfp->dfp_count++; dfp->dfp_done = NULL; + dfp->dfp_intent = NULL; xfs_defer_create_intent(tp, dfp, false); }
@@ -552,3 +554,23 @@ xfs_defer_move(
xfs_defer_reset(stp); } + +/* + * Prepare a chain of fresh deferred ops work items to be completed later. Log + * recovery requires the ability to put off until later the actual finishing + * work so that it can process unfinished items recovered from the log in + * correct order. + * + * Create and log intent items for all the work that we're capturing so that we + * can be assured that the items will get replayed if the system goes down + * before log recovery gets a chance to finish the work it put off. Then we + * move the chain from stp to dtp. + */ +void +xfs_defer_capture( + struct xfs_trans *dtp, + struct xfs_trans *stp) +{ + xfs_defer_create_intents(stp); + xfs_defer_move(dtp, stp); +} diff --git a/fs/xfs/libxfs/xfs_defer.h b/fs/xfs/libxfs/xfs_defer.h index 6b2ca580f2b06..3164199162b61 100644 --- a/fs/xfs/libxfs/xfs_defer.h +++ b/fs/xfs/libxfs/xfs_defer.h @@ -63,4 +63,10 @@ extern const struct xfs_defer_op_type xfs_rmap_update_defer_type; extern const struct xfs_defer_op_type xfs_extent_free_defer_type; extern const struct xfs_defer_op_type xfs_agfl_free_defer_type;
+/* + * Functions to capture a chain of deferred operations and continue them later. + * This doesn't normally happen except log recovery. + */ +void xfs_defer_capture(struct xfs_trans *dtp, struct xfs_trans *stp); + #endif /* __XFS_DEFER_H__ */ diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c index ec3691372e7c0..815a0563288f4 100644 --- a/fs/xfs/xfs_bmap_item.c +++ b/fs/xfs/xfs_bmap_item.c @@ -534,7 +534,7 @@ xfs_bui_item_recover( xfs_bmap_unmap_extent(tp, ip, &irec); }
- xfs_defer_move(parent_tp, tp); + xfs_defer_capture(parent_tp, tp); error = xfs_trans_commit(tp); xfs_iunlock(ip, XFS_ILOCK_EXCL); xfs_irele(ip); diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c index ca93b64883774..492d80a0b4060 100644 --- a/fs/xfs/xfs_refcount_item.c +++ b/fs/xfs/xfs_refcount_item.c @@ -555,7 +555,7 @@ xfs_cui_item_recover( }
xfs_refcount_finish_one_cleanup(tp, rcur, error); - xfs_defer_move(parent_tp, tp); + xfs_defer_capture(parent_tp, tp); error = xfs_trans_commit(tp); return error;
From: Krzysztof Kozlowski krzk@kernel.org
[ Upstream commit 7bf738ba110722b63e9dc8af760d3fb2aef25593 ]
Commit 6f24ff97e323 ("power: supply: bq27xxx_battery: Add the BQ27Z561 Battery monitor") and commit d74534c27775 ("power: bq27xxx_battery: Add support for additional bq27xxx family devices") added support for new device types by copying most of the code and adding necessary quirks.
However, they did not copy the code in bq27xxx_battery_status() responsible for returning POWER_SUPPLY_STATUS_NOT_CHARGING.
Unify bq27xxx_battery_status() so that, for all chip types, it returns the "not charging" status when a charger is supplied.
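Purely as an illustration of the resulting control flow, here is a standalone userspace sketch with made-up names (not the driver code): the per-family flag decoding only picks between full, charging and discharging, and the shared "supplied but not charging" check is applied once at the end for every chip type.

/* Hypothetical sketch of the unified status logic; not bq27xxx code. */
#include <stdio.h>

enum status { DISCHARGING, CHARGING, FULL, NOT_CHARGING };

/* Stand-in for the per-family flag decoding. */
static enum status decode_family_flags(int full, int charging)
{
        if (full)
                return FULL;
        if (charging)
                return CHARGING;
        return DISCHARGING;
}

/* Stand-in for the status callback; 'supplied' mimics power_supply_am_i_supplied(). */
static enum status battery_status(int full, int charging, int supplied)
{
        enum status status = decode_family_flags(full, charging);

        /* Shared for all chip families: a supplied pack that is not
         * charging is reported as "not charging", not "discharging". */
        if (status == DISCHARGING && supplied > 0)
                status = NOT_CHARGING;

        return status;
}

int main(void)
{
        printf("%d\n", battery_status(0, 0, 1));  /* 3 == NOT_CHARGING */
        printf("%d\n", battery_status(0, 0, 0));  /* 0 == DISCHARGING */
        return 0;
}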
Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Signed-off-by: Sebastian Reichel sebastian.reichel@collabora.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/power/supply/bq27xxx_battery.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c index a123f6e21f08a..08b9d025a3e81 100644 --- a/drivers/power/supply/bq27xxx_battery.c +++ b/drivers/power/supply/bq27xxx_battery.c @@ -1772,8 +1772,6 @@ static int bq27xxx_battery_status(struct bq27xxx_device_info *di, status = POWER_SUPPLY_STATUS_FULL; else if (di->cache.flags & BQ27000_FLAG_CHGS) status = POWER_SUPPLY_STATUS_CHARGING; - else if (power_supply_am_i_supplied(di->bat) > 0) - status = POWER_SUPPLY_STATUS_NOT_CHARGING; else status = POWER_SUPPLY_STATUS_DISCHARGING; } else if (di->opts & BQ27Z561_O_BITS) { @@ -1792,6 +1790,10 @@ static int bq27xxx_battery_status(struct bq27xxx_device_info *di, status = POWER_SUPPLY_STATUS_CHARGING; }
+ if ((status == POWER_SUPPLY_STATUS_DISCHARGING) && + (power_supply_am_i_supplied(di->bat) > 0)) + status = POWER_SUPPLY_STATUS_NOT_CHARGING; + val->intval = status;
return 0;
From: Darrick J. Wong darrick.wong@oracle.com
[ Upstream commit 27dada070d59c28a441f1907d2cec891b17dcb26 ]
The defer ops code has been finishing items in the wrong order -- if a top level defer op creates items A and B, and finishing item A creates more defer ops A1 and A2, we'll put the new items on the end of the chain and process them in the order A B A1 A2. This is kind of weird, since it's convenient for programmers to be able to think of A and B as an ordered sequence where all the sub-tasks for A must finish before we move on to B, e.g. A A1 A2 B.
Right now, our log intent items are not so complex that this matters, but this will become important for the atomic extent swapping patchset. In order to maintain correct reference counting of extents, we have to unmap and remap extents in that order, and we want to complete that work before moving on to the next range that the user wants to swap. This patch fixes defer ops to satisfy that requirement.
The primary symptom of the incorrect order was noticed in an early performance analysis of the atomic extent swap code. An astonishingly large number of deferred work items accumulated when userspace requested an atomic update of two very fragmented files. The cause of this was traced to the same ordering bug in the inner loop of xfs_defer_finish_noroll.
If the ->finish_item method of a deferred operation queues new deferred operations, those new deferred ops are appended to the tail of the pending work list. To illustrate, say that a caller creates a transaction t0 with four deferred operations D0-D3. The first thing defer ops does is roll the transaction to t1, leaving us with:
t1: D0(t0), D1(t0), D2(t0), D3(t0)
Let's say that finishing each of D0-D3 will create two new deferred ops. After finish D0 and roll, we'll have the following chain:
t2: D1(t0), D2(t0), D3(t0), d4(t1), d5(t1)
d4 and d5 were logged to t1. Notice that while we're about to start work on D1, we haven't actually completed all the work implied by D0 being finished. So far we've been careful (or lucky) to structure the dfops callers such that D1 doesn't depend on d4 or d5 being finished, but this is a potential logic bomb.
There's a second problem lurking. Let's see what happens as we finish D1-D3:
t3: D2(t0), D3(t0), d4(t1), d5(t1), d6(t2), d7(t2) t4: D3(t0), d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3) t5: d4(t1), d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4)
Let's say that d4-d11 are simple work items that don't queue any other operations, which means that we can complete each d4 and roll to t6:
t6: d5(t1), d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4) t7: d6(t2), d7(t2), d8(t3), d9(t3), d10(t4), d11(t4) ... t11: d10(t4), d11(t4) t12: d11(t4) <done>
When we try to roll to transaction #12, we're holding defer op d11, which we logged way back in t4. This means that the tail of the log is pinned at t4. If the log is very small or there are a lot of other threads updating metadata, this means that we might have wrapped the log and cannot roll to t12 because there isn't enough space left before we'd run into t4.
Let's shift back to the original failure. I mentioned before that I discovered this flaw while developing the atomic file update code. In that scenario, we have a defer op (D0) that finds a range of file blocks to remap, creates a handful of new defer ops to do that, and then asks to be continued with however much work remains.
So, D0 is the original swapext deferred op. The first thing defer ops does is rolls to t1:
t1: D0(t0)
We try to finish D0, logging d1 and d2 in the process, but can't get all the work done. We log a done item and a new intent item for the work that D0 still has to do, and roll to t2:
t2: D0'(t1), d1(t1), d2(t1)
We roll and try to finish D0', but still can't get all the work done, so we log a done item and a new intent item for it, requeue D0 a second time, and roll to t3:
t3: D0''(t2), d1(t1), d2(t1), d3(t2), d4(t2)
If it takes 48 more rolls to complete D0, then we'll finally dispense with D0 in t50:
t50: D<fifty primes>(t49), d1(t1), ..., d102(t50)
We then try to roll again to get a chain like this:
t51: d1(t1), d2(t1), ..., d101(t50), d102(t50) ... t152: d102(t50) <done>
Notice that in rolling to transaction #51, we're holding on to a log intent item for d1 that was logged in transaction #1. This means that the tail of the log is pinned at t1. If the log is very small or there are a lot of other threads updating metadata, this means that we might have wrapped the log and cannot roll to t51 because there isn't enough space left before we'd run into t1. This is of course problem #2 again.
But notice the third problem with this scenario: we have 102 defer ops tied to this transaction! Each of these items are backed by pinned kernel memory, which means that we risk OOM if the chains get too long.
Yikes. Problem #1 is a subtle logic bomb that could hit someone in the future; problem #2 applies (rarely) to the current upstream, and problem #3 applies to work under development.
This is not how incremental deferred operations were supposed to work. The dfops design of logging in the same transaction an intent-done item and a new intent item for the work remaining was to make it so that we only have to juggle enough deferred work items to finish that one small piece of work. Deferred log item recovery will find that first unfinished work item and restart it, no matter how many other intent items might follow it in the log. Therefore, it's ok to put the new intents at the start of the dfops chain.
For the first example, the chains look like this:
t2: d4(t1), d5(t1), D1(t0), D2(t0), D3(t0) t3: d5(t1), D1(t0), D2(t0), D3(t0) ... t9: d9(t7), D3(t0) t10: D3(t0) t11: d10(t10), d11(t10) t12: d11(t10)
For the second example, the chains look like this:
t1: D0(t0) t2: d1(t1), d2(t1), D0'(t1) t3: d2(t1), D0'(t1) t4: D0'(t1) t5: d1(t4), d2(t4), D0''(t4) ... t148: D0<50 primes>(t147) t149: d101(t148), d102(t148) t150: d102(t148) <done>
This actually sucks more for pinning the log tail (we try to roll to t10 while holding an intent item that was logged in t1) but we've solved problem #1. We've also reduced the maximum chain length from:
sum(all the new items) + nr_original_items
to:
max(new items that each original item creates) + nr_original_items
This solves problem #3 by sharply reducing the number of defer ops that can be attached to a transaction at any given time. The change makes the problem of log tail pinning worse, but it is an improvement we need in order to solve problem #2. Actually solving #2, however, is left to the next patch.
Note that a subsequent analysis of some hard-to-trigger reflink and COW livelocks on extremely fragmented filesystems (or systems running a lot of IO threads) showed the same symptoms -- uncomfortably large numbers of incore deferred work items and occasional stalls in the transaction grant code while waiting for log reservations. I think this patch and the next one will also solve these problems.
As originally written, the code used list_splice_tail_init instead of list_splice_init, so change that, and leave a short comment explaining our actions.
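The behavioural difference can be seen with a toy model that uses plain arrays instead of the kernel's list_splice helpers, so none of the names below are xfs code; it only shows how head insertion keeps a finished item's children ahead of the work that was queued by the caller:

/* Toy model of the dfops processing order; purely illustrative, not xfs code. */
#include <stdio.h>
#include <string.h>

#define MAXQ 16

struct queue { const char *item[MAXQ]; int len; };

static const char *pop_front(struct queue *q)
{
        const char *it = q->item[0];
        memmove(&q->item[0], &q->item[1], --q->len * sizeof(q->item[0]));
        return it;
}

/* Splice new sub-items either at the head (new behaviour) or the tail
 * (old behaviour) of the pending queue. */
static void splice(struct queue *q, const char **sub, int n, int at_head)
{
        if (at_head) {
                memmove(&q->item[n], &q->item[0], q->len * sizeof(q->item[0]));
                memcpy(&q->item[0], sub, n * sizeof(sub[0]));
        } else {
                memcpy(&q->item[q->len], sub, n * sizeof(sub[0]));
        }
        q->len += n;
}

static void run(int at_head)
{
        struct queue q = { { "D0", "D1" }, 2 };
        const char *d0_children[] = { "d0.1", "d0.2" };

        while (q.len) {
                const char *it = pop_front(&q);
                printf("%s ", it);
                if (!strcmp(it, "D0"))  /* finishing D0 queues two sub-items */
                        splice(&q, d0_children, 2, at_head);
        }
        printf("\n");
}

int main(void)
{
        run(0);  /* old, tail splice: D0 D1 d0.1 d0.2 */
        run(1);  /* new, head splice: D0 d0.1 d0.2 D1 */
        return 0;
}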
Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Reviewed-by: Dave Chinner dchinner@redhat.com Reviewed-by: Brian Foster bfoster@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/xfs/libxfs/xfs_defer.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c index 29e9762f3b77c..4959d8a32b606 100644 --- a/fs/xfs/libxfs/xfs_defer.c +++ b/fs/xfs/libxfs/xfs_defer.c @@ -430,8 +430,17 @@ xfs_defer_finish_noroll(
/* Until we run out of pending work to finish... */ while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) { + /* + * Deferred items that are created in the process of finishing + * other deferred work items should be queued at the head of + * the pending list, which puts them ahead of the deferred work + * that was created by the caller. This keeps the number of + * pending work items to a minimum, which decreases the amount + * of time that any one intent item can stick around in memory, + * pinning the log tail. + */ xfs_defer_create_intents(*tp); - list_splice_tail_init(&(*tp)->t_dfops, &dop_pending); + list_splice_init(&(*tp)->t_dfops, &dop_pending);
error = xfs_defer_trans_roll(tp); if (error)
From: Darrick J. Wong darrick.wong@oracle.com
[ Upstream commit f4c32e87de7d66074d5612567c5eac7325024428 ]
The realtime bitmap and summary files are regular files that are hidden away from the directory tree. Since they're regular files, inode inactivation will try to purge what it thinks are speculative preallocations beyond the incore size of the file. Unfortunately, xfs_growfs_rt forgets to update the incore size when it resizes the inodes, with the result that inactivating the rt inodes at unmount time will cause their contents to be truncated.
Fix this by updating the incore size when we change the ondisk size as part of updating the superblock. Note that we don't do this when we're allocating blocks to the rt inodes because we actually want those blocks to get purged if the growfs fails.
This fixes corruption complaints from the online rtsummary checker when running xfs/233. Since that test requires rmap, one can also trigger this by growing an rt volume, cycling the mount, and creating rt files.
Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Reviewed-by: Chandan Babu R chandanrlinux@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/xfs/xfs_rtalloc.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c index 0753b1dfd0750..be01bfbc3ad93 100644 --- a/fs/xfs/xfs_rtalloc.c +++ b/fs/xfs/xfs_rtalloc.c @@ -1027,10 +1027,13 @@ xfs_growfs_rt( xfs_ilock(mp->m_rbmip, XFS_ILOCK_EXCL); xfs_trans_ijoin(tp, mp->m_rbmip, XFS_ILOCK_EXCL); /* - * Update the bitmap inode's size. + * Update the bitmap inode's size ondisk and incore. We need + * to update the incore size so that inode inactivation won't + * punch what it thinks are "posteof" blocks. */ mp->m_rbmip->i_d.di_size = nsbp->sb_rbmblocks * nsbp->sb_blocksize; + i_size_write(VFS_I(mp->m_rbmip), mp->m_rbmip->i_d.di_size); xfs_trans_log_inode(tp, mp->m_rbmip, XFS_ILOG_CORE); /* * Get the summary inode into the transaction. @@ -1038,9 +1041,12 @@ xfs_growfs_rt( xfs_ilock(mp->m_rsumip, XFS_ILOCK_EXCL); xfs_trans_ijoin(tp, mp->m_rsumip, XFS_ILOCK_EXCL); /* - * Update the summary inode's size. + * Update the summary inode's size. We need to update the + * incore size so that inode inactivation won't punch what it + * thinks are "posteof" blocks. */ mp->m_rsumip->i_d.di_size = nmp->m_rsumsize; + i_size_write(VFS_I(mp->m_rsumip), mp->m_rsumip->i_d.di_size); xfs_trans_log_inode(tp, mp->m_rsumip, XFS_ILOG_CORE); /* * Copy summary data from old to new sizes.
From: Pavel Begunkov asml.silence@gmail.com
[ Upstream commit 368c5481ae7c6a9719c40984faea35480d9f4872 ]
__io_kill_linked_timeout() sets REQ_F_COMP_LOCKED for a linked timeout even if it can't cancel it, e.g. when it's already running. It not only races with io_link_timeout_fn() for the ->flags field, but also leaves the flag set, so io_link_timeout_fn() may find it and wrongly decide that it holds the lock. Hopefully, the second problem is only theoretical so far.
Signed-off-by: Pavel Begunkov asml.silence@gmail.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- fs/io_uring.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c index 59ab8c5c2aaaa..50a7a99dad4ca 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -1650,6 +1650,7 @@ static bool io_link_cancel_timeout(struct io_kiocb *req)
ret = hrtimer_try_to_cancel(&req->io->timeout.timer); if (ret != -1) { + req->flags |= REQ_F_COMP_LOCKED; io_cqring_fill_event(req, -ECANCELED); io_commit_cqring(ctx); req->flags &= ~REQ_F_LINK_HEAD; @@ -1672,7 +1673,6 @@ static bool __io_kill_linked_timeout(struct io_kiocb *req) return false;
list_del_init(&link->link_list); - link->flags |= REQ_F_COMP_LOCKED; wake_ev = io_link_cancel_timeout(link); req->flags &= ~REQ_F_LINK_TIMEOUT; return wake_ev;
From: Venkateswara Naralasetty vnaralas@codeaurora.org
[ Upstream commit 67b927f9820847d30e97510b2f00cd142b9559b6 ]
When tx status reporting is enabled, the retry count is updated from the tx completion status. This does not work as expected due to a firmware limitation: the firmware cannot provide per-MSDU rate statistics from the tx completion status. As a result, the tx retry count is always 0 in the station dump.
Fix this issue by updating the retry packet count from the per-peer statistics. This patch does not break SDIO devices, since the retry count is already updated from peer statistics for SDIO devices.
Tested-on: QCA9984 PCI 10.4-3.6-00104 Tested-on: QCA9882 PCI 10.2.4-1.0-00047
Signed-off-by: Venkateswara Naralasetty vnaralas@codeaurora.org Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/1591856446-26977-1-git-send-email-vnaralas@codeaur... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/ath/ath10k/htt_rx.c | 8 +++++--- drivers/net/wireless/ath/ath10k/mac.c | 5 +++-- 2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c index 215ade6faf328..69ad4ca1a87c1 100644 --- a/drivers/net/wireless/ath/ath10k/htt_rx.c +++ b/drivers/net/wireless/ath/ath10k/htt_rx.c @@ -3583,12 +3583,14 @@ ath10k_update_per_peer_tx_stats(struct ath10k *ar, }
if (ar->htt.disable_tx_comp) { - arsta->tx_retries += peer_stats->retry_pkts; arsta->tx_failed += peer_stats->failed_pkts; - ath10k_dbg(ar, ATH10K_DBG_HTT, "htt tx retries %d tx failed %d\n", - arsta->tx_retries, arsta->tx_failed); + ath10k_dbg(ar, ATH10K_DBG_HTT, "tx failed %d\n", + arsta->tx_failed); }
+ arsta->tx_retries += peer_stats->retry_pkts; + ath10k_dbg(ar, ATH10K_DBG_HTT, "htt tx retries %d", arsta->tx_retries); + if (ath10k_debug_is_extd_tx_stats_enabled(ar)) ath10k_accumulate_per_peer_tx_stats(ar, arsta, peer_stats, rate_idx); diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c index 2177e9d92bdff..03c7edf05a1d1 100644 --- a/drivers/net/wireless/ath/ath10k/mac.c +++ b/drivers/net/wireless/ath/ath10k/mac.c @@ -8542,12 +8542,13 @@ static void ath10k_sta_statistics(struct ieee80211_hw *hw, sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
if (ar->htt.disable_tx_comp) { - sinfo->tx_retries = arsta->tx_retries; - sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_RETRIES); sinfo->tx_failed = arsta->tx_failed; sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_FAILED); }
+ sinfo->tx_retries = arsta->tx_retries; + sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_RETRIES); + ath10k_mac_sta_get_peer_stats_info(ar, sta, sinfo); }
From: Arvind Sankar nivedita@alum.mit.edu
[ Upstream commit 451286940d95778e83fa7f97006316d995b4c4a8 ]
On 64-bit, the kernel must be placed below MAXMEM (64TiB with 4-level paging or 4PiB with 5-level paging). This is currently not enforced by KASLR, which thus implicitly relies on physical memory being limited to less than 64TiB.
On 32-bit, the limit is KERNEL_IMAGE_SIZE (512MiB). This is enforced by special checks in __process_mem_region().
Initialize mem_limit to the maximum (depending on architecture), instead of ULLONG_MAX, and make sure the command-line arguments can only decrease it. This makes the enforcement explicit on 64-bit, and eliminates the 32-bit specific checks to keep the kernel below 512M.
Check upfront to make sure the minimum address is below the limit before doing any work.
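The "only decrease" rule is simple enough to show in a standalone sketch; the constant below is made up and this is not the boot code, it just mirrors the clamping logic described above.

/* Sketch of the "command line can only lower the limit" rule; not kaslr.c. */
#include <stdio.h>

#define ARCH_MAX_LIMIT (64ULL << 40)    /* pretend architectural maximum */

static unsigned long long mem_limit = ARCH_MAX_LIMIT;

/* Apply a mem=/memmap= style request: it may lower the limit, never raise it. */
static void apply_mem_option(unsigned long long size)
{
        if (size > 0 && size < mem_limit)
                mem_limit = size;
}

int main(void)
{
        apply_mem_option(128ULL << 40);         /* above the cap: ignored */
        printf("limit=%llu\n", mem_limit);      /* still the arch maximum */
        apply_mem_option(4ULL << 30);           /* 4 GiB: accepted */
        printf("limit=%llu\n", mem_limit);
        return 0;
}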
Signed-off-by: Arvind Sankar nivedita@alum.mit.edu Signed-off-by: Ingo Molnar mingo@kernel.org Acked-by: Kees Cook keescook@chromium.org Link: https://lore.kernel.org/r/20200727230801.3468620-5-nivedita@alum.mit.edu Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/boot/compressed/kaslr.c | 41 +++++++++++++++++--------------- 1 file changed, 22 insertions(+), 19 deletions(-)
diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c index dde7cb3724df3..9bd966ef7d19e 100644 --- a/arch/x86/boot/compressed/kaslr.c +++ b/arch/x86/boot/compressed/kaslr.c @@ -87,8 +87,11 @@ static unsigned long get_boot_seed(void) static bool memmap_too_large;
-/* Store memory limit specified by "mem=nn[KMG]" or "memmap=nn[KMG]" */ -static unsigned long long mem_limit = ULLONG_MAX; +/* + * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit. + * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options. + */ +static unsigned long long mem_limit;
/* Number of immovable memory regions */ static int num_immovable_mem; @@ -214,7 +217,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
if (start == 0) { /* Store the specified memory limit if size > 0 */ - if (size > 0) + if (size > 0 && size < mem_limit) mem_limit = size;
continue; @@ -302,7 +305,8 @@ static void handle_mem_options(void) if (mem_size == 0) goto out;
- mem_limit = mem_size; + if (mem_size < mem_limit) + mem_limit = mem_size; } else if (!strcmp(param, "efi_fake_mem")) { mem_avoid_memmap(PARSE_EFI, val); } @@ -314,7 +318,9 @@ out: }
/* - * In theory, KASLR can put the kernel anywhere in the range of [16M, 64T). + * In theory, KASLR can put the kernel anywhere in the range of [16M, MAXMEM) + * on 64-bit, and [16M, KERNEL_IMAGE_SIZE) on 32-bit. + * * The mem_avoid array is used to store the ranges that need to be avoided * when KASLR searches for an appropriate random address. We must avoid any * regions that are unsafe to overlap with during decompression, and other @@ -614,10 +620,6 @@ static void __process_mem_region(struct mem_vector *entry, unsigned long start_orig, end; struct mem_vector cur_entry;
- /* On 32-bit, ignore entries entirely above our maximum. */ - if (IS_ENABLED(CONFIG_X86_32) && entry->start >= KERNEL_IMAGE_SIZE) - return; - /* Ignore entries entirely below our minimum. */ if (entry->start + entry->size < minimum) return; @@ -650,11 +652,6 @@ static void __process_mem_region(struct mem_vector *entry, /* Reduce size by any delta from the original address. */ region.size -= region.start - start_orig;
- /* On 32-bit, reduce region size to fit within max size. */ - if (IS_ENABLED(CONFIG_X86_32) && - region.start + region.size > KERNEL_IMAGE_SIZE) - region.size = KERNEL_IMAGE_SIZE - region.start; - /* Return if region can't contain decompressed kernel */ if (region.size < image_size) return; @@ -839,15 +836,16 @@ static void process_e820_entries(unsigned long minimum, static unsigned long find_random_phys_addr(unsigned long minimum, unsigned long image_size) { + /* Bail out early if it's impossible to succeed. */ + if (minimum + image_size > mem_limit) + return 0; + /* Check if we had too many memmaps. */ if (memmap_too_large) { debug_putstr("Aborted memory entries scan (more than 4 memmap= args)!\n"); return 0; }
- /* Make sure minimum is aligned. */ - minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN); - if (process_efi_entries(minimum, image_size)) return slots_fetch_random();
@@ -860,8 +858,6 @@ static unsigned long find_random_virt_addr(unsigned long minimum, { unsigned long slots, random_addr;
- /* Make sure minimum is aligned. */ - minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN); /* Align image_size for easy slot calculations. */ image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
@@ -908,6 +904,11 @@ void choose_random_location(unsigned long input, /* Prepare to add new identity pagetables on demand. */ initialize_identity_maps();
+ if (IS_ENABLED(CONFIG_X86_32)) + mem_limit = KERNEL_IMAGE_SIZE; + else + mem_limit = MAXMEM; + /* Record the various known unsafe memory ranges. */ mem_avoid_init(input, input_size, *output);
@@ -917,6 +918,8 @@ void choose_random_location(unsigned long input, * location: */ min_addr = min(*output, 512UL << 20); + /* Make sure minimum is aligned. */ + min_addr = ALIGN(min_addr, CONFIG_PHYSICAL_ALIGN);
/* Walk available memory entries to find a random address. */ random_addr = find_random_phys_addr(min_addr, output_size);
From: Thomas Zimmermann tzimmermann@suse.de
[ Upstream commit d50ace1e72f05708cc5dbc89b9bbb9873f150092 ]
Putting the DRM driver at the top of the file and the PCI code at the bottom makes ast_drv.c more readable. While at it, the patch prefixes file-scope variables with ast_.
Signed-off-by: Thomas Zimmermann tzimmermann@suse.de Acked-by: Daniel Vetter daniel.vetter@ffwll.ch Acked-by: Sam Ravnborg sam@ravnborg.org Link: https://patchwork.freedesktop.org/patch/msgid/20200730135206.30239-3-tzimmer... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/ast/ast_drv.c | 59 ++++++++++++++++++----------------- 1 file changed, 31 insertions(+), 28 deletions(-)
diff --git a/drivers/gpu/drm/ast/ast_drv.c b/drivers/gpu/drm/ast/ast_drv.c index 0b58f7aee6b01..9d04f2b5225cf 100644 --- a/drivers/gpu/drm/ast/ast_drv.c +++ b/drivers/gpu/drm/ast/ast_drv.c @@ -43,9 +43,33 @@ int ast_modeset = -1; MODULE_PARM_DESC(modeset, "Disable/Enable modesetting"); module_param_named(modeset, ast_modeset, int, 0400);
-#define PCI_VENDOR_ASPEED 0x1a03 +/* + * DRM driver + */ + +DEFINE_DRM_GEM_FOPS(ast_fops); + +static struct drm_driver ast_driver = { + .driver_features = DRIVER_ATOMIC | + DRIVER_GEM | + DRIVER_MODESET, + + .fops = &ast_fops, + .name = DRIVER_NAME, + .desc = DRIVER_DESC, + .date = DRIVER_DATE, + .major = DRIVER_MAJOR, + .minor = DRIVER_MINOR, + .patchlevel = DRIVER_PATCHLEVEL,
-static struct drm_driver driver; + DRM_GEM_VRAM_DRIVER +}; + +/* + * PCI driver + */ + +#define PCI_VENDOR_ASPEED 0x1a03
#define AST_VGA_DEVICE(id, info) { \ .class = PCI_BASE_CLASS_DISPLAY << 16, \ @@ -56,13 +80,13 @@ static struct drm_driver driver; .subdevice = PCI_ANY_ID, \ .driver_data = (unsigned long) info }
-static const struct pci_device_id pciidlist[] = { +static const struct pci_device_id ast_pciidlist[] = { AST_VGA_DEVICE(PCI_CHIP_AST2000, NULL), AST_VGA_DEVICE(PCI_CHIP_AST2100, NULL), {0, 0, 0}, };
-MODULE_DEVICE_TABLE(pci, pciidlist); +MODULE_DEVICE_TABLE(pci, ast_pciidlist);
static void ast_kick_out_firmware_fb(struct pci_dev *pdev) { @@ -94,7 +118,7 @@ static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (ret) return ret;
- dev = drm_dev_alloc(&driver, &pdev->dev); + dev = drm_dev_alloc(&ast_driver, &pdev->dev); if (IS_ERR(dev)) return PTR_ERR(dev);
@@ -118,11 +142,9 @@ err_ast_driver_unload: err_drm_dev_put: drm_dev_put(dev); return ret; - }
-static void -ast_pci_remove(struct pci_dev *pdev) +static void ast_pci_remove(struct pci_dev *pdev) { struct drm_device *dev = pci_get_drvdata(pdev);
@@ -217,30 +239,12 @@ static const struct dev_pm_ops ast_pm_ops = {
static struct pci_driver ast_pci_driver = { .name = DRIVER_NAME, - .id_table = pciidlist, + .id_table = ast_pciidlist, .probe = ast_pci_probe, .remove = ast_pci_remove, .driver.pm = &ast_pm_ops, };
-DEFINE_DRM_GEM_FOPS(ast_fops); - -static struct drm_driver driver = { - .driver_features = DRIVER_ATOMIC | - DRIVER_GEM | - DRIVER_MODESET, - - .fops = &ast_fops, - .name = DRIVER_NAME, - .desc = DRIVER_DESC, - .date = DRIVER_DATE, - .major = DRIVER_MAJOR, - .minor = DRIVER_MINOR, - .patchlevel = DRIVER_PATCHLEVEL, - - DRM_GEM_VRAM_DRIVER -}; - static int __init ast_init(void) { if (vgacon_text_force() && ast_modeset == -1) @@ -261,4 +265,3 @@ module_exit(ast_exit); MODULE_AUTHOR(DRIVER_AUTHOR); MODULE_DESCRIPTION(DRIVER_DESC); MODULE_LICENSE("GPL and additional rights"); -
From: Guchun Chen guchun.chen@amd.com
[ Upstream commit bf0b91b78f002faa1be1902a75eeb0797f9fbcf3 ]
RAS flags need to be cleaned as well when the user requests a clean eeprom.
v2: RAS flags shall be restored after eeprom reset succeeds.
Signed-off-by: Guchun Chen guchun.chen@amd.com Reviewed-by: Hawking Zhang Hawking.Zhang@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c index 1bedb416eebd0..b4fb5a473df5a 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c @@ -367,12 +367,19 @@ static ssize_t amdgpu_ras_debugfs_ctrl_write(struct file *f, const char __user * static ssize_t amdgpu_ras_debugfs_eeprom_write(struct file *f, const char __user *buf, size_t size, loff_t *pos) { - struct amdgpu_device *adev = (struct amdgpu_device *)file_inode(f)->i_private; + struct amdgpu_device *adev = + (struct amdgpu_device *)file_inode(f)->i_private; int ret;
- ret = amdgpu_ras_eeprom_reset_table(&adev->psp.ras.ras->eeprom_control); + ret = amdgpu_ras_eeprom_reset_table( + &(amdgpu_ras_get_context(adev)->eeprom_control));
- return ret == 1 ? size : -EIO; + if (ret == 1) { + amdgpu_ras_get_context(adev)->flags = RAS_DEFAULT_FLAGS; + return size; + } else { + return -EIO; + } }
static const struct file_operations amdgpu_ras_debugfs_ctrl_ops = {
From: Tom Rix trix@redhat.com
[ Upstream commit 8e1ba47c60bcd325fdd097cd76054639155e5d2e ]
clang static analysis reports this representative error:
pvr2fb.c:1049:2: warning: 1st function call argument is an uninitialized value [core.CallAndMessage] if (*cable_arg) ^~~~~~~~~~~~~~~
The problem is that cable_arg depends on the input loop to set cable_arg[0]. If it does not, some random value from the stack is used.
A similar problem exists for output_arg.
So initialize cable_arg and output_arg.
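For reference, the bug class looks like this in a standalone sketch (hypothetical names, not the pvr2fb code); the test on *cable is only well-defined because the buffer starts from a known-empty string:

/* Minimal reproduction of the bug class; all names are made up. */
#include <stdio.h>
#include <string.h>

static void parse(const char *options)
{
        char cable[16] = "";    /* the fix: start from a known-empty string */

        /* The parse loop may or may not fill 'cable' ... */
        if (options && !strncmp(options, "cable:", 6))
                snprintf(cable, sizeof(cable), "%s", options + 6);

        /* ... so this test is only well-defined thanks to the initializer. */
        if (*cable)
                printf("cable argument: %s\n", cable);
}

int main(void)
{
        parse("nopan");         /* without the initializer, *cable is garbage */
        parse("cable:vga");
        return 0;
}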
Signed-off-by: Tom Rix trix@redhat.com Acked-by: Arnd Bergmann arnd@arndb.de Signed-off-by: Sam Ravnborg sam@ravnborg.org Link: https://patchwork.freedesktop.org/patch/msgid/20200720191845.20115-1-trix@re... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/video/fbdev/pvr2fb.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/video/fbdev/pvr2fb.c b/drivers/video/fbdev/pvr2fb.c index 2d9f69b93392a..f4add36cb5f4d 100644 --- a/drivers/video/fbdev/pvr2fb.c +++ b/drivers/video/fbdev/pvr2fb.c @@ -1028,6 +1028,8 @@ static int __init pvr2fb_setup(char *options) if (!options || !*options) return 0;
+ cable_arg[0] = output_arg[0] = 0; + while ((this_opt = strsep(&options, ","))) { if (!*this_opt) continue;
From: Wen Gong wgong@codeaurora.org
[ Upstream commit 2fd3c8f34d08af0a6236085f9961866ad92ef9ec ]
When simulating random transfer failures for sdio writes and reads, "payload length exceeds max htc length" errors occurred, sometimes followed by recovery later.
Test steps: 1. Add config and update kernel: CONFIG_FAIL_MMC_REQUEST=y CONFIG_FAULT_INJECTION=y CONFIG_FAULT_INJECTION_DEBUG_FS=y
2. Run simulate fail: cd /sys/kernel/debug/mmc1/fail_mmc_request echo 10 > probability echo 10 > times # repeat until hitting issues
3. The "payload length exceeds max htc length" error occurred. [ 199.935506] ath10k_sdio mmc1:0001:1: payload length 57005 exceeds max htc length: 4088 .... [ 264.990191] ath10k_sdio mmc1:0001:1: payload length 57005 exceeds max htc length: 4088
4. After some time, such as 60 seconds, recovery starts, triggered by a wmi command timeout for the periodic scan. [ 269.229232] ieee80211 phy0: Hardware restart was requested [ 269.734693] ath10k_sdio mmc1:0001:1: device successfully recovered
The simulated sdio failure is not a real sdio transfer failure; it only sets an error status in mmc_should_fail_request after the transfer has ended. The transfer actually succeeds, but sdio_io_rw_ext_helper returns the error status and stops transferring the remaining data. For example, if the real RX length is 286 bytes, it is split into 2 blocks in sdio_io_rw_ext_helper: one of 256 bytes and one of 30 bytes. If the first 256 bytes get an error status from mmc_should_fail_request, the remaining 30 bytes are not read in this RX operation. When the next RX arrives, the leftover 30 bytes are treated as the header of that read, and their top 4 bytes are interpreted as the lookaheads. Since those 4 bytes are not actually lookaheads, the length derived from them is incorrect and sometimes exceeds the max htc length of 4088. When that happens, the buffer chains of firmware and ath10k no longer match, and recovery needs to start ASAP. Currently recovery is only started by a wmi command timeout, which happens much later (for example, 60+ seconds after the issue via the periodic scan, or even longer if there is no periodic scan).
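The sequence of events can be modelled with a few lines of standalone C. The framing below is hypothetical (a 2-byte length header, 256-byte blocks) and it is not the ath10k code; it only demonstrates how an aborted chunked read leaves stale payload bytes that are later parsed as the next frame's header.

/* Toy model of the failure mode; not ath10k code. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define BLOCK           256
#define MAX_PAYLOAD     4088

static uint8_t fifo[8192];
static size_t fifo_len;

static void device_queues_frame(uint16_t payload_len)
{
        /* 2-byte little-endian length header followed by payload bytes. */
        fifo[fifo_len++] = payload_len & 0xff;
        fifo[fifo_len++] = payload_len >> 8;
        memset(&fifo[fifo_len], 0xab, payload_len);
        fifo_len += payload_len;
}

/* Read one frame in BLOCK-sized chunks; 'fail_after_first_block' models the
 * injected error that aborts the transfer after the first chunk. */
static int host_reads_frame(int fail_after_first_block)
{
        uint16_t len = fifo[0] | (fifo[1] << 8);
        size_t want = 2 + len, got = 0;

        if (len > MAX_PAYLOAD) {
                printf("payload length %u exceeds max: start recovery\n", len);
                return -1;
        }
        while (got < want) {
                size_t chunk = (want - got > BLOCK) ? BLOCK : want - got;
                got += chunk;
                if (fail_after_first_block)
                        break;          /* rest of the frame stays queued */
        }
        memmove(fifo, fifo + got, fifo_len - got);      /* drop what was consumed */
        fifo_len -= got;
        printf("consumed %zu of %zu bytes\n", got, want);
        return 0;
}

int main(void)
{
        device_queues_frame(284);       /* 286 bytes: split as 256 + 30 */
        host_reads_frame(1);            /* error after the first 256 bytes */

        device_queues_frame(64);        /* the next frame arrives ... */
        host_reads_frame(0);            /* ... but the stale 30 bytes are parsed first */
        return 0;
}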
It is therefore reasonable to start recovery when "payload length exceeds max htc length" happens.
This patch only affects sdio chips.
Tested with QCA6174 SDIO with firmware WLAN.RMH.4.4.1-00029.
Signed-off-by: Wen Gong wgong@codeaurora.org Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/20200108031957.22308-3-wgong@codeaurora.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/ath/ath10k/sdio.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c index 63f882c690bff..0841e69b10b1a 100644 --- a/drivers/net/wireless/ath/ath10k/sdio.c +++ b/drivers/net/wireless/ath/ath10k/sdio.c @@ -557,6 +557,10 @@ static int ath10k_sdio_mbox_rx_alloc(struct ath10k *ar, le16_to_cpu(htc_hdr->len), ATH10K_HTC_MBOX_MAX_PAYLOAD_LENGTH); ret = -ENOMEM; + + queue_work(ar->workqueue, &ar->restart_work); + ath10k_warn(ar, "exceeds length, start recovery\n"); + goto err; }
From: Sathishkumar Muruganandam murugana@codeaurora.org
[ Upstream commit 99f41b8e43b8b4b31262adb8ac3e69088fff1289 ]
When STBC is enabled, the NSTS_SU value needs to be accounted for in the VHT NSS calculation for the SU case.
Without this fix, the 1SS + STBC case was wrongly reported as 2SS in the radiotap header on monitor-mode captures.
Tested-on: QCA9984 10.4-3.10-00047
Signed-off-by: Sathishkumar Muruganandam murugana@codeaurora.org Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/1597392971-3897-1-git-send-email-murugana@codeauro... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/ath/ath10k/htt_rx.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c index 69ad4ca1a87c1..a00498338b1cc 100644 --- a/drivers/net/wireless/ath/ath10k/htt_rx.c +++ b/drivers/net/wireless/ath/ath10k/htt_rx.c @@ -949,6 +949,7 @@ static void ath10k_htt_rx_h_rates(struct ath10k *ar, u8 preamble = 0; u8 group_id; u32 info1, info2, info3; + u32 stbc, nsts_su;
info1 = __le32_to_cpu(rxd->ppdu_start.info1); info2 = __le32_to_cpu(rxd->ppdu_start.info2); @@ -993,11 +994,16 @@ static void ath10k_htt_rx_h_rates(struct ath10k *ar, */ bw = info2 & 3; sgi = info3 & 1; + stbc = (info2 >> 3) & 1; group_id = (info2 >> 4) & 0x3F;
if (GROUP_ID_IS_SU_MIMO(group_id)) { mcs = (info3 >> 4) & 0x0F; - nss = ((info2 >> 10) & 0x07) + 1; + nsts_su = ((info2 >> 10) & 0x07); + if (stbc) + nss = (nsts_su >> 2) + 1; + else + nss = (nsts_su + 1); } else { /* Hardware doesn't decode VHT-SIG-B into Rx descriptor * so it's impossible to decode MCS. Also since
From: Luben Tuikov luben.tuikov@amd.com
[ Upstream commit e2d732fdb7a9e421720a644580cd6a9400f97f60 ]
Remove DRM_SCHED_PRIORITY_LOW, as it was used in only one place.
Rename DRM_SCHED_PRIORITY_MAX to DRM_SCHED_PRIORITY_COUNT and separate it by a blank line, as it represents the (total) count of said priorities and is used as such in loops throughout the code (with 0-based indexing, the value after the last priority is the count).
Remove redundant word HIGH in priority names, and rename *KERNEL* to *HIGH*, as it really means that, high.
v2: Add back KERNEL and remove SW and HW, in lieu of a single HIGH between NORMAL and KERNEL.
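The COUNT enumerator relies on the common C idiom where the last entry of a 0-based enum doubles as the number of entries, so it can size arrays and bound loops; a generic sketch with made-up names (not the drm_sched_priority enum itself):

/* Generic illustration of the "last enumerator is a count" idiom. */
#include <stdio.h>

enum prio {
        PRIO_MIN,       /* 0-based so it can index arrays directly */
        PRIO_NORMAL,
        PRIO_HIGH,
        PRIO_KERNEL,

        PRIO_COUNT,     /* number of priorities, not a priority itself */
};

static int num_jobs[PRIO_COUNT];        /* sized by the count */

int main(void)
{
        /* Iterate highest priority first, as a scheduler's select loop would. */
        for (int i = PRIO_COUNT - 1; i >= PRIO_MIN; i--)
                printf("rq[%d]: %d jobs\n", i, num_jobs[i]);
        return 0;
}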
Signed-off-by: Luben Tuikov luben.tuikov@amd.com Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 4 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 6 +++--- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 2 +- drivers/gpu/drm/scheduler/sched_main.c | 4 ++-- include/drm/gpu_scheduler.h | 12 +++++++----- 8 files changed, 18 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c index 8842c55d4490b..fc695126b6e75 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c @@ -46,7 +46,7 @@ const unsigned int amdgpu_ctx_num_entities[AMDGPU_HW_IP_NUM] = { static int amdgpu_ctx_priority_permit(struct drm_file *filp, enum drm_sched_priority priority) { - if (priority < 0 || priority >= DRM_SCHED_PRIORITY_MAX) + if (priority < 0 || priority >= DRM_SCHED_PRIORITY_COUNT) return -EINVAL;
/* NORMAL and below are accessible by everyone */ @@ -65,7 +65,7 @@ static int amdgpu_ctx_priority_permit(struct drm_file *filp, static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum drm_sched_priority prio) { switch (prio) { - case DRM_SCHED_PRIORITY_HIGH_HW: + case DRM_SCHED_PRIORITY_HIGH: case DRM_SCHED_PRIORITY_KERNEL: return AMDGPU_GFX_PIPE_PRIO_HIGH; default: diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index 937029ad5271a..dcfe8a3b03ffb 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -251,7 +251,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched) int i;
/* Signal all jobs not yet scheduled */ - for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { + for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { struct drm_sched_rq *rq = &sched->sched_rq[i];
if (!rq) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c index 13ea8ebc421c6..6d4fc79bf84aa 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c @@ -267,7 +267,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring, &ring->sched; }
- for (i = 0; i < DRM_SCHED_PRIORITY_MAX; ++i) + for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; ++i) atomic_set(&ring->num_jobs[i], 0);
return 0; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h index da871d84b7424..7112137689db0 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h @@ -243,7 +243,7 @@ struct amdgpu_ring { bool has_compute_vm_bug; bool no_scheduler;
- atomic_t num_jobs[DRM_SCHED_PRIORITY_MAX]; + atomic_t num_jobs[DRM_SCHED_PRIORITY_COUNT]; struct mutex priority_mutex; /* protected by priority_mutex */ int priority; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c index c799691dfa848..17661ede94885 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c @@ -36,14 +36,14 @@ enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority) { switch (amdgpu_priority) { case AMDGPU_CTX_PRIORITY_VERY_HIGH: - return DRM_SCHED_PRIORITY_HIGH_HW; + return DRM_SCHED_PRIORITY_HIGH; case AMDGPU_CTX_PRIORITY_HIGH: - return DRM_SCHED_PRIORITY_HIGH_SW; + return DRM_SCHED_PRIORITY_HIGH; case AMDGPU_CTX_PRIORITY_NORMAL: return DRM_SCHED_PRIORITY_NORMAL; case AMDGPU_CTX_PRIORITY_LOW: case AMDGPU_CTX_PRIORITY_VERY_LOW: - return DRM_SCHED_PRIORITY_LOW; + return DRM_SCHED_PRIORITY_MIN; case AMDGPU_CTX_PRIORITY_UNSET: return DRM_SCHED_PRIORITY_UNSET; default: diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index 978bae7313980..b7fd0cdffce0e 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -2101,7 +2101,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable) ring = adev->mman.buffer_funcs_ring; sched = &ring->sched; r = drm_sched_entity_init(&adev->mman.entity, - DRM_SCHED_PRIORITY_KERNEL, &sched, + DRM_SCHED_PRIORITY_KERNEL, &sched, 1, NULL); if (r) { DRM_ERROR("Failed setting up TTM BO move entity (%d)\n", diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 96f763d888af5..9a0d77a680180 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -625,7 +625,7 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched) return NULL;
/* Kernel run queue has higher priority than normal run queue*/ - for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { + for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { entity = drm_sched_rq_select_entity(&sched->sched_rq[i]); if (entity) break; @@ -852,7 +852,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, sched->name = name; sched->timeout = timeout; sched->hang_limit = hang_limit; - for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_MAX; i++) + for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++) drm_sched_rq_init(sched, &sched->sched_rq[i]);
init_waitqueue_head(&sched->wake_up_worker); diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index b9780ae9dd26c..72dc3a95fbaad 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -33,14 +33,16 @@ struct drm_gpu_scheduler; struct drm_sched_rq;
+/* These are often used as an (initial) index + * to an array, and as such should start at 0. + */ enum drm_sched_priority { DRM_SCHED_PRIORITY_MIN, - DRM_SCHED_PRIORITY_LOW = DRM_SCHED_PRIORITY_MIN, DRM_SCHED_PRIORITY_NORMAL, - DRM_SCHED_PRIORITY_HIGH_SW, - DRM_SCHED_PRIORITY_HIGH_HW, + DRM_SCHED_PRIORITY_HIGH, DRM_SCHED_PRIORITY_KERNEL, - DRM_SCHED_PRIORITY_MAX, + + DRM_SCHED_PRIORITY_COUNT, DRM_SCHED_PRIORITY_INVALID = -1, DRM_SCHED_PRIORITY_UNSET = -2 }; @@ -274,7 +276,7 @@ struct drm_gpu_scheduler { uint32_t hw_submission_limit; long timeout; const char *name; - struct drm_sched_rq sched_rq[DRM_SCHED_PRIORITY_MAX]; + struct drm_sched_rq sched_rq[DRM_SCHED_PRIORITY_COUNT]; wait_queue_head_t wake_up_worker; wait_queue_head_t job_scheduled; atomic_t hw_rq_count;
From: Nadezda Lutovinova lutovinova@ispras.ru
[ Upstream commit f688a345f0d7a6df4dd2aeca8e4f3c05e123a0ee ]
If ge_b850v3_lvds_init() does not allocate memory for ge_b850v3_lvds_ptr, a null pointer is subsequently dereferenced.
The patch adds checking of the return value of ge_b850v3_lvds_init().
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Nadezda Lutovinova lutovinova@ispras.ru Signed-off-by: Sam Ravnborg sam@ravnborg.org Link: https://patchwork.freedesktop.org/patch/msgid/20200819143756.30626-1-lutovin... Signed-off-by: Sasha Levin sashal@kernel.org --- .../gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c index 6200f12a37e69..ab8174831cf40 100644 --- a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c +++ b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c @@ -302,8 +302,12 @@ static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c, const struct i2c_device_id *id) { struct device *dev = &stdp4028_i2c->dev; + int ret; + + ret = ge_b850v3_lvds_init(dev);
- ge_b850v3_lvds_init(dev); + if (ret) + return ret;
ge_b850v3_lvds_ptr->stdp4028_i2c = stdp4028_i2c; i2c_set_clientdata(stdp4028_i2c, ge_b850v3_lvds_ptr); @@ -361,8 +365,12 @@ static int stdp2690_ge_b850v3_fw_probe(struct i2c_client *stdp2690_i2c, const struct i2c_device_id *id) { struct device *dev = &stdp2690_i2c->dev; + int ret; + + ret = ge_b850v3_lvds_init(dev);
- ge_b850v3_lvds_init(dev); + if (ret) + return ret;
ge_b850v3_lvds_ptr->stdp2690_i2c = stdp2690_i2c; i2c_set_clientdata(stdp2690_i2c, ge_b850v3_lvds_ptr);
From: Rander Wang rander.wang@intel.com
[ Upstream commit 6c63c954e1c52f1262f986f36d95f557c6f8fa94 ]
When hda_codec_probe() doesn't initialize the audio component, we disable the codec and keep going. However, the resources are not released. The child_count of the SOF device is increased in snd_hdac_ext_bus_device_init but is not decreased in the error case, so SOF can't get suspended.
snd_hdac_ext_bus_device_exit will be invoked in the HDA framework if it gets an error. Now copy this behavior to release resources and decrease the SOF device child_count to release the SOF device.
Signed-off-by: Rander Wang rander.wang@intel.com Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Reviewed-by: Bard Liao yung-chuan.liao@linux.intel.com Reviewed-by: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com Signed-off-by: Ranjani Sridharan ranjani.sridharan@linux.intel.com Link: https://lore.kernel.org/r/20200825235040.1586478-3-ranjani.sridharan@linux.i... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/sof/intel/hda-codec.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c index 2c5c451fa19d7..c475955c6eeba 100644 --- a/sound/soc/sof/intel/hda-codec.c +++ b/sound/soc/sof/intel/hda-codec.c @@ -151,7 +151,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, if (!hdev->bus->audio_component) { dev_dbg(sdev->dev, "iDisp hw present but no driver\n"); - return -ENOENT; + goto error; } hda_priv->need_display_power = true; } @@ -174,7 +174,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, * other return codes without modification */ if (ret == 0) - ret = -ENOENT; + goto error; }
return ret;
Greg Kroah-Hartman wrote on Tue 03-11-2020 at 21:32 [+0100]:
From: Rander Wang rander.wang@intel.com
[ Upstream commit 6c63c954e1c52f1262f986f36d95f557c6f8fa94 ]
When hda_codec_probe() doesn't initialize audio component, we disable the codec and keep going. However,the resources are not released. The child_count of SOF device is increased in snd_hdac_ext_bus_device_init but is not decrease in error case, so SOF can't get suspended.
snd_hdac_ext_bus_device_exit will be invoked in HDA framework if it gets a error. Now copy this behavior to release resources and decrease SOF device child_count to release SOF device.
Signed-off-by: Rander Wang rander.wang@intel.com Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Reviewed-by: Bard Liao yung-chuan.liao@linux.intel.com Reviewed-by: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com Signed-off-by: Ranjani Sridharan ranjani.sridharan@linux.intel.com Link: https://lore.kernel.org/r/20200825235040.1586478-3-ranjani.sridharan@linux.i... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org
sound/soc/sof/intel/hda-codec.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c index 2c5c451fa19d7..c475955c6eeba 100644 --- a/sound/soc/sof/intel/hda-codec.c +++ b/sound/soc/sof/intel/hda-codec.c @@ -151,7 +151,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, if (!hdev->bus->audio_component) { dev_dbg(sdev->dev, "iDisp hw present but no driver\n");
- return -ENOENT; + goto error; } hda_priv->need_display_power = true; }
@@ -174,7 +174,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, * other return codes without modification */ if (ret == 0)
- ret = -ENOENT; + goto error; }
return ret;
My local build of v5.9.5 broke on this patch.
sound/soc/sof/intel/hda-codec.c: In function 'hda_codec_probe': sound/soc/sof/intel/hda-codec.c:177:4: error: label 'error' used but not defined 177 | goto error; | ^~~~ make[4]: *** [scripts/Makefile.build:283: sound/soc/sof/intel/hda-codec.o] Error 1 make[3]: *** [scripts/Makefile.build:500: sound/soc/sof/intel] Error 2 make[2]: *** [scripts/Makefile.build:500: sound/soc/sof] Error 2 make[1]: *** [scripts/Makefile.build:500: sound/soc] Error 2 make: *** [Makefile:1778: sound] Error 2
There's indeed no error label in v5.9.5. (There is one in v5.10-rc2, I just checked.) Is no-one else running into this?
Thanks,
Paul Bolle
On Thu, Nov 05, 2020 at 02:23:35PM +0100, Paul Bolle wrote:
Greg Kroah-Hartman wrote on Tue 03-11-2020 at 21:32 [+0100]:
From: Rander Wang rander.wang@intel.com
[ Upstream commit 6c63c954e1c52f1262f986f36d95f557c6f8fa94 ]
When hda_codec_probe() doesn't initialize audio component, we disable the codec and keep going. However,the resources are not released. The child_count of SOF device is increased in snd_hdac_ext_bus_device_init but is not decrease in error case, so SOF can't get suspended.
snd_hdac_ext_bus_device_exit will be invoked in HDA framework if it gets a error. Now copy this behavior to release resources and decrease SOF device child_count to release SOF device.
Signed-off-by: Rander Wang rander.wang@intel.com Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Reviewed-by: Bard Liao yung-chuan.liao@linux.intel.com Reviewed-by: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com Signed-off-by: Ranjani Sridharan ranjani.sridharan@linux.intel.com Link: https://lore.kernel.org/r/20200825235040.1586478-3-ranjani.sridharan@linux.i... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org
sound/soc/sof/intel/hda-codec.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c index 2c5c451fa19d7..c475955c6eeba 100644 --- a/sound/soc/sof/intel/hda-codec.c +++ b/sound/soc/sof/intel/hda-codec.c @@ -151,7 +151,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, if (!hdev->bus->audio_component) { dev_dbg(sdev->dev, "iDisp hw present but no driver\n");
- return -ENOENT; + goto error; } hda_priv->need_display_power = true; }
@@ -174,7 +174,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, * other return codes without modification */ if (ret == 0)
- ret = -ENOENT; + goto error; }
return ret;
My local build of v5.9.5 broke on this patch.
sound/soc/sof/intel/hda-codec.c: In function 'hda_codec_probe': sound/soc/sof/intel/hda-codec.c:177:4: error: label 'error' used but not defined 177 | goto error; | ^~~~ make[4]: *** [scripts/Makefile.build:283: sound/soc/sof/intel/hda-codec.o] Error 1 make[3]: *** [scripts/Makefile.build:500: sound/soc/sof/intel] Error 2 make[2]: *** [scripts/Makefile.build:500: sound/soc/sof] Error 2 make[1]: *** [scripts/Makefile.build:500: sound/soc] Error 2 make: *** [Makefile:1778: sound] Error 2
There's indeed no error label in v5.9.5. (There is one in v5.10-rc2, I just checked.) Is no-one else running into this?
It seems that setting CONFIG_SND_SOC_SOF_HDA_AUDIO_CODEC=y is very "difficult", it's not being set by allmodconfig nor is it easy to manually set it up.
I'll revert the patch, but it would be nice to make sure it's easier to test this out too.
On 11/5/20 8:35 AM, Sasha Levin wrote:
On Thu, Nov 05, 2020 at 02:23:35PM +0100, Paul Bolle wrote:
Greg Kroah-Hartman wrote on Tue 03-11-2020 at 21:32 [+0100]:
From: Rander Wang rander.wang@intel.com
[ Upstream commit 6c63c954e1c52f1262f986f36d95f557c6f8fa94 ]
When hda_codec_probe() doesn't initialize audio component, we disable the codec and keep going. However,the resources are not released. The child_count of SOF device is increased in snd_hdac_ext_bus_device_init but is not decrease in error case, so SOF can't get suspended.
snd_hdac_ext_bus_device_exit will be invoked in HDA framework if it gets a error. Now copy this behavior to release resources and decrease SOF device child_count to release SOF device.
Signed-off-by: Rander Wang rander.wang@intel.com Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Reviewed-by: Bard Liao yung-chuan.liao@linux.intel.com Reviewed-by: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com Signed-off-by: Ranjani Sridharan ranjani.sridharan@linux.intel.com Link: https://lore.kernel.org/r/20200825235040.1586478-3-ranjani.sridharan@linux.i...
Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org
sound/soc/sof/intel/hda-codec.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c index 2c5c451fa19d7..c475955c6eeba 100644 --- a/sound/soc/sof/intel/hda-codec.c +++ b/sound/soc/sof/intel/hda-codec.c @@ -151,7 +151,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, if (!hdev->bus->audio_component) { dev_dbg(sdev->dev, "iDisp hw present but no driver\n"); - return -ENOENT; + goto error; } hda_priv->need_display_power = true; } @@ -174,7 +174,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, * other return codes without modification */ if (ret == 0) - ret = -ENOENT; + goto error; }
return ret;
My local build of v5.9.5 broke on this patch.
sound/soc/sof/intel/hda-codec.c: In function 'hda_codec_probe': sound/soc/sof/intel/hda-codec.c:177:4: error: label 'error' used but not defined 177 | goto error; | ^~~~ make[4]: *** [scripts/Makefile.build:283: sound/soc/sof/intel/hda-codec.o] Error 1 make[3]: *** [scripts/Makefile.build:500: sound/soc/sof/intel] Error 2 make[2]: *** [scripts/Makefile.build:500: sound/soc/sof] Error 2 make[1]: *** [scripts/Makefile.build:500: sound/soc] Error 2 make: *** [Makefile:1778: sound] Error 2
There's indeed no error label in v5.9.5. (There is one in v5.10-rc2, I just checked.) Is no-one else running into this?
It seems that setting CONFIG_SND_SOC_SOF_HDA_AUDIO_CODEC=y is very "difficult", it's not being set by allmodconfig nor is it easy to manually set it up.
I'll revert the patch, but it would be nice to make sure it's easier to test this out too.
this issue comes from out-of-order patches, give me a couple of hours to look into this before reverting. thanks!
On Thu, Nov 05, 2020 at 08:47:57AM -0600, Pierre-Louis Bossart wrote:
On 11/5/20 8:35 AM, Sasha Levin wrote:
On Thu, Nov 05, 2020 at 02:23:35PM +0100, Paul Bolle wrote:
Greg Kroah-Hartman wrote on Tue 03-11-2020 at 21:32 [+0100]:
From: Rander Wang rander.wang@intel.com
[ Upstream commit 6c63c954e1c52f1262f986f36d95f557c6f8fa94 ]
When hda_codec_probe() doesn't initialize audio component, we disable the codec and keep going. However,the resources are not released. The child_count of SOF device is increased in snd_hdac_ext_bus_device_init but is not decrease in error case, so SOF can't get suspended.
snd_hdac_ext_bus_device_exit will be invoked in HDA framework if it gets a error. Now copy this behavior to release resources and decrease SOF device child_count to release SOF device.
Signed-off-by: Rander Wang rander.wang@intel.com Reviewed-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Reviewed-by: Bard Liao yung-chuan.liao@linux.intel.com Reviewed-by: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com Signed-off-by: Ranjani Sridharan ranjani.sridharan@linux.intel.com Link: https://lore.kernel.org/r/20200825235040.1586478-3-ranjani.sridharan@linux.i...
Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org
sound/soc/sof/intel/hda-codec.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c index 2c5c451fa19d7..c475955c6eeba 100644 --- a/sound/soc/sof/intel/hda-codec.c +++ b/sound/soc/sof/intel/hda-codec.c @@ -151,7 +151,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, if (!hdev->bus->audio_component) { dev_dbg(sdev->dev, "iDisp hw present but no driver\n"); - return -ENOENT; + goto error; } hda_priv->need_display_power = true; } @@ -174,7 +174,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address, * other return codes without modification */ if (ret == 0) - ret = -ENOENT; + goto error; }
return ret;
My local build of v5.9.5 broke on this patch.
sound/soc/sof/intel/hda-codec.c: In function 'hda_codec_probe': sound/soc/sof/intel/hda-codec.c:177:4: error: label 'error' used but not defined 177 | goto error; | ^~~~ make[4]: *** [scripts/Makefile.build:283: sound/soc/sof/intel/hda-codec.o] Error 1 make[3]: *** [scripts/Makefile.build:500: sound/soc/sof/intel] Error 2 make[2]: *** [scripts/Makefile.build:500: sound/soc/sof] Error 2 make[1]: *** [scripts/Makefile.build:500: sound/soc] Error 2 make: *** [Makefile:1778: sound] Error 2
There's indeed no error label in v5.9.5. (There is one in v5.10-rc2, I just checked.) Is no-one else running into this?
It seems that setting CONFIG_SND_SOC_SOF_HDA_AUDIO_CODEC=y is very "difficult", it's not being set by allmodconfig nor is it easy to manually set it up.
I'll revert the patch, but it would be nice to make sure it's easier to test this out too.
this issue comes from out-of-order patches, give me a couple of hours to look into this before reverting. thanks!
Sure! Thanks for looking into this.
I would recommend adding this commit to 5.9-stable:
11ec0edc6408a ('ASoC: SOF: Intel: hda-codec: move unused label to correct position')
I just tried with 5.9.5 and the compilation error is solved with this commit.
It was initially intended to solve a minor 'defined but not used' issue, which somehow became a bad 'used but not defined' one. Probably a bad git merge I did, sorry about that.
Thanks! -Pierre
Will go do that now and push out a new release, thanks!
greg k-h
From: Andy Lutomirski luto@kernel.org
[ Upstream commit ab2dd173330a3f07142e68cd65682205036cd00f ]
The ptrace() test forgot to reap its child. Reap it.
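For context, reaping a child is the usual wait()/waitpid() pattern shown below; this is a generic sketch, not the selftest code itself.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	int status;
	pid_t child = fork();

	if (child == 0)
		_exit(0);			/* child does its work and exits */

	if (waitpid(child, &status, 0) < 0) {
		perror("waitpid");
		return 1;
	}
	if (!WIFEXITED(status))
		printf("[WARN]\tChild didn't exit cleanly.\n");
	return 0;
}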
Signed-off-by: Andy Lutomirski luto@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Link: https://lore.kernel.org/r/e7700a503f30e79ab35a63103938a19893dbeff2.159846115... Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/x86/fsgsbase.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/tools/testing/selftests/x86/fsgsbase.c b/tools/testing/selftests/x86/fsgsbase.c index 9983195535237..0056e2597f53a 100644 --- a/tools/testing/selftests/x86/fsgsbase.c +++ b/tools/testing/selftests/x86/fsgsbase.c @@ -517,6 +517,9 @@ static void test_ptrace_write_gsbase(void)
END: ptrace(PTRACE_CONT, child, NULL, NULL); + wait(&status); + if (!WIFEXITED(status)) + printf("[WARN]\tChild didn't exit cleanly.\n"); }
int main()
From: Enric Balletbo i Serra enric.balletbo@collabora.com
[ Upstream commit c5589b39549d1875bb506da473bf4580c959db8c ]
In an eDP application, HPD is not required and is useless on most bridge chips. If HPD is not used, we need to set the initial status to connected; otherwise the connector created by the drm_bridge_connector API remains in an unknown state.
Reviewed-by: Laurent Pinchart laurent.pinchart@ideasonboard.com Acked-by: Sam Ravnborg sam@ravnborg.org Signed-off-by: Enric Balletbo i Serra enric.balletbo@collabora.com Reviewed-by: Bilal Wasim bwasim.lkml@gmail.com Tested-by: Bilal Wasim bwasim.lkml@gmail.com Signed-off-by: Sam Ravnborg sam@ravnborg.org Link: https://patchwork.freedesktop.org/patch/msgid/20200826081526.674866-2-enric.... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/drm_bridge_connector.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/drm_bridge_connector.c b/drivers/gpu/drm/drm_bridge_connector.c index c6994fe673f31..a58cbde59c34a 100644 --- a/drivers/gpu/drm/drm_bridge_connector.c +++ b/drivers/gpu/drm/drm_bridge_connector.c @@ -187,6 +187,7 @@ drm_bridge_connector_detect(struct drm_connector *connector, bool force) case DRM_MODE_CONNECTOR_DPI: case DRM_MODE_CONNECTOR_LVDS: case DRM_MODE_CONNECTOR_DSI: + case DRM_MODE_CONNECTOR_eDP: status = connector_status_connected; break; default:
From: Hans Verkuil hverkuil-cisco@xs4all.nl
[ Upstream commit b305dfe2e93434b12d438434461b709641f62af4 ]
The default RGB quantization range for BT.2020 is full range (just as for all the other RGB pixel encodings), not limited range.
Update the V4L2_MAP_QUANTIZATION_DEFAULT macro and documentation accordingly.
Also mention that HSV is always full range and cannot be limited range.
When RGB BT2020 was introduced in V4L2 it was not clear whether it should be limited or full range, but full range is the right (and consistent) choice.
Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- .../media/v4l/colorspaces-defs.rst | 9 ++++----- .../media/v4l/colorspaces-details.rst | 5 ++--- include/uapi/linux/videodev2.h | 17 ++++++++--------- 3 files changed, 14 insertions(+), 17 deletions(-)
diff --git a/Documentation/userspace-api/media/v4l/colorspaces-defs.rst b/Documentation/userspace-api/media/v4l/colorspaces-defs.rst index 01404e1f609a7..4089f426258d6 100644 --- a/Documentation/userspace-api/media/v4l/colorspaces-defs.rst +++ b/Documentation/userspace-api/media/v4l/colorspaces-defs.rst @@ -36,8 +36,7 @@ whole range, 0-255, dividing the angular value by 1.41. The enum :c:type:`v4l2_hsv_encoding` specifies which encoding is used.
.. note:: The default R'G'B' quantization is full range for all - colorspaces except for BT.2020 which uses limited range R'G'B' - quantization. + colorspaces. HSV formats are always full range.
.. tabularcolumns:: |p{6.7cm}|p{10.8cm}|
@@ -169,8 +168,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum - Details * - ``V4L2_QUANTIZATION_DEFAULT`` - Use the default quantization encoding as defined by the - colorspace. This is always full range for R'G'B' (except for the - BT.2020 colorspace) and HSV. It is usually limited range for Y'CbCr. + colorspace. This is always full range for R'G'B' and HSV. + It is usually limited range for Y'CbCr. * - ``V4L2_QUANTIZATION_FULL_RANGE`` - Use the full range quantization encoding. I.e. the range [0…1] is mapped to [0…255] (with possible clipping to [1…254] to avoid the @@ -180,4 +179,4 @@ whole range, 0-255, dividing the angular value by 1.41. The enum * - ``V4L2_QUANTIZATION_LIM_RANGE`` - Use the limited range quantization encoding. I.e. the range [0…1] is mapped to [16…235]. Cb and Cr are mapped from [-0.5…0.5] to - [16…240]. + [16…240]. Limited Range cannot be used with HSV. diff --git a/Documentation/userspace-api/media/v4l/colorspaces-details.rst b/Documentation/userspace-api/media/v4l/colorspaces-details.rst index 300c5d2e7d0f0..cf1b825ec34a7 100644 --- a/Documentation/userspace-api/media/v4l/colorspaces-details.rst +++ b/Documentation/userspace-api/media/v4l/colorspaces-details.rst @@ -377,9 +377,8 @@ Colorspace BT.2020 (V4L2_COLORSPACE_BT2020) The :ref:`itu2020` standard defines the colorspace used by Ultra-high definition television (UHDTV). The default transfer function is ``V4L2_XFER_FUNC_709``. The default Y'CbCr encoding is -``V4L2_YCBCR_ENC_BT2020``. The default R'G'B' quantization is limited -range (!), and so is the default Y'CbCr quantization. The chromaticities -of the primary colors and the white reference are: +``V4L2_YCBCR_ENC_BT2020``. The default Y'CbCr quantization is limited range. +The chromaticities of the primary colors and the white reference are:
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h index 235db7754606d..f717826d5d7c0 100644 --- a/include/uapi/linux/videodev2.h +++ b/include/uapi/linux/videodev2.h @@ -373,9 +373,9 @@ enum v4l2_hsv_encoding {
enum v4l2_quantization { /* - * The default for R'G'B' quantization is always full range, except - * for the BT2020 colorspace. For Y'CbCr the quantization is always - * limited range, except for COLORSPACE_JPEG: this is full range. + * The default for R'G'B' quantization is always full range. + * For Y'CbCr the quantization is always limited range, except + * for COLORSPACE_JPEG: this is full range. */ V4L2_QUANTIZATION_DEFAULT = 0, V4L2_QUANTIZATION_FULL_RANGE = 1, @@ -384,14 +384,13 @@ enum v4l2_quantization {
/* * Determine how QUANTIZATION_DEFAULT should map to a proper quantization. - * This depends on whether the image is RGB or not, the colorspace and the - * Y'CbCr encoding. + * This depends on whether the image is RGB or not, the colorspace. + * The Y'CbCr encoding is not used anymore, but is still there for backwards + * compatibility. */ #define V4L2_MAP_QUANTIZATION_DEFAULT(is_rgb_or_hsv, colsp, ycbcr_enc) \ - (((is_rgb_or_hsv) && (colsp) == V4L2_COLORSPACE_BT2020) ? \ - V4L2_QUANTIZATION_LIM_RANGE : \ - (((is_rgb_or_hsv) || (colsp) == V4L2_COLORSPACE_JPEG) ? \ - V4L2_QUANTIZATION_FULL_RANGE : V4L2_QUANTIZATION_LIM_RANGE)) + (((is_rgb_or_hsv) || (colsp) == V4L2_COLORSPACE_JPEG) ? \ + V4L2_QUANTIZATION_FULL_RANGE : V4L2_QUANTIZATION_LIM_RANGE)
/* * Deprecated names for opRGB colorspace (IEC 61966-2-5)
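For illustration, a small userspace sketch of how the updated V4L2_MAP_QUANTIZATION_DEFAULT() macro resolves after this change; the values follow directly from the new macro above, but the program itself is not part of the patch.

#include <stdio.h>
#include <linux/videodev2.h>

int main(void)
{
	/* R'G'B' BT.2020 now defaults to full range (was limited range) */
	enum v4l2_quantization rgb =
		V4L2_MAP_QUANTIZATION_DEFAULT(1, V4L2_COLORSPACE_BT2020,
					      V4L2_YCBCR_ENC_BT2020);

	/* Y'CbCr BT.2020 still defaults to limited range */
	enum v4l2_quantization ycbcr =
		V4L2_MAP_QUANTIZATION_DEFAULT(0, V4L2_COLORSPACE_BT2020,
					      V4L2_YCBCR_ENC_BT2020);

	printf("R'G'B' BT.2020 default: %s\n",
	       rgb == V4L2_QUANTIZATION_FULL_RANGE ? "full range" : "limited range");
	printf("Y'CbCr BT.2020 default: %s\n",
	       ycbcr == V4L2_QUANTIZATION_LIM_RANGE ? "limited range" : "full range");
	return 0;
}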
From: Akshu Agrawal akshu.agrawal@amd.com
[ Upstream commit f7660445c8e7fda91e8b944128554249d886b1d4 ]
While the driver waits for DAIs to be probed and retries probing, log the messages at debug level instead of error level.
Signed-off-by: Akshu Agrawal akshu.agrawal@amd.com Link: https://lore.kernel.org/r/20200826185454.5545-1-akshu.agrawal@amd.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/amd/acp3x-rt5682-max9836.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/sound/soc/amd/acp3x-rt5682-max9836.c b/sound/soc/amd/acp3x-rt5682-max9836.c index 406526e79af34..1a4e8ca0f99c2 100644 --- a/sound/soc/amd/acp3x-rt5682-max9836.c +++ b/sound/soc/amd/acp3x-rt5682-max9836.c @@ -472,12 +472,17 @@ static int acp3x_probe(struct platform_device *pdev)
ret = devm_snd_soc_register_card(&pdev->dev, card); if (ret) { - dev_err(&pdev->dev, + if (ret != -EPROBE_DEFER) + dev_err(&pdev->dev, "devm_snd_soc_register_card(%s) failed: %d\n", card->name, ret); - return ret; + else + dev_dbg(&pdev->dev, + "devm_snd_soc_register_card(%s) probe deferred: %d\n", + card->name, ret); } - return 0; + + return ret; }
static const struct acpi_device_id acp3x_audio_acpi_match[] = {
From: Marek Szyprowski m.szyprowski@samsung.com
[ Upstream commit 7cd7edb89437457ec36ffdbb970cc314d00c4aba ]
The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function returns the number of the created entries in the DMA address space. However the subsequent calls to the dma_sync_sg_for_{device,cpu}() and dma_unmap_sg must be called with the original number of the entries passed to the dma_map_sg().
struct sg_table is a common structure used for describing a non-contiguous memory buffer, used commonly in the DRM and graphics subsystems. It consists of a scatterlist with memory pages and DMA addresses (sgl entry), as well as the number of scatterlist entries: CPU pages (orig_nents entry) and DMA mapped pages (nents entry).
It turned out that it was a common mistake to misuse nents and orig_nents entries, calling DMA-mapping functions with a wrong number of entries or ignoring the number of mapped entries returned by the dma_map_sg() function.
To avoid such issues, let's use the common dma-mapping wrappers operating directly on struct sg_table objects and use scatterlist page iterators where possible. This almost always hides references to the nents and orig_nents entries, making the code robust, easier to follow, and copy/paste safe.
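As a rough sketch of the pattern these conversions move drivers to (illustrative only; dev, sgt and the DMA direction are placeholders, not taken from the driver below):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_sgtable(struct device *dev, struct sg_table *sgt)
{
	struct scatterlist *sg;
	unsigned int len = 0;
	int i, ret;

	/* returns 0 or a negative errno; nents/orig_nents bookkeeping is
	 * handled inside the wrapper, so callers can no longer mix them up */
	ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	if (ret)
		return ret;

	/* walk the DMA-mapped entries without touching sgt->nents directly */
	for_each_sgtable_dma_sg(sgt, sg, i)
		len += sg_dma_len(sg);

	/* unmap with the same sg_table; no entry count needs to be passed */
	dma_unmap_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);

	return len ? 0 : -EINVAL;
}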
Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com Link: https://lore.kernel.org/r/20200826063316.23486-29-m.szyprowski@samsung.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/misc/fastrpc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index 7939c55daceb2..9d68677493163 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -518,7 +518,7 @@ fastrpc_map_dma_buf(struct dma_buf_attachment *attachment,
table = &a->sgt;
- if (!dma_map_sg(attachment->dev, table->sgl, table->nents, dir)) + if (!dma_map_sgtable(attachment->dev, table, dir, 0)) return ERR_PTR(-ENOMEM);
return table; @@ -528,7 +528,7 @@ static void fastrpc_unmap_dma_buf(struct dma_buf_attachment *attach, struct sg_table *table, enum dma_data_direction dir) { - dma_unmap_sg(attach->dev, table->sgl, table->nents, dir); + dma_unmap_sgtable(attach->dev, table, dir, 0); }
static void fastrpc_release(struct dma_buf *dmabuf)
From: Jérôme Pouiller jerome.pouiller@silabs.com
[ Upstream commit ce3653a8d3db096aa163fc80239d8ec1305c81fa ]
The trace below can appear:
[83613.832200] INFO: trying to register non-static key.
[83613.837248] the code is fine but needs lockdep annotation.
[83613.842808] turning off the locking correctness validator.
[83613.848375] CPU: 3 PID: 141 Comm: kworker/3:2H Tainted: G O 5.6.13-silabs15 #2
[83613.857019] Hardware name: BCM2835
[83613.860605] Workqueue: events_highpri bh_work [wfx]
[83613.865552] Backtrace:
[83613.868041] [<c010f2cc>] (dump_backtrace) from [<c010f7b8>] (show_stack+0x20/0x24)
[83613.881463] [<c010f798>] (show_stack) from [<c0d82138>] (dump_stack+0xe8/0x114)
[83613.888882] [<c0d82050>] (dump_stack) from [<c01a02ec>] (register_lock_class+0x748/0x768)
[83613.905035] [<c019fba4>] (register_lock_class) from [<c019da04>] (__lock_acquire+0x88/0x13dc)
[83613.924192] [<c019d97c>] (__lock_acquire) from [<c019f6a4>] (lock_acquire+0xe8/0x274)
[83613.942644] [<c019f5bc>] (lock_acquire) from [<c0daa5dc>] (_raw_spin_lock_irqsave+0x58/0x6c)
[83613.961714] [<c0daa584>] (_raw_spin_lock_irqsave) from [<c0ab3248>] (skb_dequeue+0x24/0x78)
[83613.974967] [<c0ab3224>] (skb_dequeue) from [<bf330db0>] (wfx_tx_queues_get+0x96c/0x1294 [wfx])
[83613.989728] [<bf330444>] (wfx_tx_queues_get [wfx]) from [<bf320454>] (bh_work+0x454/0x26d8 [wfx])
[83614.009337] [<bf320000>] (bh_work [wfx]) from [<c014c920>] (process_one_work+0x23c/0x7ec)
[83614.028141] [<c014c6e4>] (process_one_work) from [<c014cf1c>] (worker_thread+0x4c/0x55c)
[83614.046861] [<c014ced0>] (worker_thread) from [<c0154c04>] (kthread+0x138/0x168)
[83614.064876] [<c0154acc>] (kthread) from [<c01010b4>] (ret_from_fork+0x14/0x20)
[83614.072200] Exception stack(0xecad3fb0 to 0xecad3ff8)
[83614.077323] 3fa0: 00000000 00000000 00000000 00000000
[83614.085620] 3fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[83614.093914] 3fe0: 00000000 00000000 00000000 00000000 00000013 00000000
Indeed, the code of wfx_add_interface() shows that the interface is enabled too early, so the spinlock associated with some skb_queue may not yet be initialized when wfx_tx_queues_get() is called.
Signed-off-by: Jérôme Pouiller jerome.pouiller@silabs.com Link: https://lore.kernel.org/r/20200825085828.399505-8-Jerome.Pouiller@silabs.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/staging/wfx/sta.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/drivers/staging/wfx/sta.c b/drivers/staging/wfx/sta.c index 7dace7c17bf5c..536c62001c709 100644 --- a/drivers/staging/wfx/sta.c +++ b/drivers/staging/wfx/sta.c @@ -761,17 +761,6 @@ int wfx_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif) return -EOPNOTSUPP; }
- for (i = 0; i < ARRAY_SIZE(wdev->vif); i++) { - if (!wdev->vif[i]) { - wdev->vif[i] = vif; - wvif->id = i; - break; - } - } - if (i == ARRAY_SIZE(wdev->vif)) { - mutex_unlock(&wdev->conf_mutex); - return -EOPNOTSUPP; - } // FIXME: prefer use of container_of() to get vif wvif->vif = vif; wvif->wdev = wdev; @@ -788,12 +777,22 @@ int wfx_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif) init_completion(&wvif->scan_complete); INIT_WORK(&wvif->scan_work, wfx_hw_scan_work);
- mutex_unlock(&wdev->conf_mutex); + wfx_tx_queues_init(wvif); + wfx_tx_policy_init(wvif); + + for (i = 0; i < ARRAY_SIZE(wdev->vif); i++) { + if (!wdev->vif[i]) { + wdev->vif[i] = vif; + wvif->id = i; + break; + } + } + WARN(i == ARRAY_SIZE(wdev->vif), "try to instantiate more vif than supported");
hif_set_macaddr(wvif, vif->addr);
- wfx_tx_queues_init(wvif); - wfx_tx_policy_init(wvif); + mutex_unlock(&wdev->conf_mutex); + wvif = NULL; while ((wvif = wvif_iterate(wdev, wvif)) != NULL) { // Combo mode does not support Block Acks. We can re-enable them @@ -825,6 +824,7 @@ void wfx_remove_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif) wvif->vif = NULL;
mutex_unlock(&wdev->conf_mutex); + wvif = NULL; while ((wvif = wvif_iterate(wdev, wvif)) != NULL) { // Combo mode does not support Block Acks. We can re-enable them
From: Xia Jiang xia.jiang@mediatek.com
[ Upstream commit 5095a6413a0cf896ab468009b6142cb0fe617e66 ]
Check the sizes of already-created buffers in mtk_jpeg_queue_setup().
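The check follows the usual vb2 queue_setup contract, sketched below for a single-plane driver; EXAMPLE_MIN_SIZE is a hypothetical driver-specific minimum, not a value taken from mtk-jpeg.

#include <media/videobuf2-v4l2.h>

#define EXAMPLE_MIN_SIZE (1024 * 1024)	/* hypothetical minimum buffer size */

static int example_queue_setup(struct vb2_queue *q,
			       unsigned int *num_buffers,
			       unsigned int *num_planes,
			       unsigned int sizes[],
			       struct device *alloc_devs[])
{
	/* VIDIOC_CREATE_BUFS: the caller already picked plane sizes, so only
	 * verify they are large enough for the current format */
	if (*num_planes) {
		if (sizes[0] < EXAMPLE_MIN_SIZE)
			return -EINVAL;
		return 0;
	}

	/* VIDIOC_REQBUFS: report the plane count and minimum sizes */
	*num_planes = 1;
	sizes[0] = EXAMPLE_MIN_SIZE;
	return 0;
}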
Reviewed-by: Tomasz Figa tfiga@chromium.org Signed-off-by: Xia Jiang xia.jiang@mediatek.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c index 61fed1e35a005..b1ca4e3adae32 100644 --- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c +++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c @@ -571,6 +571,13 @@ static int mtk_jpeg_queue_setup(struct vb2_queue *q, if (!q_data) return -EINVAL;
+ if (*num_planes) { + for (i = 0; i < *num_planes; i++) + if (sizes[i] < q_data->sizeimage[i]) + return -EINVAL; + return 0; + } + *num_planes = q_data->fmt->colplanes; for (i = 0; i < q_data->fmt->colplanes; i++) { sizes[i] = q_data->sizeimage[i];
From: Badhri Jagan Sridharan badhri@google.com
[ Upstream commit 6bbe2a90a0bb4af8dd99c3565e907fe9b5e7fd88 ]
The patch addresses the compliance test failures while running TD.PD.CP.E3, TD.PD.CP.E4, TD.PD.CP.E5 of the "Deterministic PD Compliance MOI" test plan published in https://www.usb.org/usbc. For a product to be Type-C compliant, it's expected that these tests are run on usb.org certified Type-C compliance tester as mentioned in https://www.usb.org/usbc.
The purpose of the tests TD.PD.CP.E3, TD.PD.CP.E4, TD.PD.CP.E5 is to verify the PR_SWAP response of the device. While doing so, the test asserts that a Source Capabilities message is NOT received from the test device within tSwapSourceStart min (20 ms) from the time the last bit of the GoodCRC corresponding to the PS_RDY message sent by the UUT was sent. If one is received, the test fails.
This is in line with the requirements from the USB Power Delivery Specification Revision 3.0, Version 1.2: "6.6.8.1 SwapSourceStartTimer The SwapSourceStartTimer Shall be used by the new Source, after a Power Role Swap or Fast Role Swap, to ensure that it does not send Source_Capabilities Message before the new Sink is ready to receive the Source_Capabilities Message. The new Source Shall Not send the Source_Capabilities Message earlier than tSwapSourceStart after the last bit of the EOP of GoodCRC Message sent in response to the PS_RDY Message sent by the new Source indicating that its power supply is ready."
The patch makes sure that TCPM does not send the Source_Capabilities Message within tSwapSourceStart(20ms) by transitioning into SRC_STARTUP only after tSwapSourceStart(20ms).
Signed-off-by: Badhri Jagan Sridharan badhri@google.com Reviewed-by: Guenter Roeck linux@roeck-us.net Reviewed-by: Heikki Krogerus heikki.krogerus@linux.intel.com Link: https://lore.kernel.org/r/20200817183828.1895015-1-badhri@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/typec/tcpm/tcpm.c | 2 +- include/linux/usb/pd.h | 1 + 2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c index a48e3f90d1961..1e676ee44c937 100644 --- a/drivers/usb/typec/tcpm/tcpm.c +++ b/drivers/usb/typec/tcpm/tcpm.c @@ -3573,7 +3573,7 @@ static void run_state_machine(struct tcpm_port *port) */ tcpm_set_pwr_role(port, TYPEC_SOURCE); tcpm_pd_send_control(port, PD_CTRL_PS_RDY); - tcpm_set_state(port, SRC_STARTUP, 0); + tcpm_set_state(port, SRC_STARTUP, PD_T_SWAP_SRC_START); break;
case VCONN_SWAP_ACCEPT: diff --git a/include/linux/usb/pd.h b/include/linux/usb/pd.h index b6c233e79bd45..1df895e4680b2 100644 --- a/include/linux/usb/pd.h +++ b/include/linux/usb/pd.h @@ -473,6 +473,7 @@ static inline unsigned int rdo_max_power(u32 rdo) #define PD_T_ERROR_RECOVERY 100 /* minimum 25 is insufficient */ #define PD_T_SRCSWAPSTDBY 625 /* Maximum of 650ms */ #define PD_T_NEWSRC 250 /* Maximum of 275ms */ +#define PD_T_SWAP_SRC_START 20 /* Minimum of 20ms */
#define PD_T_DRP_TRY 100 /* 75 - 150 ms */ #define PD_T_DRP_TRYWAIT 600 /* 400 - 800 ms */
From: Tom Rix trix@redhat.com
[ Upstream commit 780d815dcc9b34d93ae69385a8465c38d423ff0f ]
clang static analysis reports this problem
tw5864-video.c:773:32: warning: The left expression of the compound assignment is an uninitialized value. The computed value will also be garbage
        fintv->stepwise.max.numerator *= std_max_fps;
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
stepwise.max is set with frameinterval, which comes from
  ret = tw5864_frameinterval_get(input, &frameinterval);
  fintv->stepwise.step = frameinterval;
  fintv->stepwise.min = frameinterval;
  fintv->stepwise.max = frameinterval;
  fintv->stepwise.max.numerator *= std_max_fps;
When tw5864_frameinterval_get() fails, frameinterval is not set. So check the return status; the same fix is applied to a similar problem in tw5864_g_parm().
Signed-off-by: Tom Rix trix@redhat.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/pci/tw5864/tw5864-video.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/drivers/media/pci/tw5864/tw5864-video.c b/drivers/media/pci/tw5864/tw5864-video.c index ec1e06da7e4fb..a65114e7ca346 100644 --- a/drivers/media/pci/tw5864/tw5864-video.c +++ b/drivers/media/pci/tw5864/tw5864-video.c @@ -767,6 +767,9 @@ static int tw5864_enum_frameintervals(struct file *file, void *priv, fintv->type = V4L2_FRMIVAL_TYPE_STEPWISE;
ret = tw5864_frameinterval_get(input, &frameinterval); + if (ret) + return ret; + fintv->stepwise.step = frameinterval; fintv->stepwise.min = frameinterval; fintv->stepwise.max = frameinterval; @@ -785,6 +788,9 @@ static int tw5864_g_parm(struct file *file, void *priv, cp->capability = V4L2_CAP_TIMEPERFRAME;
ret = tw5864_frameinterval_get(input, &cp->timeperframe); + if (ret) + return ret; + cp->timeperframe.numerator *= input->frame_interval; cp->capturemode = 0; cp->readbuffers = 2;
From: Sidong Yang realwakka@gmail.com
[ Upstream commit 05ca530268a9d0ab3547e7b288635e35990a77c4 ]
This patch avoids the warning in vkms_get_vblank_timestamp() when vblanks aren't enabled. When running the igt test kms_cursor_crc just after loading the vkms module, the warning below is raised. The initial value of the vblank time is zero, and hrtimer.node.expires is also zero if vblanks weren't enabled before. vkms isn't real hardware but a virtual hardware module, so it can't generate a reasonable timestamp when the hrtimer is off; it's best to grab the current time.
[106444.464503] [IGT] kms_cursor_crc: starting subtest pipe-A-cursor-size-change
[106444.471475] WARNING: CPU: 0 PID: 10109 at vkms_get_vblank_timestamp+0x42/0x50 [vkms]
[106444.471511] CPU: 0 PID: 10109 Comm: kms_cursor_crc Tainted: G W OE 5.9.0-rc1+ #6
[106444.471514] RIP: 0010:vkms_get_vblank_timestamp+0x42/0x50 [vkms]
[106444.471528] Call Trace:
[106444.471551] drm_get_last_vbltimestamp+0xb9/0xd0 [drm]
[106444.471566] drm_reset_vblank_timestamp+0x63/0xe0 [drm]
[106444.471579] drm_crtc_vblank_on+0x85/0x150 [drm]
[106444.471582] vkms_crtc_atomic_enable+0xe/0x10 [vkms]
[106444.471592] drm_atomic_helper_commit_modeset_enables+0x1db/0x230 [drm_kms_helper]
[106444.471594] vkms_atomic_commit_tail+0x38/0xc0 [vkms]
[106444.471601] commit_tail+0x97/0x130 [drm_kms_helper]
[106444.471608] drm_atomic_helper_commit+0x117/0x140 [drm_kms_helper]
[106444.471622] drm_atomic_commit+0x4a/0x50 [drm]
[106444.471629] drm_atomic_helper_set_config+0x63/0xb0 [drm_kms_helper]
[106444.471642] drm_mode_setcrtc+0x1d9/0x7b0 [drm]
[106444.471654] ? drm_mode_getcrtc+0x1a0/0x1a0 [drm]
[106444.471666] drm_ioctl_kernel+0xb6/0x100 [drm]
[106444.471677] drm_ioctl+0x3ad/0x470 [drm]
[106444.471688] ? drm_mode_getcrtc+0x1a0/0x1a0 [drm]
[106444.471692] ? tomoyo_file_ioctl+0x19/0x20
[106444.471694] __x64_sys_ioctl+0x96/0xd0
[106444.471697] do_syscall_64+0x37/0x80
[106444.471699] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Cc: Daniel Vetter daniel@ffwll.ch Cc: Rodrigo Siqueira rodrigosiqueiramelo@gmail.com Cc: Haneen Mohammed hamohammed.sa@gmail.com Cc: Melissa Wen melissa.srw@gmail.com
Signed-off-by: Sidong Yang realwakka@gmail.com Reviewed-by: Melissa Wen melissa.srw@gmail.com Signed-off-by: Rodrigo Siqueira rodrigosiqueiramelo@gmail.com Link: https://patchwork.freedesktop.org/patch/msgid/20200828124553.2178-1-realwakk... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/vkms/vkms_crtc.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c index ac85e17428f88..09c012d54d58f 100644 --- a/drivers/gpu/drm/vkms/vkms_crtc.c +++ b/drivers/gpu/drm/vkms/vkms_crtc.c @@ -86,6 +86,11 @@ static bool vkms_get_vblank_timestamp(struct drm_crtc *crtc, struct vkms_output *output = &vkmsdev->output; struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
+ if (!READ_ONCE(vblank->enabled)) { + *vblank_time = ktime_get(); + return true; + } + *vblank_time = READ_ONCE(output->vblank_hrtimer.node.expires);
if (WARN_ON(*vblank_time == vblank->time))
From: Hans Verkuil hverkuil@xs4all.nl
[ Upstream commit 49b20d981d723fae5a93843c617af2b2c23611ec ]
1) The numerator and/or denominator might be 0; in that case, fall back to the default frame interval. This is per the spec, and the old behaviour caused a v4l2-compliance failure.
2) The updated frame interval wasn't returned by the s_frame_interval subdev op.
Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Reviewed-by: Luca Ceresoli luca@lucaceresoli.net Signed-off-by: Sakari Ailus sakari.ailus@linux.intel.com Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/i2c/imx274.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/media/i2c/imx274.c b/drivers/media/i2c/imx274.c index 6011cec5e351d..e6aa9f32b6a83 100644 --- a/drivers/media/i2c/imx274.c +++ b/drivers/media/i2c/imx274.c @@ -1235,6 +1235,8 @@ static int imx274_s_frame_interval(struct v4l2_subdev *sd, ret = imx274_set_frame_interval(imx274, fi->interval);
if (!ret) { + fi->interval = imx274->frame_interval; + /* * exposure time range is decided by frame interval * need to update it after frame interval changes @@ -1730,9 +1732,9 @@ static int imx274_set_frame_interval(struct stimx274 *priv, __func__, frame_interval.numerator, frame_interval.denominator);
- if (frame_interval.numerator == 0) { - err = -EINVAL; - goto fail; + if (frame_interval.numerator == 0 || frame_interval.denominator == 0) { + frame_interval.denominator = IMX274_DEF_FRAME_RATE; + frame_interval.numerator = 1; }
req_frame_rate = (u32)(frame_interval.denominator
From: Madhuparna Bhowmik madhuparnabhowmik10@gmail.com
[ Upstream commit 87d7ad089b318b4f319bf57f1daa64eb6d1d10ad ]
via_save_pcictrlreg() should be called with host->lock held, as it writes to pm_pcictrl_reg; otherwise there can be a race condition between via_sd_suspend() and via_sdc_card_detect(). The same pattern is used in via_reset_pcictrl(), where via_save_pcictrlreg() is called with host->lock held.
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Madhuparna Bhowmik madhuparnabhowmik10@gmail.com Link: https://lore.kernel.org/r/20200822061528.7035-1-madhuparnabhowmik10@gmail.co... Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mmc/host/via-sdmmc.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c index 49dab9f42b6d6..9b755ea0fa03c 100644 --- a/drivers/mmc/host/via-sdmmc.c +++ b/drivers/mmc/host/via-sdmmc.c @@ -1257,11 +1257,14 @@ static void __maybe_unused via_init_sdc_pm(struct via_crdr_mmc_host *host) static int __maybe_unused via_sd_suspend(struct device *dev) { struct via_crdr_mmc_host *host; + unsigned long flags;
host = dev_get_drvdata(dev);
+ spin_lock_irqsave(&host->lock, flags); via_save_pcictrlreg(host); via_save_sdcreg(host); + spin_unlock_irqrestore(&host->lock, flags);
device_wakeup_enable(dev);
From: Antonio Borneo antonio.borneo@st.com
[ Upstream commit c6d94e37bdbb6dfe7e581e937a915ab58399b8a5 ]
Current code enables the HS clock when video mode is started or to send out an HS command, and disables the HS clock to send out an LP command. This is not what the DSI spec specifies.
Enable the HS clock both in command and in video mode. Set automatic HS clock management for panels and devices that support a non-continuous HS clock.
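For reference, a peripheral opts into this by setting MIPI_DSI_CLOCK_NON_CONTINUOUS in its mode_flags; the sketch below is hypothetical (lane count and pixel format are placeholders), not code from the bridge driver.

#include <drm/drm_mipi_dsi.h>

static void example_configure_dsi(struct mipi_dsi_device *dsi)
{
	dsi->lanes = 4;				/* placeholder */
	dsi->format = MIPI_DSI_FMT_RGB888;	/* placeholder */
	/* the device tolerates gaps in the HS clock, so the host may use
	 * automatic HS clock management (AUTO_CLKLANE_CTRL) */
	dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_CLOCK_NON_CONTINUOUS;
}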
Signed-off-by: Antonio Borneo antonio.borneo@st.com Tested-by: Philippe Cornu philippe.cornu@st.com Reviewed-by: Philippe Cornu philippe.cornu@st.com Acked-by: Neil Armstrong narmstrong@baylibre.com Signed-off-by: Neil Armstrong narmstrong@baylibre.com Link: https://patchwork.freedesktop.org/patch/msgid/20200701194234.18123-1-yannick... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c index d580b2aa4ce98..979acaa90d002 100644 --- a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c +++ b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c @@ -365,7 +365,6 @@ static void dw_mipi_message_config(struct dw_mipi_dsi *dsi, if (lpm) val |= CMD_MODE_ALL_LP;
- dsi_write(dsi, DSI_LPCLK_CTRL, lpm ? 0 : PHY_TXREQUESTCLKHS); dsi_write(dsi, DSI_CMD_MODE_CFG, val); }
@@ -541,16 +540,22 @@ static void dw_mipi_dsi_video_mode_config(struct dw_mipi_dsi *dsi) static void dw_mipi_dsi_set_mode(struct dw_mipi_dsi *dsi, unsigned long mode_flags) { + u32 val; + dsi_write(dsi, DSI_PWR_UP, RESET);
if (mode_flags & MIPI_DSI_MODE_VIDEO) { dsi_write(dsi, DSI_MODE_CFG, ENABLE_VIDEO_MODE); dw_mipi_dsi_video_mode_config(dsi); - dsi_write(dsi, DSI_LPCLK_CTRL, PHY_TXREQUESTCLKHS); } else { dsi_write(dsi, DSI_MODE_CFG, ENABLE_CMD_MODE); }
+ val = PHY_TXREQUESTCLKHS; + if (dsi->mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS) + val |= AUTO_CLKLANE_CTRL; + dsi_write(dsi, DSI_LPCLK_CTRL, val); + dsi_write(dsi, DSI_PWR_UP, POWERUP); }
From: Dmitry Osipenko digetx@gmail.com
[ Upstream commit 317da69d10b0247c4042354eb90c75b81620ce9d ]
This patch fixes SDHCI CRC errors during RX throughput testing on the BCM4329 chip when the SDIO bus is clocked above 25MHz. In particular, the checksum problem is observed on NVIDIA Tegra20 SoCs. The good watermark value is borrowed from the downstream BCMDHD driver and matches the value already used for the BCM4339 chip, hence let's re-use it for the BCM4329.
Reviewed-by: Arend van Spriel arend.vanspriel@broadcom.com Signed-off-by: Dmitry Osipenko digetx@gmail.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/20200830191439.10017-2-digetx@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c index 3c07d1bbe1c6e..ac3ee93a23780 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c @@ -4278,6 +4278,7 @@ static void brcmf_sdio_firmware_callback(struct device *dev, int err, brcmf_sdiod_writeb(sdiod, SBSDIO_FUNC1_MESBUSYCTRL, CY_43012_MESBUSYCTRL, &err); break; + case SDIO_DEVICE_ID_BROADCOM_4329: case SDIO_DEVICE_ID_BROADCOM_4339: brcmf_dbg(INFO, "set F2 watermark to 0x%x*4 bytes for 4339\n", CY_4339_F2_WATERMARK);
From: Valentin Schneider valentin.schneider@arm.com
[ Upstream commit 3102bc0e6ac752cc5df896acb557d779af4d82a1 ]
In the absence of ACPI or DT topology data, we fall back to haphazardly decoding *something* out of MPIDR. Sadly, the contents of that register are mostly unusable due to the implementation leniency and things like Aff0 having to be capped to 15 (despite being encoded on 8 bits).
Consider a simple system with a single package of 32 cores, all under the same LLC. We ought to be shoving them in the same core_sibling mask, but MPIDR is going to look like:
| CPU  | 0 | ... | 15 | 16 | ... | 31 |
|------+---+-----+----+----+-----+----+
| Aff0 | 0 | ... | 15 |  0 | ... | 15 |
| Aff1 | 0 | ... |  0 |  1 | ... |  1 |
| Aff2 | 0 | ... |  0 |  0 | ... |  0 |
Which will eventually yield
core_sibling(0-15)  == 0-15
core_sibling(16-31) == 16-31
NUMA woes
=========
If we try to play games with this and set up NUMA boundaries within those groups of 16 cores via e.g. QEMU:
# Node0: 0-9; Node1: 10-19
$ qemu-system-aarch64 <blah> \
	-smp 20 -numa node,cpus=0-9,nodeid=0 -numa node,cpus=10-19,nodeid=1
The scheduler's MC domain (all CPUs with same LLC) is going to be built via
arch_topology.c::cpu_coregroup_mask()
In there we try to figure out a sensible mask out of the topology information we have. In short, here we'll pick the smallest of NUMA or core sibling mask.
node_mask(CPU9)    == 0-9
core_sibling(CPU9) == 0-15
MC mask for CPU9 will thus be 0-9, not a problem.
node_mask(CPU10)    == 10-19
core_sibling(CPU10) == 0-15
MC mask for CPU10 will thus be 10-19, not a problem.
node_mask(CPU16)    == 10-19
core_sibling(CPU16) == 16-19
MC mask for CPU16 will thus be 16-19... Uh oh. CPUs 16-19 are in two different unique MC spans, and the scheduler has no idea what to make of that. That triggers the WARN_ON() added by commit
ccf74128d66c ("sched/topology: Assert non-NUMA topology masks don't (partially) overlap")
Fixing MPIDR-derived topology
=============================
We could try to come up with some cleverer scheme to figure out which of the available masks to pick, but really if one of those masks resulted from MPIDR then it should be discarded because it's bound to be bogus.
I was hoping to give MPIDR a chance for SMT, to figure out which threads are in the same core using Aff1-3 as core ID, but Sudeep and Robin pointed out to me that there are systems out there where *all* cores have non-zero values in their higher affinity fields (e.g. RK3288 has "5" in all of its cores' MPIDR.Aff1), which would expose a bogus core ID to userspace.
Stop using MPIDR for topology information. When no other source of topology information is available, mark each CPU as its own core and its NUMA node as its LLC domain.
Signed-off-by: Valentin Schneider valentin.schneider@arm.com Reviewed-by: Sudeep Holla sudeep.holla@arm.com Link: https://lore.kernel.org/r/20200829130016.26106-1-valentin.schneider@arm.com Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/kernel/topology.c | 32 +++++++++++++++++--------------- 1 file changed, 17 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c index 0801a0f3c156a..ff1dd1dbfe641 100644 --- a/arch/arm64/kernel/topology.c +++ b/arch/arm64/kernel/topology.c @@ -36,21 +36,23 @@ void store_cpu_topology(unsigned int cpuid) if (mpidr & MPIDR_UP_BITMASK) return;
- /* Create cpu topology mapping based on MPIDR. */ - if (mpidr & MPIDR_MT_BITMASK) { - /* Multiprocessor system : Multi-threads per core */ - cpuid_topo->thread_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); - cpuid_topo->core_id = MPIDR_AFFINITY_LEVEL(mpidr, 1); - cpuid_topo->package_id = MPIDR_AFFINITY_LEVEL(mpidr, 2) | - MPIDR_AFFINITY_LEVEL(mpidr, 3) << 8; - } else { - /* Multiprocessor system : Single-thread per core */ - cpuid_topo->thread_id = -1; - cpuid_topo->core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); - cpuid_topo->package_id = MPIDR_AFFINITY_LEVEL(mpidr, 1) | - MPIDR_AFFINITY_LEVEL(mpidr, 2) << 8 | - MPIDR_AFFINITY_LEVEL(mpidr, 3) << 16; - } + /* + * This would be the place to create cpu topology based on MPIDR. + * + * However, it cannot be trusted to depict the actual topology; some + * pieces of the architecture enforce an artificial cap on Aff0 values + * (e.g. GICv3's ICC_SGI1R_EL1 limits it to 15), leading to an + * artificial cycling of Aff1, Aff2 and Aff3 values. IOW, these end up + * having absolutely no relationship to the actual underlying system + * topology, and cannot be reasonably used as core / package ID. + * + * If the MT bit is set, Aff0 *could* be used to define a thread ID, but + * we still wouldn't be able to obtain a sane core ID. This means we + * need to entirely ignore MPIDR for any topology deduction. + */ + cpuid_topo->thread_id = -1; + cpuid_topo->core_id = cpuid; + cpuid_topo->package_id = cpu_to_node(cpuid);
pr_debug("CPU%u: cluster %d core %d thread %d mpidr %#016llx\n", cpuid, cpuid_topo->package_id, cpuid_topo->core_id,
From: John Ogness john.ogness@linutronix.de
[ Upstream commit 550c10d28d21bd82a8bb48debbb27e6ed53262f6 ]
The .bss section for the h8300 is relatively small. A value of CONFIG_LOG_BUF_SHIFT that is larger than 19 will create a static printk ringbuffer that is too large. Limit the range appropriately for the H8300.
Reported-by: kernel test robot lkp@intel.com Signed-off-by: John Ogness john.ogness@linutronix.de Reviewed-by: Sergey Senozhatsky sergey.senozhatsky@gmail.com Acked-by: Steven Rostedt (VMware) rostedt@goodmis.org Signed-off-by: Petr Mladek pmladek@suse.com Link: https://lore.kernel.org/r/20200812073122.25412-1-john.ogness@linutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- init/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/init/Kconfig b/init/Kconfig index d6a0b31b13dc9..2a5df1cf838c6 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -682,7 +682,8 @@ config IKHEADERS
config LOG_BUF_SHIFT int "Kernel log buffer size (16 => 64KB, 17 => 128KB)" - range 12 25 + range 12 25 if !H8300 + range 12 19 if H8300 default 17 depends on PRINTK help
From: Masami Hiramatsu mhiramat@kernel.org
[ Upstream commit e792ff804f49720ce003b3e4c618b5d996256a18 ]
Use the generic kretprobe trampoline handler. Don't use framepointer verification.
Signed-off-by: Masami Hiramatsu mhiramat@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Link: https://lore.kernel.org/r/159870606883.1229682.12331813108378725668.stgit@de... Signed-off-by: Sasha Levin sashal@kernel.org --- arch/ia64/kernel/kprobes.c | 77 +------------------------------------- 1 file changed, 2 insertions(+), 75 deletions(-)
diff --git a/arch/ia64/kernel/kprobes.c b/arch/ia64/kernel/kprobes.c index 7a7df944d7986..fc1ff8a4d7de6 100644 --- a/arch/ia64/kernel/kprobes.c +++ b/arch/ia64/kernel/kprobes.c @@ -396,83 +396,9 @@ static void kretprobe_trampoline(void) { }
-/* - * At this point the target function has been tricked into - * returning into our trampoline. Lookup the associated instance - * and then: - * - call the handler function - * - cleanup by marking the instance as unused - * - long jump back to the original return address - */ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) { - struct kretprobe_instance *ri = NULL; - struct hlist_head *head, empty_rp; - struct hlist_node *tmp; - unsigned long flags, orig_ret_address = 0; - unsigned long trampoline_address = - ((struct fnptr *)kretprobe_trampoline)->ip; - - INIT_HLIST_HEAD(&empty_rp); - kretprobe_hash_lock(current, &head, &flags); - - /* - * It is possible to have multiple instances associated with a given - * task either because an multiple functions in the call path - * have a return probe installed on them, and/or more than one return - * return probe was registered for a target function. - * - * We can handle this because: - * - instances are always inserted at the head of the list - * - when multiple return probes are registered for the same - * function, the first instance's ret_addr will point to the - * real return address, and all the rest will point to - * kretprobe_trampoline - */ - hlist_for_each_entry_safe(ri, tmp, head, hlist) { - if (ri->task != current) - /* another task is sharing our hash bucket */ - continue; - - orig_ret_address = (unsigned long)ri->ret_addr; - if (orig_ret_address != trampoline_address) - /* - * This is the real return address. Any other - * instances associated with this task are for - * other calls deeper on the call stack - */ - break; - } - - regs->cr_iip = orig_ret_address; - - hlist_for_each_entry_safe(ri, tmp, head, hlist) { - if (ri->task != current) - /* another task is sharing our hash bucket */ - continue; - - if (ri->rp && ri->rp->handler) - ri->rp->handler(ri, regs); - - orig_ret_address = (unsigned long)ri->ret_addr; - recycle_rp_inst(ri, &empty_rp); - - if (orig_ret_address != trampoline_address) - /* - * This is the real return address. Any other - * instances associated with this task are for - * other calls deeper on the call stack - */ - break; - } - kretprobe_assert(ri, orig_ret_address, trampoline_address); - - kretprobe_hash_unlock(current, &flags); - - hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) { - hlist_del(&ri->hlist); - kfree(ri); - } + regs->cr_iip = __kretprobe_trampoline_handler(regs, kretprobe_trampoline, NULL); /* * By returning a non-zero value, we are telling * kprobe_handler() that we don't want the post_handler @@ -485,6 +411,7 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs) { ri->ret_addr = (kprobe_opcode_t *)regs->b0; + ri->fp = NULL;
/* Replace the return addr with trampoline addr */ regs->b0 = ((struct fnptr *)kretprobe_trampoline)->ip;
From: Michael Ellerman mpe@ellerman.id.au
[ Upstream commit 34c103342be3f9397e656da7c5cc86e97b91f514 ]
These platforms don't show the MMU in /proc/cpuinfo, but they always use hash, so teach using_hash_mmu() that.
Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200819015727.1977134-1-mpe@ellerman.id.au Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/powerpc/utils.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/powerpc/utils.c b/tools/testing/selftests/powerpc/utils.c index 18b6a773d5c73..638ffacc90aa1 100644 --- a/tools/testing/selftests/powerpc/utils.c +++ b/tools/testing/selftests/powerpc/utils.c @@ -318,7 +318,9 @@ int using_hash_mmu(bool *using_hash)
rc = 0; while (fgets(line, sizeof(line), f) != NULL) { - if (strcmp(line, "MMU : Hash\n") == 0) { + if (!strcmp(line, "MMU : Hash\n") || + !strcmp(line, "platform : Cell\n") || + !strcmp(line, "platform : PowerMac\n")) { *using_hash = true; goto out; }
From: Douglas Anderson dianders@chromium.org
[ Upstream commit b18b099e04f450cdc77bec72acefcde7042bd1f3 ]
On my system the kernel processes the "kgdb_earlycon" parameter before the "kgdbcon" parameter. When we setup "kgdb_earlycon" we'll end up in kgdb_register_callbacks() and "kgdb_use_con" won't have been set yet so we'll never get around to starting "kgdbcon". Let's remedy this by detecting that the IO module was already registered when setting "kgdb_use_con" and registering the console then.
As part of this, to avoid pre-declaring things, move the handling of the "kgdbcon" further down in the file.
Signed-off-by: Douglas Anderson dianders@chromium.org Link: https://lore.kernel.org/r/20200630151422.1.I4aa062751ff5e281f5116655c976dff5... Signed-off-by: Daniel Thompson daniel.thompson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/debug/debug_core.c | 22 ++++++++++++++-------- 1 file changed, 14 insertions(+), 8 deletions(-)
diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c index b16dbc1bf0567..404d6d47a11da 100644 --- a/kernel/debug/debug_core.c +++ b/kernel/debug/debug_core.c @@ -94,14 +94,6 @@ int dbg_switch_cpu; /* Use kdb or gdbserver mode */ int dbg_kdb_mode = 1;
-static int __init opt_kgdb_con(char *str) -{ - kgdb_use_con = 1; - return 0; -} - -early_param("kgdbcon", opt_kgdb_con); - module_param(kgdb_use_con, int, 0644); module_param(kgdbreboot, int, 0644);
@@ -920,6 +912,20 @@ static struct console kgdbcons = { .index = -1, };
+static int __init opt_kgdb_con(char *str) +{ + kgdb_use_con = 1; + + if (kgdb_io_module_registered && !kgdb_con_registered) { + register_console(&kgdbcons); + kgdb_con_registered = 1; + } + + return 0; +} + +early_param("kgdbcon", opt_kgdb_con); + #ifdef CONFIG_MAGIC_SYSRQ static void sysrq_handle_dbg(int key) {
From: Yonghong Song yhs@fb.com
[ Upstream commit 7c6967326267bd5c0dded0a99541357d70dd11ac ]
Commit 41c48f3a98231 ("bpf: Support access to bpf map fields") added support for accessing bpf map fields with CO-RE. For example,
struct bpf_map {
	__u32 max_entries;
} __attribute__((preserve_access_index));

struct bpf_array {
	struct bpf_map map;
	__u32 elem_size;
} __attribute__((preserve_access_index));

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 4);
	__type(key, __u32);
	__type(value, __u32);
} m_array SEC(".maps");

SEC("cgroup_skb/egress")
int cg_skb(void *ctx)
{
	struct bpf_array *array = (struct bpf_array *)&m_array;

	/* .. array->map.max_entries .. */
}
In the kernel, bpf_htab has a similar structure:

struct bpf_htab {
	struct bpf_map map;
	...
}
In the above cg_skb(), to access array->map.max_entries with CO-RE, clang will generate two builtins:

  base = &m_array;
  /* access array.map */
  map_addr = __builtin_preserve_struct_access_info(base, 0, 0);
  /* access array.map.max_entries */
  max_entries_addr = __builtin_preserve_struct_access_info(map_addr, 0, 0);
  max_entries = *max_entries_addr;
In current llvm, if the two builtins are in the same function, or end up in the same function after inlining, the compiler is smart enough to chain them together and generates code like below:

  base = &m_array;
  max_entries = *(base + reloc_offset); /* reloc_offset = 0 in this case */

and we are fine.
But if we force no inlining for one of the functions in the test_map_ptr() selftest, e.g., check_default(), the above two __builtin_preserve_* calls will end up in two different functions. In this case, we will have code like:

  func check_hash():
      reloc_offset_map = 0;
      base = &m_array;
      map_base = base + reloc_offset_map;
      check_default(map_base, ...)

  func check_default(map_base, ...):
      max_entries = *(map_base + reloc_offset_max_entries);
In the kernel, map_ptr (CONST_PTR_TO_MAP) does not allow any arithmetic. The above "map_base = base + reloc_offset_map" will trigger a verifier failure:

  ; VERIFY(check_default(&hash->map, map));
  0: (18) r7 = 0xffffb4fe8018a004
  2: (b4) w1 = 110
  3: (63) *(u32 *)(r7 +0) = r1
  R1_w=invP110 R7_w=map_value(id=0,off=4,ks=4,vs=8,imm=0) R10=fp0
  ; VERIFY_TYPE(BPF_MAP_TYPE_HASH, check_hash);
  4: (18) r1 = 0xffffb4fe8018a000
  6: (b4) w2 = 1
  7: (63) *(u32 *)(r1 +0) = r2
  R1_w=map_value(id=0,off=0,ks=4,vs=8,imm=0) R2_w=invP1 R7_w=map_value(id=0,off=4,ks=4,vs=8,imm=0) R10=fp0
  8: (b7) r2 = 0
  9: (18) r8 = 0xffff90bcb500c000
  11: (18) r1 = 0xffff90bcb500c000
  13: (0f) r1 += r2
  R1 pointer arithmetic on map_ptr prohibited
To fix the issue, let us permit map_ptr + 0 arithmetic which will result in exactly the same map_ptr.
Signed-off-by: Yonghong Song yhs@fb.com Signed-off-by: Alexei Starovoitov ast@kernel.org Acked-by: Andrii Nakryiko andriin@fb.com Link: https://lore.kernel.org/bpf/20200908175702.2463625-1-yhs@fb.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/verifier.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 43cd175c66a55..718bbdc8b3c66 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -5246,6 +5246,10 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env, dst, reg_type_str[ptr_reg->type]); return -EACCES; case CONST_PTR_TO_MAP: + /* smin_val represents the known value */ + if (known && smin_val == 0 && opcode == BPF_ADD) + break; + /* fall-through */ case PTR_TO_PACKET_END: case PTR_TO_SOCKET: case PTR_TO_SOCKET_OR_NULL:
From: Marek Szyprowski m.szyprowski@samsung.com
[ Upstream commit 84404614167b829f7b58189cd24b6c0c74897171 ]
The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function returns the number of the created entries in the DMA address space. However the subsequent calls to the dma_sync_sg_for_{device,cpu}() and dma_unmap_sg must be called with the original number of the entries passed to the dma_map_sg().
struct sg_table is a common structure used for describing a non-contiguous memory buffer, used commonly in the DRM and graphics subsystems. It consists of a scatterlist with memory pages and DMA addresses (sgl entry), as well as the number of scatterlist entries: CPU pages (orig_nents entry) and DMA mapped pages (nents entry).
It turned out that it was a common mistake to misuse nents and orig_nents entries, calling DMA-mapping functions with a wrong number of entries or ignoring the number of mapped entries returned by the dma_map_sg() function.
To avoid such issues, let's use the common dma-mapping wrappers operating directly on struct sg_table objects and use scatterlist page iterators where possible. This almost always hides references to the nents and orig_nents entries, making the code robust, easier to follow, and copy/paste safe.
Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com Reviewed-by: Andrzej Hajda a.hajda@samsung.com Acked-by : Inki Dae inki.dae@samsung.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/exynos/exynos_drm_g2d.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c index 03be314271811..967a5cdc120e3 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c +++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c @@ -395,8 +395,8 @@ static void g2d_userptr_put_dma_addr(struct g2d_data *g2d, return;
out: - dma_unmap_sg(to_dma_dev(g2d->drm_dev), g2d_userptr->sgt->sgl, - g2d_userptr->sgt->nents, DMA_BIDIRECTIONAL); + dma_unmap_sgtable(to_dma_dev(g2d->drm_dev), g2d_userptr->sgt, + DMA_BIDIRECTIONAL, 0);
pages = frame_vector_pages(g2d_userptr->vec); if (!IS_ERR(pages)) { @@ -511,10 +511,10 @@ static dma_addr_t *g2d_userptr_get_dma_addr(struct g2d_data *g2d,
g2d_userptr->sgt = sgt;
- if (!dma_map_sg(to_dma_dev(g2d->drm_dev), sgt->sgl, sgt->nents, - DMA_BIDIRECTIONAL)) { + ret = dma_map_sgtable(to_dma_dev(g2d->drm_dev), sgt, + DMA_BIDIRECTIONAL, 0); + if (ret) { DRM_DEV_ERROR(g2d->dev, "failed to map sgt with dma region.\n"); - ret = -ENOMEM; goto err_sg_free_table; }
From: Marek Szyprowski m.szyprowski@samsung.com
[ Upstream commit d1749eb1ab85e04e58c29e58900e3abebbdd6e82 ]
The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function returns the number of the created entries in the DMA address space. However the subsequent calls to the dma_sync_sg_for_{device,cpu}() and dma_unmap_sg must be called with the original number of the entries passed to the dma_map_sg().
struct sg_table is a common structure used for describing a non-contiguous memory buffer, used commonly in the DRM and graphics subsystems. It consists of a scatterlist with memory pages and DMA addresses (sgl entry), as well as the number of scatterlist entries: CPU pages (orig_nents entry) and DMA mapped pages (nents entry).
It turned out that it was a common mistake to misuse nents and orig_nents entries, calling DMA-mapping functions with a wrong number of entries or ignoring the number of mapped entries returned by the dma_map_sg() function.
To avoid such issues, let's use the common dma-mapping wrappers operating directly on struct sg_table objects and use scatterlist page iterators where possible. This almost always hides references to the nents and orig_nents entries, making the code robust, easier to follow, and copy/paste safe.
Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com Acked-by: Juergen Gross jgross@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/xen/gntdev-dmabuf.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c index b1b6eebafd5de..4c13cbc99896a 100644 --- a/drivers/xen/gntdev-dmabuf.c +++ b/drivers/xen/gntdev-dmabuf.c @@ -247,10 +247,9 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
if (sgt) { if (gntdev_dmabuf_attach->dir != DMA_NONE) - dma_unmap_sg_attrs(attach->dev, sgt->sgl, - sgt->nents, - gntdev_dmabuf_attach->dir, - DMA_ATTR_SKIP_CPU_SYNC); + dma_unmap_sgtable(attach->dev, sgt, + gntdev_dmabuf_attach->dir, + DMA_ATTR_SKIP_CPU_SYNC); sg_free_table(sgt); }
@@ -288,8 +287,8 @@ dmabuf_exp_ops_map_dma_buf(struct dma_buf_attachment *attach, sgt = dmabuf_pages_to_sgt(gntdev_dmabuf->pages, gntdev_dmabuf->nr_pages); if (!IS_ERR(sgt)) { - if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir, - DMA_ATTR_SKIP_CPU_SYNC)) { + if (dma_map_sgtable(attach->dev, sgt, dir, + DMA_ATTR_SKIP_CPU_SYNC)) { sg_free_table(sgt); kfree(sgt); sgt = ERR_PTR(-ENOMEM); @@ -633,7 +632,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
/* Now convert sgt to array of pages and check for page validity. */ i = 0; - for_each_sg_page(sgt->sgl, &sg_iter, sgt->nents, 0) { + for_each_sgtable_page(sgt, &sg_iter, 0) { struct page *page = sg_page_iter_page(&sg_iter); /* * Check if page is valid: this can happen if we are given
From: Marek Szyprowski m.szyprowski@samsung.com
[ Upstream commit c3d9c17f486d5c54940487dc31a54ebfdeeb371a ]
The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function returns the number of the created entries in the DMA address space. However the subsequent calls to the dma_sync_sg_for_{device,cpu}() and dma_unmap_sg must be called with the original number of the entries passed to the dma_map_sg().
struct sg_table is a common structure used for describing a non-contiguous memory buffer, used commonly in the DRM and graphics subsystems. It consists of a scatterlist with memory pages and DMA addresses (sgl entry), as well as the number of scatterlist entries: CPU pages (orig_nents entry) and DMA mapped pages (nents entry).
It turned out that it was a common mistake to misuse nents and orig_nents entries, calling DMA-mapping functions with a wrong number of entries or ignoring the number of mapped entries returned by the dma_map_sg() function.
To avoid such issues, let's use the common dma-mapping wrappers operating directly on struct sg_table objects and use scatterlist page iterators where possible. This almost always hides references to the nents and orig_nents entries, making the code robust, easier to follow, and copy/paste safe.
Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com Reviewed-by: Qiang Yu yuq825@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/lima/lima_gem.c | 11 ++++++++--- drivers/gpu/drm/lima/lima_vm.c | 5 ++--- 2 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 155f2b4b4030a..11223fe348dfe 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -69,8 +69,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm) return ret;
if (bo->base.sgt) { - dma_unmap_sg(dev, bo->base.sgt->sgl, - bo->base.sgt->nents, DMA_BIDIRECTIONAL); + dma_unmap_sgtable(dev, bo->base.sgt, DMA_BIDIRECTIONAL, 0); sg_free_table(bo->base.sgt); } else { bo->base.sgt = kmalloc(sizeof(*bo->base.sgt), GFP_KERNEL); @@ -80,7 +79,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm) } }
- dma_map_sg(dev, sgt.sgl, sgt.nents, DMA_BIDIRECTIONAL); + ret = dma_map_sgtable(dev, &sgt, DMA_BIDIRECTIONAL, 0); + if (ret) { + sg_free_table(&sgt); + kfree(bo->base.sgt); + bo->base.sgt = NULL; + return ret; + }
*bo->base.sgt = sgt;
diff --git a/drivers/gpu/drm/lima/lima_vm.c b/drivers/gpu/drm/lima/lima_vm.c index 5b92fb82674a9..2b2739adc7f53 100644 --- a/drivers/gpu/drm/lima/lima_vm.c +++ b/drivers/gpu/drm/lima/lima_vm.c @@ -124,7 +124,7 @@ int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo *bo, bool create) if (err) goto err_out1;
- for_each_sg_dma_page(bo->base.sgt->sgl, &sg_iter, bo->base.sgt->nents, 0) { + for_each_sgtable_dma_page(bo->base.sgt, &sg_iter, 0) { err = lima_vm_map_page(vm, sg_page_iter_dma_address(&sg_iter), bo_va->node.start + offset); if (err) @@ -298,8 +298,7 @@ int lima_vm_map_bo(struct lima_vm *vm, struct lima_bo *bo, int pageoff) mutex_lock(&vm->lock);
base = bo_va->node.start + (pageoff << PAGE_SHIFT); - for_each_sg_dma_page(bo->base.sgt->sgl, &sg_iter, - bo->base.sgt->nents, pageoff) { + for_each_sgtable_dma_page(bo->base.sgt, &sg_iter, pageoff) { err = lima_vm_map_page(vm, sg_page_iter_dma_address(&sg_iter), base + offset); if (err)
From: Marek Szyprowski m.szyprowski@samsung.com
[ Upstream commit 34a4e66faf8b22c8409cbd46839ba5e488b1e6a9 ]
The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function returns the number of entries created in the DMA address space. However, the subsequent calls to dma_sync_sg_for_{device,cpu}() and dma_unmap_sg() must be made with the original number of entries passed to dma_map_sg().
struct sg_table is a common structure used for describing a non-contiguous memory buffer, used commonly in the DRM and graphics subsystems. It consists of a scatterlist with memory pages and DMA addresses (sgl entry), as well as the number of scatterlist entries: CPU pages (orig_nents entry) and DMA mapped pages (nents entry).
It turned out that it was a common mistake to misuse nents and orig_nents entries, calling DMA-mapping functions with a wrong number of entries or ignoring the number of mapped entries returned by the dma_map_sg() function.
To avoid such issues, let's use the common dma-mapping wrappers that operate directly on struct sg_table objects, and use scatterlist page iterators where possible. This almost always hides references to the nents and orig_nents entries, making the code robust, easier to follow and copy/paste safe.
Signed-off-by: Marek Szyprowski m.szyprowski@samsung.com Reviewed-by: Steven Price steven.price@arm.com Reviewed-by: Rob Herring robh@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/panfrost/panfrost_gem.c | 4 ++-- drivers/gpu/drm/panfrost/panfrost_mmu.c | 7 +++---- 2 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c index 33355dd302f11..1a6cea0e0bd74 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -41,8 +41,8 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
for (i = 0; i < n_sgt; i++) { if (bo->sgts[i].sgl) { - dma_unmap_sg(pfdev->dev, bo->sgts[i].sgl, - bo->sgts[i].nents, DMA_BIDIRECTIONAL); + dma_unmap_sgtable(pfdev->dev, &bo->sgts[i], + DMA_BIDIRECTIONAL, 0); sg_free_table(&bo->sgts[i]); } } diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c index e8f7b11352d27..776448c527ea9 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -253,7 +253,7 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu, struct io_pgtable_ops *ops = mmu->pgtbl_ops; u64 start_iova = iova;
- for_each_sg(sgt->sgl, sgl, sgt->nents, count) { + for_each_sgtable_dma_sg(sgt, sgl, count) { unsigned long paddr = sg_dma_address(sgl); size_t len = sg_dma_len(sgl);
@@ -517,10 +517,9 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, if (ret) goto err_pages;
- if (!dma_map_sg(pfdev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL)) { - ret = -EINVAL; + ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0); + if (ret) goto err_map; - }
mmu_map_sg(pfdev, bomapping->mmu, addr, IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
From: Daniel W. S. Almeida dwlsalmeida@gmail.com
[ Upstream commit f875bcc375c738bf2f599ff2e1c5b918dbd07c45 ]
Fixes the following coccinelle report:
drivers/media/usb/uvc/uvc_ctrl.c:1860:5-11: ERROR: invalid reference to the index variable of the iterator on line 1854
by adding a boolean variable that records whether the loop has found the extension unit it was looking for, instead of referencing the loop iterator after the loop.
Found using Coccinelle (http://coccinelle.lip6.fr).
[Replace cursor variable with bool found]
Signed-off-by: Daniel W. S. Almeida dwlsalmeida@gmail.com Signed-off-by: Laurent Pinchart laurent.pinchart@ideasonboard.com Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/usb/uvc/uvc_ctrl.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c index a30a8a731eda8..c13ed95cb06fe 100644 --- a/drivers/media/usb/uvc/uvc_ctrl.c +++ b/drivers/media/usb/uvc/uvc_ctrl.c @@ -1848,30 +1848,35 @@ int uvc_xu_ctrl_query(struct uvc_video_chain *chain, { struct uvc_entity *entity; struct uvc_control *ctrl; - unsigned int i, found = 0; + unsigned int i; + bool found; u32 reqflags; u16 size; u8 *data = NULL; int ret;
/* Find the extension unit. */ + found = false; list_for_each_entry(entity, &chain->entities, chain) { if (UVC_ENTITY_TYPE(entity) == UVC_VC_EXTENSION_UNIT && - entity->id == xqry->unit) + entity->id == xqry->unit) { + found = true; break; + } }
- if (entity->id != xqry->unit) { + if (!found) { uvc_trace(UVC_TRACE_CONTROL, "Extension unit %u not found.\n", xqry->unit); return -ENOENT; }
/* Find the control and perform delayed initialization if needed. */ + found = false; for (i = 0; i < entity->ncontrols; ++i) { ctrl = &entity->controls[i]; if (ctrl->index == xqry->selector - 1) { - found = 1; + found = true; break; } }
From: Krzysztof Kozlowski krzk@kernel.org
[ Upstream commit 4aa62c62d4c41d71b2bda5ed01b78961829ee93c ]
The driver uses crypto hash functions so it needs to select CRYPTO_HASH. This fixes build errors:
arc-linux-ld: drivers/nfc/s3fwrn5/firmware.o: in function `s3fwrn5_fw_download': firmware.c:(.text+0x152): undefined reference to `crypto_alloc_shash'
Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nfc/s3fwrn5/Kconfig | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/nfc/s3fwrn5/Kconfig b/drivers/nfc/s3fwrn5/Kconfig index af9d18690afeb..3f8b6da582803 100644 --- a/drivers/nfc/s3fwrn5/Kconfig +++ b/drivers/nfc/s3fwrn5/Kconfig @@ -2,6 +2,7 @@ config NFC_S3FWRN5 tristate select CRYPTO + select CRYPTO_HASH help Core driver for Samsung S3FWRN5 NFC chip. Contains core utilities of chip. It's intended to be used by PHYs to avoid duplicating lots
From: Yonghong Song yhs@fb.com
[ Upstream commit 6e057fc15a2da4ee03eb1fa6889cf687e690106e ]
When tweaking llvm optimizations, I found that selftest build failed with the following error: libbpf: elf: skipping unrecognized data section(6) .rodata.str1.1 libbpf: prog 'sysctl_tcp_mem': bad map relo against '.L__const.is_tcp_mem.tcp_mem_name' in section '.rodata.str1.1' Error: failed to open BPF object file: Relocation failed make: *** [/work/net-next/tools/testing/selftests/bpf/test_sysctl_prog.skel.h] Error 255 make: *** Deleting file `/work/net-next/tools/testing/selftests/bpf/test_sysctl_prog.skel.h'
The local string constant "tcp_mem_name" is put into '.rodata.str1.1' section which libbpf cannot handle. Using untweaked upstream llvm, "tcp_mem_name" is completely inlined after loop unrolling.
Commit 7fb5eefd7639 ("selftests/bpf: Fix test_sysctl_loop{1, 2} failure due to clang change") solved a similar problem by defining the string const as a global. Let us do the same here for test_sysctl_prog.c so it can weather future potential llvm changes.
Signed-off-by: Yonghong Song yhs@fb.com Signed-off-by: Alexei Starovoitov ast@kernel.org Acked-by: Andrii Nakryiko andriin@fb.com Link: https://lore.kernel.org/bpf/20200910202718.956042-1-yhs@fb.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/bpf/progs/test_sysctl_prog.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/test_sysctl_prog.c b/tools/testing/selftests/bpf/progs/test_sysctl_prog.c index 50525235380e8..5489823c83fc2 100644 --- a/tools/testing/selftests/bpf/progs/test_sysctl_prog.c +++ b/tools/testing/selftests/bpf/progs/test_sysctl_prog.c @@ -19,11 +19,11 @@ #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) #endif
+const char tcp_mem_name[] = "net/ipv4/tcp_mem"; static __always_inline int is_tcp_mem(struct bpf_sysctl *ctx) { - char tcp_mem_name[] = "net/ipv4/tcp_mem"; unsigned char i; - char name[64]; + char name[sizeof(tcp_mem_name)]; int ret;
memset(name, 0, sizeof(name));
From: Stephen Smalley stephen.smalley.work@gmail.com
[ Upstream commit e8ba53d0023a76ba0f50e6ee3e6288c5442f9d33 ]
Use READ_ONCE/WRITE_ONCE for all accesses to the selinux_state.policycaps booleans to prevent compiler mischief.
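For context, a minimal sketch of the general pattern being applied (hypothetical shared_flag variable, not code from this patch):

#include <linux/compiler.h>
#include <linux/types.h>

static bool shared_flag;

/* WRITE_ONCE() forces a single store the compiler may not tear or defer. */
static void set_flag(bool v)
{
	WRITE_ONCE(shared_flag, v);
}

/* READ_ONCE() forces a single load, so the value cannot be silently
 * re-read and observed to change within the caller. */
static bool get_flag(void)
{
	return READ_ONCE(shared_flag);
}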
Signed-off-by: Stephen Smalley stephen.smalley.work@gmail.com Signed-off-by: Paul Moore paul@paul-moore.com Signed-off-by: Sasha Levin sashal@kernel.org --- security/selinux/include/security.h | 14 +++++++------- security/selinux/ss/services.c | 3 ++- 2 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/security/selinux/include/security.h b/security/selinux/include/security.h index b0e02cfe3ce14..8a432f646967e 100644 --- a/security/selinux/include/security.h +++ b/security/selinux/include/security.h @@ -177,49 +177,49 @@ static inline bool selinux_policycap_netpeer(void) { struct selinux_state *state = &selinux_state;
- return state->policycap[POLICYDB_CAPABILITY_NETPEER]; + return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_NETPEER]); }
static inline bool selinux_policycap_openperm(void) { struct selinux_state *state = &selinux_state;
- return state->policycap[POLICYDB_CAPABILITY_OPENPERM]; + return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_OPENPERM]); }
static inline bool selinux_policycap_extsockclass(void) { struct selinux_state *state = &selinux_state;
- return state->policycap[POLICYDB_CAPABILITY_EXTSOCKCLASS]; + return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_EXTSOCKCLASS]); }
static inline bool selinux_policycap_alwaysnetwork(void) { struct selinux_state *state = &selinux_state;
- return state->policycap[POLICYDB_CAPABILITY_ALWAYSNETWORK]; + return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_ALWAYSNETWORK]); }
static inline bool selinux_policycap_cgroupseclabel(void) { struct selinux_state *state = &selinux_state;
- return state->policycap[POLICYDB_CAPABILITY_CGROUPSECLABEL]; + return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_CGROUPSECLABEL]); }
static inline bool selinux_policycap_nnp_nosuid_transition(void) { struct selinux_state *state = &selinux_state;
- return state->policycap[POLICYDB_CAPABILITY_NNP_NOSUID_TRANSITION]; + return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_NNP_NOSUID_TRANSITION]); }
static inline bool selinux_policycap_genfs_seclabel_symlinks(void) { struct selinux_state *state = &selinux_state;
- return state->policycap[POLICYDB_CAPABILITY_GENFS_SECLABEL_SYMLINKS]; + return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_GENFS_SECLABEL_SYMLINKS]); }
int security_mls_enabled(struct selinux_state *state); diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c index 1caf4e6033096..c55b3063753ab 100644 --- a/security/selinux/ss/services.c +++ b/security/selinux/ss/services.c @@ -2103,7 +2103,8 @@ static void security_load_policycaps(struct selinux_state *state) struct ebitmap_node *node;
for (i = 0; i < ARRAY_SIZE(state->policycap); i++) - state->policycap[i] = ebitmap_get_bit(&p->policycaps, i); + WRITE_ONCE(state->policycap[i], + ebitmap_get_bit(&p->policycaps, i));
for (i = 0; i < ARRAY_SIZE(selinux_policycap_names); i++) pr_info("SELinux: policy capability %s=%d\n",
From: Magnus Karlsson magnus.karlsson@intel.com
[ Upstream commit 5a2a0dd88f0f267ac5953acd81050ae43a82201f ]
Fix a possible deadlock in the l2fwd application in xdpsock that can occur when there is no space in the Tx ring. There are two ways to get the kernel to consume entries in the Tx ring: calling sendto() to make it send packets, and freeing entries from the completion ring, as the kernel will not send a packet if there is no space for it to add a completion entry in the completion ring. The Tx loop in l2fwd used to only call sendto(); this patch adds cleaning of the completion ring to that loop.
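As a rough sketch of the resulting Tx loop shape (assumptions: libbpf's AF_XDP helpers from <bpf/xsk.h>; frame recycling is reduced to a hypothetical reclaim_frame() and error handling is omitted; this is not the sample's exact code):

#include <bpf/xsk.h>
#include <sys/socket.h>

void reclaim_frame(__u64 addr);	/* hypothetical: return the frame to the umem pool */

static void reserve_tx_or_drain_cq(struct xsk_ring_prod *tx,
				   struct xsk_ring_cons *cq,
				   int xsk_fd, __u32 nb, __u32 *idx_tx)
{
	while (xsk_ring_prod__reserve(tx, nb, idx_tx) != nb) {
		__u32 idx_cq, i;
		__u32 done = xsk_ring_cons__peek(cq, nb, &idx_cq);

		/* Free completed Tx frames so the kernel has room to add
		 * completion entries for (and therefore send) new packets. */
		for (i = 0; i < done; i++)
			reclaim_frame(*xsk_ring_cons__comp_addr(cq, idx_cq + i));
		if (done)
			xsk_ring_cons__release(cq, done);

		/* Kick the kernel if the Tx ring needs a wakeup. */
		if (xsk_ring_prod__needs_wakeup(tx))
			sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0);
	}
}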
Signed-off-by: Magnus Karlsson magnus.karlsson@intel.com Signed-off-by: Alexei Starovoitov ast@kernel.org Link: https://lore.kernel.org/bpf/1599726666-8431-3-git-send-email-magnus.karlsson... Signed-off-by: Sasha Levin sashal@kernel.org --- samples/bpf/xdpsock_user.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c index c821e98671393..63a9a2a39da7b 100644 --- a/samples/bpf/xdpsock_user.c +++ b/samples/bpf/xdpsock_user.c @@ -1111,6 +1111,7 @@ static void l2fwd(struct xsk_socket_info *xsk, struct pollfd *fds) while (ret != rcvd) { if (ret < 0) exit_with_error(-ret); + complete_tx_l2fwd(xsk, fds); if (xsk_ring_prod__needs_wakeup(&xsk->tx)) kick_tx(xsk); ret = xsk_ring_prod__reserve(&xsk->tx, rcvd, &idx_tx);
From: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
[ Upstream commit 4b4f21ff7f5d11bb77e169b306dcbc5b216f5db5 ]
During the load process for Renoir, our display code needs to retrieve the SMU clock and voltage table; however, this operation can fail, which means we have to handle that scenario. Currently we are not handling this case properly, and as a result we have seen the following dmesg log during boot:
RIP: 0010:rn_clk_mgr_construct+0x129/0x3d0 [amdgpu] ... Call Trace: dc_clk_mgr_create+0x16a/0x1b0 [amdgpu] dc_create+0x231/0x760 [amdgpu]
This commit fixes the issue by checking the status returned when retrieving the clock table before trying to populate any bandwidth parameters.
Signed-off-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Acked-by: Aurabindo Pillai aurabindo.pillai@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c index 21a3073c8929e..2f8fee05547ac 100644 --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c @@ -761,6 +761,7 @@ void rn_clk_mgr_construct( { struct dc_debug_options *debug = &ctx->dc->debug; struct dpm_clocks clock_table = { 0 }; + enum pp_smu_status status = 0;
clk_mgr->base.ctx = ctx; clk_mgr->base.funcs = &dcn21_funcs; @@ -817,8 +818,10 @@ void rn_clk_mgr_construct( clk_mgr->base.bw_params = &rn_bw_params;
if (pp_smu && pp_smu->rn_funcs.get_dpm_clock_table) { - pp_smu->rn_funcs.get_dpm_clock_table(&pp_smu->rn_funcs.pp_smu, &clock_table); - if (ctx->dc_bios && ctx->dc_bios->integrated_info) { + status = pp_smu->rn_funcs.get_dpm_clock_table(&pp_smu->rn_funcs.pp_smu, &clock_table); + + if (status == PP_SMU_RESULT_OK && + ctx->dc_bios && ctx->dc_bios->integrated_info) { rn_clk_mgr_helper_populate_bw_params (clk_mgr->base.bw_params, &clock_table, ctx->dc_bios->integrated_info); } }
From: Zong Li zong.li@sifive.com
[ Upstream commit b5fca7c55f9fbab5ad732c3bce00f31af6ba5cfa ]
AT_VECTOR_SIZE_ARCH should be defined with the maximum number of NEW_AUX_ENT entries that ARCH_DLINFO can contain, but it wasn't defined for RISC-V at all even though ARCH_DLINFO will contain one NEW_AUX_ENT for the VDSO address.
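To illustrate the relationship (a simplified sketch, assuming, as the message says, that RISC-V's ARCH_DLINFO emits a single NEW_AUX_ENT for the vDSO address; the macro bodies below are illustrative, not the exact arch headers):

/* One arch-specific auxv entry ... */
#define AT_VECTOR_SIZE_ARCH	1

/* ... because ARCH_DLINFO emits exactly one NEW_AUX_ENT, the vDSO base. */
#define ARCH_DLINFO						\
do {								\
	NEW_AUX_ENT(AT_SYSINFO_EHDR,				\
		    (elf_addr_t)current->mm->context.vdso);	\
} while (0)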
Signed-off-by: Zong Li zong.li@sifive.com Reviewed-by: Palmer Dabbelt palmerdabbelt@google.com Reviewed-by: Pekka Enberg penberg@kernel.org Signed-off-by: Palmer Dabbelt palmerdabbelt@google.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/riscv/include/uapi/asm/auxvec.h | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/arch/riscv/include/uapi/asm/auxvec.h b/arch/riscv/include/uapi/asm/auxvec.h index d86cb17bbabe6..22e0ae8884061 100644 --- a/arch/riscv/include/uapi/asm/auxvec.h +++ b/arch/riscv/include/uapi/asm/auxvec.h @@ -10,4 +10,7 @@ /* vDSO location */ #define AT_SYSINFO_EHDR 33
+/* entries in ARCH_DLINFO */ +#define AT_VECTOR_SIZE_ARCH 1 + #endif /* _UAPI_ASM_RISCV_AUXVEC_H */
From: Alain Volmat avolmat@me.com
[ Upstream commit 01a163c52039e9426c7d3d3ab16ca261ad622597 ]
The STiH418 can be controlled the same way as STiH407 & STiH410 regarding cpufreq.
Signed-off-by: Alain Volmat avolmat@me.com Signed-off-by: Viresh Kumar viresh.kumar@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/cpufreq/sti-cpufreq.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/cpufreq/sti-cpufreq.c b/drivers/cpufreq/sti-cpufreq.c index a5ad96d29adca..4ac6fb23792a0 100644 --- a/drivers/cpufreq/sti-cpufreq.c +++ b/drivers/cpufreq/sti-cpufreq.c @@ -141,7 +141,8 @@ static const struct reg_field sti_stih407_dvfs_regfields[DVFS_MAX_REGFIELDS] = { static const struct reg_field *sti_cpufreq_match(void) { if (of_machine_is_compatible("st,stih407") || - of_machine_is_compatible("st,stih410")) + of_machine_is_compatible("st,stih410") || + of_machine_is_compatible("st,stih418")) return sti_stih407_dvfs_regfields;
return NULL; @@ -258,7 +259,8 @@ static int sti_cpufreq_init(void) int ret;
if ((!of_machine_is_compatible("st,stih407")) && - (!of_machine_is_compatible("st,stih410"))) + (!of_machine_is_compatible("st,stih410")) && + (!of_machine_is_compatible("st,stih418"))) return -ENODEV;
ddata.cpu = get_cpu_device(0);
From: Oliver Neukum oneukum@suse.com
[ Upstream commit c56150c1bc8da5524831b1dac2eec3c67b89f587 ]
Handling for removal of the controller was missing at one place. Add it.
Signed-off-by: Oliver Neukum oneukum@suse.com Link: https://lore.kernel.org/r/20200917112600.26508-1-oneukum@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/misc/adutux.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/usb/misc/adutux.c b/drivers/usb/misc/adutux.c index a7eefe11f31aa..45a3879799352 100644 --- a/drivers/usb/misc/adutux.c +++ b/drivers/usb/misc/adutux.c @@ -209,6 +209,7 @@ static void adu_interrupt_out_callback(struct urb *urb)
if (status != 0) { if ((status != -ENOENT) && + (status != -ESHUTDOWN) && (status != -ECONNRESET)) { dev_dbg(&dev->udev->dev, "%s :nonzero status received: %d\n", __func__,
From: Lang Dai lang.dai@intel.com
[ Upstream commit 8fd0e2a6df262539eaa28b0a2364cca10d1dc662 ]
uio_register_device() does two things: 1) get a uio id from a global pool, e.g. the id is <A>; 2) create file nodes like /sys/class/uio/uio<A>.
uio_unregister_device() does two things: 1) free the uio id <A> and return it to the global pool; 2) free the file node /sys/class/uio/uio<A>.
There is a situation where one worker is calling uio_unregister_device() while another worker is calling uio_register_device(). If the two workers are X and Y, they can interleave as follows: 1) X frees the uio id <AAA>; 2) Y gets the uio id <AAA>; 3) Y creates the file node /sys/class/uio/uio<AAA>; 4) X frees the file node /sys/class/uio/uio<AAA>. Step 3 then fails because it creates a duplicate file node, which is the phenomenon we saw.
Failure reports as follows: sysfs: cannot create duplicate filename '/class/uio/uio10' Call Trace: sysfs_do_create_link_sd.isra.2+0x9e/0xb0 sysfs_create_link+0x25/0x40 device_add+0x2c4/0x640 __uio_register_device+0x1c5/0x576 [uio] adf_uio_init_bundle_dev+0x231/0x280 [intel_qat] adf_uio_register+0x1c0/0x340 [intel_qat] adf_dev_start+0x202/0x370 [intel_qat] adf_dev_start_async+0x40/0xa0 [intel_qat] process_one_work+0x14d/0x410 worker_thread+0x4b/0x460 kthread+0x105/0x140 ? process_one_work+0x410/0x410 ? kthread_bind+0x40/0x40 ret_from_fork+0x1f/0x40 Code: 85 c0 48 89 c3 74 12 b9 00 10 00 00 48 89 c2 31 f6 4c 89 ef e8 ec c4 ff ff 4c 89 e2 48 89 de 48 c7 c7 e8 b4 ee b4 e8 6a d4 d7 ff <0f> 0b 48 89 df e8 20 fa f3 ff 5b 41 5c 41 5d 5d c3 66 0f 1f 84 ---[ end trace a7531c1ed5269e84 ]--- c6xxvf b002:00:00.0: Failed to register UIO devices c6xxvf b002:00:00.0: Failed to register UIO devices
Signed-off-by: Lang Dai lang.dai@intel.com
Link: https://lore.kernel.org/r/1600054002-17722-1-git-send-email-lang.dai@intel.c... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/uio/uio.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c index 73efb80815db8..6dca744e39e95 100644 --- a/drivers/uio/uio.c +++ b/drivers/uio/uio.c @@ -1048,8 +1048,6 @@ void uio_unregister_device(struct uio_info *info)
idev = info->uio_dev;
- uio_free_minor(idev); - mutex_lock(&idev->info_lock); uio_dev_del_attributes(idev);
@@ -1064,6 +1062,8 @@ void uio_unregister_device(struct uio_info *info)
device_unregister(&idev->dev);
+ uio_free_minor(idev); + return; } EXPORT_SYMBOL_GPL(uio_unregister_device);
From: Linu Cherian lcherian@marvell.com
[ Upstream commit 6d578258b955fc8888e1bbd9a8fefe7b10065a84 ]
The coresight driver assumes the sink is common across all the ETMs and tries to build a path between an ETM and the first enabled sink found using a bus-based search. This breaks sysFS usage on implementations that have multiple per-core sinks in the enabled state.
To fix this, the coresight_get_enabled_sink() API is updated to do a connection-based search starting from the given source instead of a bus-based search. With sink selection via sysfs deprecated for the perf interface, the provision for resetting the flag is removed from this API as well.
Signed-off-by: Linu Cherian lcherian@marvell.com [Fixed indentation problem and removed obsolete comment] Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Link: https://lore.kernel.org/r/20200916191737.4001561-15-mathieu.poirier@linaro.o... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hwtracing/coresight/coresight-priv.h | 3 +- drivers/hwtracing/coresight/coresight.c | 62 +++++++++----------- 2 files changed, 29 insertions(+), 36 deletions(-)
diff --git a/drivers/hwtracing/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h index f2dc625ea5856..5fe773c4d6cc5 100644 --- a/drivers/hwtracing/coresight/coresight-priv.h +++ b/drivers/hwtracing/coresight/coresight-priv.h @@ -148,7 +148,8 @@ static inline void coresight_write_reg_pair(void __iomem *addr, u64 val, void coresight_disable_path(struct list_head *path); int coresight_enable_path(struct list_head *path, u32 mode, void *sink_data); struct coresight_device *coresight_get_sink(struct list_head *path); -struct coresight_device *coresight_get_enabled_sink(bool reset); +struct coresight_device * +coresight_get_enabled_sink(struct coresight_device *source); struct coresight_device *coresight_get_sink_by_id(u32 id); struct coresight_device * coresight_find_default_sink(struct coresight_device *csdev); diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c index cdcb1917216fd..fd46216669449 100644 --- a/drivers/hwtracing/coresight/coresight.c +++ b/drivers/hwtracing/coresight/coresight.c @@ -540,50 +540,46 @@ struct coresight_device *coresight_get_sink(struct list_head *path) return csdev; }
-static int coresight_enabled_sink(struct device *dev, const void *data) +static struct coresight_device * +coresight_find_enabled_sink(struct coresight_device *csdev) { - const bool *reset = data; - struct coresight_device *csdev = to_coresight_device(dev); + int i; + struct coresight_device *sink;
if ((csdev->type == CORESIGHT_DEV_TYPE_SINK || csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) && - csdev->activated) { - /* - * Now that we have a handle on the sink for this session, - * disable the sysFS "enable_sink" flag so that possible - * concurrent perf session that wish to use another sink don't - * trip on it. Doing so has no ramification for the current - * session. - */ - if (*reset) - csdev->activated = false; + csdev->activated) + return csdev;
- return 1; + /* + * Recursively explore each port found on this element. + */ + for (i = 0; i < csdev->pdata->nr_outport; i++) { + struct coresight_device *child_dev; + + child_dev = csdev->pdata->conns[i].child_dev; + if (child_dev) + sink = coresight_find_enabled_sink(child_dev); + if (sink) + return sink; }
- return 0; + return NULL; }
/** - * coresight_get_enabled_sink - returns the first enabled sink found on the bus - * @deactivate: Whether the 'enable_sink' flag should be reset - * - * When operated from perf the deactivate parameter should be set to 'true'. - * That way the "enabled_sink" flag of the sink that was selected can be reset, - * allowing for other concurrent perf sessions to choose a different sink. + * coresight_get_enabled_sink - returns the first enabled sink using + * connection based search starting from the source reference * - * When operated from sysFS users have full control and as such the deactivate - * parameter should be set to 'false', hence mandating users to explicitly - * clear the flag. + * @source: Coresight source device reference */ -struct coresight_device *coresight_get_enabled_sink(bool deactivate) +struct coresight_device * +coresight_get_enabled_sink(struct coresight_device *source) { - struct device *dev = NULL; - - dev = bus_find_device(&coresight_bustype, NULL, &deactivate, - coresight_enabled_sink); + if (!source) + return NULL;
- return dev ? to_coresight_device(dev) : NULL; + return coresight_find_enabled_sink(source); }
static int coresight_sink_by_id(struct device *dev, const void *data) @@ -988,11 +984,7 @@ int coresight_enable(struct coresight_device *csdev) goto out; }
- /* - * Search for a valid sink for this session but don't reset the - * "enable_sink" flag in sysFS. Users get to do that explicitly. - */ - sink = coresight_get_enabled_sink(false); + sink = coresight_get_enabled_sink(csdev); if (!sink) { ret = -EINVAL; goto out;
From: Luben Tuikov luben.tuikov@amd.com
[ Upstream commit 5aea5327ea2ddf544cbeff096f45fc2319b0714e ]
Failing to create the amdgpu sysfs attributes is not a fatal error that warrants giving up on bringing up the display. Thus, if we get an error while creating the amdgpu sysfs attrs, report it and continue trying to bring up the display.
Signed-off-by: Luben Tuikov luben.tuikov@amd.com Acked-by: Slava Abramov slava.abramov@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index d0b8d0d341af5..2576c299958c5 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -3316,10 +3316,8 @@ fence_driver_init: flush_delayed_work(&adev->delayed_init_work);
r = sysfs_create_files(&adev->dev->kobj, amdgpu_dev_attributes); - if (r) { + if (r) dev_err(adev->dev, "Could not create amdgpu device attr\n"); - return r; - }
if (IS_ENABLED(CONFIG_PERF_EVENTS)) r = amdgpu_pmu_init(adev);
From: Felix Fietkau nbd@nbd.name
[ Upstream commit 5f8d69eaab1915df97f4f2aca89ea16abdd092d5 ]
Fixes AQL for encap-offloaded tx
Signed-off-by: Felix Fietkau nbd@nbd.name Link: https://lore.kernel.org/r/20200908123702.88454-2-nbd@nbd.name Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/tx.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c index dca01d7e6e3e0..282b0bc201eeb 100644 --- a/net/mac80211/tx.c +++ b/net/mac80211/tx.c @@ -4209,6 +4209,12 @@ static void ieee80211_8023_xmit(struct ieee80211_sub_if_data *sdata, if (is_zero_ether_addr(ra)) goto out_free;
+ if (local->ops->wake_tx_queue) { + u16 queue = __ieee80211_select_queue(sdata, sta, skb); + skb_set_queue_mapping(skb, queue); + skb_get_hash(skb); + } + multicast = is_multicast_ether_addr(ra);
if (sta)
From: Peter Chen peter.chen@nxp.com
[ Upstream commit 18a367e8947d72dd91b6fc401e88a2952c6363f7 ]
If xhci-plat.c is the platform driver, then after runtime PM is enabled, xhci_suspend() is called when nothing is connected on the port. When the system goes to suspend, it will call xhci_suspend() again if USB wakeup is enabled.
Since the runtime-suspend wakeup setting is not always the same as the system-suspend wakeup setting (e.g., at runtime suspend we always need wakeup if the controller is in a low-power mode, but at system suspend we may not need wakeup), move the check after the wakeup setting has been changed.
[commit message rewording -Mathias]
Reviewed-by: Jun Li jun.li@nxp.com Signed-off-by: Peter Chen peter.chen@nxp.com Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20200918131752.16488-8-mathias.nyman@linux.intel.c... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/xhci.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index e534f524b7f87..e88f4f9539955 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -982,12 +982,15 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup) xhci->shared_hcd->state != HC_STATE_SUSPENDED) return -EINVAL;
- xhci_dbc_suspend(xhci); - /* Clear root port wake on bits if wakeup not allowed. */ if (!do_wakeup) xhci_disable_port_wake_on_bits(xhci);
+ if (!HCD_HW_ACCESSIBLE(hcd)) + return 0; + + xhci_dbc_suspend(xhci); + /* Don't poll the roothubs on bus suspend. */ xhci_dbg(xhci, "%s: stopping port polling.\n", __func__); clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
From: Chuck Lever chuck.lever@oracle.com
[ Upstream commit 6f9f17287e78e5049931af2037b15b26d134a32a ]
The original purpose of this expensive call is to prevent a long queue of requests from blocking other work.
The cond_resched() call is unnecessary after just a single send operation.
For longer queues, instead of invoking the kernel scheduler, simply release the transport send lock and return to the RPC scheduler.
Signed-off-by: Chuck Lever chuck.lever@oracle.com Signed-off-by: Anna Schumaker Anna.Schumaker@Netapp.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/sunrpc/xprt.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c index 5a8e47bbfb9f4..13fbc2dd4196a 100644 --- a/net/sunrpc/xprt.c +++ b/net/sunrpc/xprt.c @@ -1520,10 +1520,13 @@ xprt_transmit(struct rpc_task *task) { struct rpc_rqst *next, *req = task->tk_rqstp; struct rpc_xprt *xprt = req->rq_xprt; - int status; + int counter, status;
spin_lock(&xprt->queue_lock); + counter = 0; while (!list_empty(&xprt->xmit_queue)) { + if (++counter == 20) + break; next = list_first_entry(&xprt->xmit_queue, struct rpc_rqst, rq_xmit); xprt_pin_rqst(next); @@ -1531,7 +1534,6 @@ xprt_transmit(struct rpc_task *task) status = xprt_request_transmit(next, task); if (status == -EBADMSG && next != req) status = 0; - cond_resched(); spin_lock(&xprt->queue_lock); xprt_unpin_rqst(next); if (status == 0) {
From: Dmitry Osipenko digetx@gmail.com
[ Upstream commit 1170433e6611402b869c583fa1fbfd85106ff066 ]
The enter() callback of CPUIDLE drivers returns the index of the entered idle state on success or a negative value on failure. The negative value could be any negative value, i.e. it doesn't necessarily need to be an error code, because the CPUIDLE core only cares about the fact of the failure and not about the reason for it.
Like every other enter() callback, arm_cpuidle_simple_enter() returns the entered idle index on success; unlike some other drivers, it never fails. It happens that TEGRA_C1 = index = err = 0 in the cpuidle-tegra code, so the typo in the code, which assumes that arm_cpuidle_simple_enter() returns an error code, does not create a problem for the cpuidle-tegra driver.
arm_cpuidle_simple_enter() may also return a -ENODEV error if CPU_IDLE is disabled in the kernel config, but all CPUIDLE drivers, including cpuidle-tegra, are disabled if CPU_IDLE is disabled. So we can never see an error code from arm_cpuidle_simple_enter() today.
Of course the code may change in the future and the typo may then turn into a real bug, so let's correct it. tegra_cpuidle_state_enter() is now changed to return the entered idle index on success and a negative error code on failure, which puts it on par with arm_cpuidle_simple_enter() and makes the error handling consistent.
This patch fixes a minor typo in the code; it doesn't fix any bugs.
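A minimal sketch of the enter() contract described above (hypothetical driver code, not taken from this patch):

#include <linux/cpuidle.h>

static int my_enter_state(int index);	/* hypothetical low-level helper */

static int my_idle_enter(struct cpuidle_device *dev,
			 struct cpuidle_driver *drv, int index)
{
	int err = my_enter_state(index);

	/* On success return the index actually entered; on failure any
	 * negative value will do, since the core ignores the exact errno. */
	return err ?: index;
}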
Signed-off-by: Dmitry Osipenko digetx@gmail.com Reviewed-by: Jon Hunter jonathanh@nvidia.com Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/cpuidle/cpuidle-tegra.c | 34 +++++++++++++++++++-------------- 1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/drivers/cpuidle/cpuidle-tegra.c b/drivers/cpuidle/cpuidle-tegra.c index a12fb141875a7..e8956706a2917 100644 --- a/drivers/cpuidle/cpuidle-tegra.c +++ b/drivers/cpuidle/cpuidle-tegra.c @@ -172,7 +172,7 @@ static int tegra_cpuidle_coupled_barrier(struct cpuidle_device *dev) static int tegra_cpuidle_state_enter(struct cpuidle_device *dev, int index, unsigned int cpu) { - int ret; + int err;
/* * CC6 state is the "CPU cluster power-off" state. In order to @@ -183,9 +183,9 @@ static int tegra_cpuidle_state_enter(struct cpuidle_device *dev, * CPU cores, GIC and L2 cache). */ if (index == TEGRA_CC6) { - ret = tegra_cpuidle_coupled_barrier(dev); - if (ret) - return ret; + err = tegra_cpuidle_coupled_barrier(dev); + if (err) + return err; }
local_fiq_disable(); @@ -194,15 +194,15 @@ static int tegra_cpuidle_state_enter(struct cpuidle_device *dev,
switch (index) { case TEGRA_C7: - ret = tegra_cpuidle_c7_enter(); + err = tegra_cpuidle_c7_enter(); break;
case TEGRA_CC6: - ret = tegra_cpuidle_cc6_enter(cpu); + err = tegra_cpuidle_cc6_enter(cpu); break;
default: - ret = -EINVAL; + err = -EINVAL; break; }
@@ -210,7 +210,7 @@ static int tegra_cpuidle_state_enter(struct cpuidle_device *dev, tegra_pm_clear_cpu_in_lp2(); local_fiq_enable();
- return ret; + return err ?: index; }
static int tegra_cpuidle_adjust_state_index(int index, unsigned int cpu) @@ -236,21 +236,27 @@ static int tegra_cpuidle_enter(struct cpuidle_device *dev, int index) { unsigned int cpu = cpu_logical_map(dev->cpu); - int err; + int ret;
index = tegra_cpuidle_adjust_state_index(index, cpu); if (dev->states_usage[index].disable) return -1;
if (index == TEGRA_C1) - err = arm_cpuidle_simple_enter(dev, drv, index); + ret = arm_cpuidle_simple_enter(dev, drv, index); else - err = tegra_cpuidle_state_enter(dev, index, cpu); + ret = tegra_cpuidle_state_enter(dev, index, cpu);
- if (err && (err != -EINTR || index != TEGRA_CC6)) - pr_err_once("failed to enter state %d err: %d\n", index, err); + if (ret < 0) { + if (ret != -EINTR || index != TEGRA_CC6) + pr_err_once("failed to enter state %d err: %d\n", + index, ret); + index = -1; + } else { + index = ret; + }
- return err ? -1 : index; + return index; }
static int tegra114_enter_s2idle(struct cpuidle_device *dev,
From: Zhengyuan Liu liuzhengyuan@tj.kylinos.cn
[ Upstream commit a194c5f2d2b3a05428805146afcabe5140b5d378 ]
The @node passed to cpumask_of_node() can be NUMA_NO_NODE; in that case it triggers the WARN_ON(node >= nr_node_ids) due to the mismatched data types of @node and @nr_node_ids. We should instead return cpu_all_mask when passed NUMA_NO_NODE, just like most other architectures do.
Also add a similar check to the inline cpumask_of_node() in numa.h.
Signed-off-by: Zhengyuan Liu liuzhengyuan@tj.kylinos.cn Reviewed-by: Gavin Shan gshan@redhat.com Link: https://lore.kernel.org/r/20200921023936.21846-1-liuzhengyuan@tj.kylinos.cn Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/include/asm/numa.h | 3 +++ arch/arm64/mm/numa.c | 6 +++++- 2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/numa.h b/arch/arm64/include/asm/numa.h index 626ad01e83bf0..dd870390d639f 100644 --- a/arch/arm64/include/asm/numa.h +++ b/arch/arm64/include/asm/numa.h @@ -25,6 +25,9 @@ const struct cpumask *cpumask_of_node(int node); /* Returns a pointer to the cpumask of CPUs on Node 'node'. */ static inline const struct cpumask *cpumask_of_node(int node) { + if (node == NUMA_NO_NODE) + return cpu_all_mask; + return node_to_cpumask_map[node]; } #endif diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c index 73f8b49d485c2..88e51aade0da0 100644 --- a/arch/arm64/mm/numa.c +++ b/arch/arm64/mm/numa.c @@ -46,7 +46,11 @@ EXPORT_SYMBOL(node_to_cpumask_map); */ const struct cpumask *cpumask_of_node(int node) { - if (WARN_ON(node >= nr_node_ids)) + + if (node == NUMA_NO_NODE) + return cpu_all_mask; + + if (WARN_ON(node < 0 || node >= nr_node_ids)) return cpu_none_mask;
if (WARN_ON(node_to_cpumask_map[node] == NULL))
From: Joakim Zhang qiangqing.zhang@nxp.com
[ Upstream commit 02f71c6605e1f8259c07f16178330db766189a74 ]
Disable clocks while CAN core is in stop mode.
Signed-off-by: Joakim Zhang qiangqing.zhang@nxp.com Tested-by: Sean Nyekjaer sean@geanix.com Link: https://lore.kernel.org/r/20191210085721.9853-2-qiangqing.zhang@nxp.com Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/can/flexcan.c | 30 ++++++++++++++++++++---------- 1 file changed, 20 insertions(+), 10 deletions(-)
diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c index 2ac7a667bde35..bc21a82cf3a76 100644 --- a/drivers/net/can/flexcan.c +++ b/drivers/net/can/flexcan.c @@ -1722,8 +1722,6 @@ static int __maybe_unused flexcan_suspend(struct device *device) err = flexcan_chip_disable(priv); if (err) return err; - - err = pm_runtime_force_suspend(device); } netif_stop_queue(dev); netif_device_detach(dev); @@ -1749,10 +1747,6 @@ static int __maybe_unused flexcan_resume(struct device *device) if (err) return err; } else { - err = pm_runtime_force_resume(device); - if (err) - return err; - err = flexcan_chip_enable(priv); } } @@ -1783,8 +1777,16 @@ static int __maybe_unused flexcan_noirq_suspend(struct device *device) struct net_device *dev = dev_get_drvdata(device); struct flexcan_priv *priv = netdev_priv(dev);
- if (netif_running(dev) && device_may_wakeup(device)) - flexcan_enable_wakeup_irq(priv, true); + if (netif_running(dev)) { + int err; + + if (device_may_wakeup(device)) + flexcan_enable_wakeup_irq(priv, true); + + err = pm_runtime_force_suspend(device); + if (err) + return err; + }
return 0; } @@ -1794,8 +1796,16 @@ static int __maybe_unused flexcan_noirq_resume(struct device *device) struct net_device *dev = dev_get_drvdata(device); struct flexcan_priv *priv = netdev_priv(dev);
- if (netif_running(dev) && device_may_wakeup(device)) - flexcan_enable_wakeup_irq(priv, false); + if (netif_running(dev)) { + int err; + + err = pm_runtime_force_resume(device); + if (err) + return err; + + if (device_may_wakeup(device)) + flexcan_enable_wakeup_irq(priv, false); + }
return 0; }
From: farah kassabri fkassabri@habana.ai
[ Upstream commit acd330c141b4c49f468f00719ebc944656061eac ]
Allow user application to write to this register in order to be able to configure the quiet period of the QMAN between grants.
Signed-off-by: farah kassabri fkassabri@habana.ai Reviewed-by: Oded Gabbay oded.gabbay@gmail.com Signed-off-by: Oded Gabbay oded.gabbay@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../misc/habanalabs/gaudi/gaudi_security.c | 55 +++++++------------ 1 file changed, 19 insertions(+), 36 deletions(-)
diff --git a/drivers/misc/habanalabs/gaudi/gaudi_security.c b/drivers/misc/habanalabs/gaudi/gaudi_security.c index 8d5d6ddee6eda..615b547ad2b7d 100644 --- a/drivers/misc/habanalabs/gaudi/gaudi_security.c +++ b/drivers/misc/habanalabs/gaudi/gaudi_security.c @@ -831,8 +831,7 @@ static void gaudi_init_mme_protection_bits(struct hl_device *hdev) PROT_BITS_OFFS; word_offset = ((mmMME0_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmMME0_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmMME0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmMME0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmMME0_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmMME0_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmMME0_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -1311,8 +1310,7 @@ static void gaudi_init_mme_protection_bits(struct hl_device *hdev) PROT_BITS_OFFS; word_offset = ((mmMME2_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmMME2_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmMME2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmMME2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmMME2_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmMME2_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmMME2_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -1790,8 +1788,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev) word_offset = ((mmDMA0_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmDMA0_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmDMA0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmDMA0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmDMA0_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmDMA0_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmDMA0_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -2186,8 +2183,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev) word_offset = ((mmDMA1_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmDMA1_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmDMA1_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmDMA1_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmDMA1_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmDMA1_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmDMA1_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -2582,8 +2578,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev) word_offset = ((mmDMA2_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmDMA2_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmDMA2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmDMA2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmDMA2_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmDMA2_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmDMA2_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -2978,8 +2973,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev) word_offset = ((mmDMA3_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmDMA3_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmDMA3_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmDMA3_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmDMA3_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmDMA3_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmDMA3_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -3374,8 +3368,7 @@ static void 
gaudi_init_dma_protection_bits(struct hl_device *hdev) word_offset = ((mmDMA4_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmDMA4_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmDMA4_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmDMA4_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmDMA4_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmDMA4_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmDMA4_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -3770,8 +3763,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev) word_offset = ((mmDMA5_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmDMA5_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmDMA5_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmDMA5_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmDMA5_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmDMA5_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmDMA5_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -4166,8 +4158,8 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev) word_offset = ((mmDMA6_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmDMA6_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmDMA6_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + + mask = 1 << ((mmDMA6_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmDMA6_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmDMA6_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmDMA6_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -4562,8 +4554,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev) word_offset = ((mmDMA7_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmDMA7_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmDMA7_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmDMA7_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmDMA7_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmDMA7_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmDMA7_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -5491,8 +5482,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev)
word_offset = ((mmTPC0_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmTPC0_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmTPC0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmTPC0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmTPC0_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmTPC0_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmTPC0_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -5947,8 +5937,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev)
word_offset = ((mmTPC1_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmTPC1_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmTPC1_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmTPC1_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmTPC1_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmTPC1_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmTPC1_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -6402,8 +6391,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev) PROT_BITS_OFFS; word_offset = ((mmTPC2_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmTPC2_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmTPC2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmTPC2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmTPC2_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmTPC2_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmTPC2_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -6857,8 +6845,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev) PROT_BITS_OFFS; word_offset = ((mmTPC3_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmTPC3_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmTPC3_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmTPC3_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmTPC3_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmTPC3_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmTPC3_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -7312,8 +7299,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev) PROT_BITS_OFFS; word_offset = ((mmTPC4_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmTPC4_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmTPC4_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmTPC4_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmTPC4_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmTPC4_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmTPC4_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -7767,8 +7753,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev) PROT_BITS_OFFS; word_offset = ((mmTPC5_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmTPC5_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmTPC5_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmTPC5_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmTPC5_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmTPC5_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmTPC5_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -8223,8 +8208,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev)
word_offset = ((mmTPC6_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmTPC6_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmTPC6_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmTPC6_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmTPC6_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmTPC6_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmTPC6_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2); @@ -8681,8 +8665,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev) PROT_BITS_OFFS; word_offset = ((mmTPC7_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7) << 2; - mask = 1 << ((mmTPC7_QM_ARB_MST_QUIET_PER & 0x7F) >> 2); - mask |= 1 << ((mmTPC7_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); + mask = 1 << ((mmTPC7_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2); mask |= 1 << ((mmTPC7_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2); mask |= 1 << ((mmTPC7_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2); mask |= 1 << ((mmTPC7_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
From: Darrick J. Wong darrick.wong@oracle.com
[ Upstream commit 8df0fa39bdd86ca81a8d706a6ed9d33cc65ca625 ]
When callers pass XFS_BMAPI_REMAP into xfs_bunmapi, they want the extent to be unmapped from the given file fork without the extent being freed. We do this for non-rt files, but we forgot to do this for realtime files. So far this isn't a big deal since nobody makes a bunmapi call to a rt file with the REMAP flag set, but don't leave a logic bomb.
Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Dave Chinner dchinner@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/xfs/libxfs/xfs_bmap.c | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c index 1b0a01b06a05d..d9a692484eaed 100644 --- a/fs/xfs/libxfs/xfs_bmap.c +++ b/fs/xfs/libxfs/xfs_bmap.c @@ -5046,20 +5046,25 @@ xfs_bmap_del_extent_real(
flags = XFS_ILOG_CORE; if (whichfork == XFS_DATA_FORK && XFS_IS_REALTIME_INODE(ip)) { - xfs_fsblock_t bno; xfs_filblks_t len; xfs_extlen_t mod;
- bno = div_u64_rem(del->br_startblock, mp->m_sb.sb_rextsize, - &mod); - ASSERT(mod == 0); len = div_u64_rem(del->br_blockcount, mp->m_sb.sb_rextsize, &mod); ASSERT(mod == 0);
- error = xfs_rtfree_extent(tp, bno, (xfs_extlen_t)len); - if (error) - goto done; + if (!(bflags & XFS_BMAPI_REMAP)) { + xfs_fsblock_t bno; + + bno = div_u64_rem(del->br_startblock, + mp->m_sb.sb_rextsize, &mod); + ASSERT(mod == 0); + + error = xfs_rtfree_extent(tp, bno, (xfs_extlen_t)len); + if (error) + goto done; + } + do_fx = 0; nblks = len * mp->m_sb.sb_rextsize; qfield = XFS_TRANS_DQ_RTBCOUNT;
From: Gao Xiang hsiangkao@redhat.com
[ Upstream commit f692d09e9c8fd0f5557c2e87f796a16dd95222b8 ]
Currently, crafted h_len has been blocked for the log header of the tail block in commit a70f9fe52daa ("xfs: detect and handle invalid iclog size set by mkfs").
However, each log record could still have crafted h_len and cause log record buffer overrun. So let's check h_len vs buffer size for each log record as well.
Signed-off-by: Gao Xiang hsiangkao@redhat.com Reviewed-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Reviewed-by: Brian Foster bfoster@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/xfs/xfs_log_recover.c | 39 +++++++++++++++++++-------------------- 1 file changed, 19 insertions(+), 20 deletions(-)
diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c index e2ec91b2d0f46..9ceb67d0f2565 100644 --- a/fs/xfs/xfs_log_recover.c +++ b/fs/xfs/xfs_log_recover.c @@ -2904,7 +2904,8 @@ STATIC int xlog_valid_rec_header( struct xlog *log, struct xlog_rec_header *rhead, - xfs_daddr_t blkno) + xfs_daddr_t blkno, + int bufsize) { int hlen;
@@ -2920,10 +2921,14 @@ xlog_valid_rec_header( return -EFSCORRUPTED; }
- /* LR body must have data or it wouldn't have been written */ + /* + * LR body must have data (or it wouldn't have been written) + * and h_len must not be greater than LR buffer size. + */ hlen = be32_to_cpu(rhead->h_len); - if (XFS_IS_CORRUPT(log->l_mp, hlen <= 0 || hlen > INT_MAX)) + if (XFS_IS_CORRUPT(log->l_mp, hlen <= 0 || hlen > bufsize)) return -EFSCORRUPTED; + if (XFS_IS_CORRUPT(log->l_mp, blkno > log->l_logBBsize || blkno > INT_MAX)) return -EFSCORRUPTED; @@ -2984,9 +2989,6 @@ xlog_do_recovery_pass( goto bread_err1;
rhead = (xlog_rec_header_t *)offset; - error = xlog_valid_rec_header(log, rhead, tail_blk); - if (error) - goto bread_err1;
/* * xfsprogs has a bug where record length is based on lsunit but @@ -3001,21 +3003,18 @@ xlog_do_recovery_pass( */ h_size = be32_to_cpu(rhead->h_size); h_len = be32_to_cpu(rhead->h_len); - if (h_len > h_size) { - if (h_len <= log->l_mp->m_logbsize && - be32_to_cpu(rhead->h_num_logops) == 1) { - xfs_warn(log->l_mp, + if (h_len > h_size && h_len <= log->l_mp->m_logbsize && + rhead->h_num_logops == cpu_to_be32(1)) { + xfs_warn(log->l_mp, "invalid iclog size (%d bytes), using lsunit (%d bytes)", - h_size, log->l_mp->m_logbsize); - h_size = log->l_mp->m_logbsize; - } else { - XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, - log->l_mp); - error = -EFSCORRUPTED; - goto bread_err1; - } + h_size, log->l_mp->m_logbsize); + h_size = log->l_mp->m_logbsize; }
+ error = xlog_valid_rec_header(log, rhead, tail_blk, h_size); + if (error) + goto bread_err1; + if ((be32_to_cpu(rhead->h_version) & XLOG_VERSION_2) && (h_size > XLOG_HEADER_CYCLE_SIZE)) { hblks = h_size / XLOG_HEADER_CYCLE_SIZE; @@ -3096,7 +3095,7 @@ xlog_do_recovery_pass( } rhead = (xlog_rec_header_t *)offset; error = xlog_valid_rec_header(log, rhead, - split_hblks ? blk_no : 0); + split_hblks ? blk_no : 0, h_size); if (error) goto bread_err2;
@@ -3177,7 +3176,7 @@ xlog_do_recovery_pass( goto bread_err2;
rhead = (xlog_rec_header_t *)offset; - error = xlog_valid_rec_header(log, rhead, blk_no); + error = xlog_valid_rec_header(log, rhead, blk_no, h_size); if (error) goto bread_err2;
From: Jonathan Cameron Jonathan.Cameron@huawei.com
[ Upstream commit 8a3decac087aa897df5af04358c2089e52e70ac4 ]
The function should check the validity of the pxm value before using it to index the pxm_to_node_map[] array.
Whilst hardening this code may be good in general, the main intent here is to enable following patches that use this function to replace acpi_map_pxm_to_node() for non-SRAT use cases, which should return NUMA_NO_NODE for PXM entries not matching those in SRAT.
Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Reviewed-by: Barry Song song.bao.hua@hisilicon.com Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/acpi/numa/srat.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/acpi/numa/srat.c b/drivers/acpi/numa/srat.c index 15bbaab8500b9..1fb486f46ee20 100644 --- a/drivers/acpi/numa/srat.c +++ b/drivers/acpi/numa/srat.c @@ -31,7 +31,7 @@ int acpi_numa __initdata;
int pxm_to_node(int pxm) { - if (pxm < 0) + if (pxm < 0 || pxm >= MAX_PXM_DOMAINS || numa_off) return NUMA_NO_NODE; return pxm_to_node_map[pxm]; }
From: Stanislaw Kardach skardach@marvell.com
[ Upstream commit 450f0b978870c384dd81d1176088536555f3170e ]
Since LD contains LTYPE definitions tweaked toward efficient NIX_AF_RX_FLOW_KEY_ALG(0..31)_FIELD(0..4) usage, the original location of NPC_LT_LD_CUSTOM0/1 was aliased with the MPLS_IN_* definitions. Moving the custom frames to values 6 and 7 removes the aliasing at the cost of custom frames also being considered when the TCP/UDP RSS algorithm is configured.
However since the goal of CUSTOM frames is to classify them to a separate set of RQs, this cost is acceptable.
Signed-off-by: Stanislaw Kardach skardach@marvell.com Acked-by: Sunil Goutham sgoutham@marvell.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/marvell/octeontx2/af/npc.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h index 3803af9231c68..c0ff5f70aa431 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h @@ -77,6 +77,8 @@ enum npc_kpu_ld_ltype { NPC_LT_LD_ICMP, NPC_LT_LD_SCTP, NPC_LT_LD_ICMP6, + NPC_LT_LD_CUSTOM0, + NPC_LT_LD_CUSTOM1, NPC_LT_LD_IGMP = 8, NPC_LT_LD_ESP, NPC_LT_LD_AH, @@ -85,8 +87,6 @@ enum npc_kpu_ld_ltype { NPC_LT_LD_NSH, NPC_LT_LD_TU_MPLS_IN_NSH, NPC_LT_LD_TU_MPLS_IN_IP, - NPC_LT_LD_CUSTOM0 = 0xE, - NPC_LT_LD_CUSTOM1 = 0xF, };
enum npc_kpu_le_ltype {
From: Wright Feng wright.feng@cypress.com
[ Upstream commit 6aa5a83a7ed8036c1388a811eb8bdfa77b21f19c ]
brcmfmac showed a warning message in fweh.c when checking the size of an event queue that was never initialized. Therefore, only cancel the worker and reset the event handler when the event handling has actually been initialized.
[ 145.505899] brcmfmac 0000:02:00.0: brcmf_pcie_setup: Dongle setup [ 145.929970] ------------[ cut here ]------------ [ 145.929994] WARNING: CPU: 0 PID: 288 at drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c:312 brcmf_fweh_detach+0xbc/0xd0 [brcmfmac] ... [ 145.930029] Call Trace: [ 145.930036] brcmf_detach+0x77/0x100 [brcmfmac] [ 145.930043] brcmf_pcie_remove+0x79/0x130 [brcmfmac] [ 145.930046] pci_device_remove+0x39/0xc0 [ 145.930048] device_release_driver_internal+0x141/0x200 [ 145.930049] device_release_driver+0x12/0x20 [ 145.930054] brcmf_pcie_setup+0x101/0x3c0 [brcmfmac] [ 145.930060] brcmf_fw_request_done+0x11d/0x1f0 [brcmfmac] [ 145.930062] ? lock_timer_base+0x7d/0xa0 [ 145.930063] ? internal_add_timer+0x1f/0xa0 [ 145.930064] ? add_timer+0x11a/0x1d0 [ 145.930066] ? __kmalloc_track_caller+0x18c/0x230 [ 145.930068] ? kstrdup_const+0x23/0x30 [ 145.930069] ? add_dr+0x46/0x80 [ 145.930070] ? devres_add+0x3f/0x50 [ 145.930072] ? usermodehelper_read_unlock+0x15/0x20 [ 145.930073] ? _request_firmware+0x288/0xa20 [ 145.930075] request_firmware_work_func+0x36/0x60 [ 145.930077] process_one_work+0x144/0x360 [ 145.930078] worker_thread+0x4d/0x3c0 [ 145.930079] kthread+0x112/0x150 [ 145.930080] ? rescuer_thread+0x340/0x340 [ 145.930081] ? kthread_park+0x60/0x60 [ 145.930083] ret_from_fork+0x25/0x30
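A minimal sketch of the same guard, using a hypothetical context struct rather than the driver's real brcmf_fweh_info; the key observation is that INIT_WORK() fills in work.func, so a NULL func means attach never initialized the event machinery:
```c
#include <linux/workqueue.h>
#include <linux/list.h>
#include <linux/string.h>
#include <linux/bug.h>

struct example_fweh {			/* hypothetical, for illustration only */
	struct work_struct event_work;
	struct list_head event_q;
	void *evt_handler[8];
};

static void example_detach(struct example_fweh *fweh)
{
	/* Skip teardown if attach never ran: INIT_WORK() would have set func. */
	if (!fweh->event_work.func)
		return;

	cancel_work_sync(&fweh->event_work);
	WARN_ON(!list_empty(&fweh->event_q));
	memset(fweh->evt_handler, 0, sizeof(fweh->evt_handler));
}
```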
Signed-off-by: Wright Feng wright.feng@cypress.com Signed-off-by: Chi-hsien Lin chi-hsien.lin@cypress.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/20200928054922.44580-3-wright.feng@cypress.com Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/wireless/broadcom/brcm80211/brcmfmac/fweh.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c index a5cced2c89ac6..921b94c4f5f9a 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c @@ -304,10 +304,12 @@ void brcmf_fweh_detach(struct brcmf_pub *drvr) { struct brcmf_fweh_info *fweh = &drvr->fweh;
- /* cancel the worker */ - cancel_work_sync(&fweh->event_work); - WARN_ON(!list_empty(&fweh->event_q)); - memset(fweh->evt_handler, 0, sizeof(fweh->evt_handler)); + /* cancel the worker if initialized */ + if (fweh->event_work.func) { + cancel_work_sync(&fweh->event_work); + WARN_ON(!list_empty(&fweh->event_q)); + memset(fweh->evt_handler, 0, sizeof(fweh->evt_handler)); + } }
/**
From: Wen Gong wgong@codeaurora.org
[ Upstream commit 6a8be1baa9116a038cb4f6158cc10134387ca0d0 ]
With SLUB debugging enabled, the crash below is seen because kmem_cache_alloc() ends up being called with GFP_KERNEL from atomic (softirq) context, where sleeping is not allowed.
To fix this issue, use GFP_ATOMIC instead of GFP_KERNEL for the kzalloc() call.
[ 357.217088] BUG: sleeping function called from invalid context at mm/slab.h:498 [ 357.217091] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 0, name: swapper/0 [ 357.217092] INFO: lockdep is turned off. [ 357.217095] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 5.9.0-rc5-wt-ath+ #196 [ 357.217096] Hardware name: Intel(R) Client Systems NUC8i7HVK/NUC8i7HVB, BIOS HNKBLi70.86A.0049.2018.0801.1601 08/01/2018 [ 357.217097] Call Trace: [ 357.217098] <IRQ> [ 357.217107] ? ath11k_dp_htt_get_ppdu_desc+0xa9/0x170 [ath11k] [ 357.217110] dump_stack+0x77/0xa0 [ 357.217113] ___might_sleep.cold+0xa6/0xb6 [ 357.217116] kmem_cache_alloc_trace+0x1f2/0x270 [ 357.217122] ath11k_dp_htt_get_ppdu_desc+0xa9/0x170 [ath11k] [ 357.217129] ath11k_htt_pull_ppdu_stats.isra.0+0x96/0x270 [ath11k] [ 357.217135] ath11k_dp_htt_htc_t2h_msg_handler+0xe7/0x1d0 [ath11k] [ 357.217137] ? trace_hardirqs_on+0x1c/0x100 [ 357.217143] ath11k_htc_rx_completion_handler+0x207/0x370 [ath11k] [ 357.217149] ath11k_ce_recv_process_cb+0x15e/0x1e0 [ath11k] [ 357.217151] ? handle_irq_event+0x70/0xa8 [ 357.217154] ath11k_pci_ce_tasklet+0x10/0x30 [ath11k_pci] [ 357.217157] tasklet_action_common.constprop.0+0xd4/0xf0 [ 357.217160] __do_softirq+0xc9/0x482 [ 357.217162] asm_call_on_stack+0x12/0x20 [ 357.217163] </IRQ> [ 357.217166] do_softirq_own_stack+0x49/0x60 [ 357.217167] irq_exit_rcu+0x9a/0xd0 [ 357.217169] common_interrupt+0xa1/0x190 [ 357.217171] asm_common_interrupt+0x1e/0x40 [ 357.217173] RIP: 0010:cpu_idle_poll.isra.0+0x2e/0x60 [ 357.217175] Code: 8b 35 26 27 74 69 e8 11 c8 3d ff e8 bc fa 42 ff e8 e7 9f 4a ff fb 65 48 8b 1c 25 80 90 01 00 48 8b 03 a8 08 74 0b eb 1c f3 90 <48> 8b 03 a8 08 75 13 8b 0 [ 357.217177] RSP: 0018:ffffffff97403ee0 EFLAGS: 00000202 [ 357.217178] RAX: 0000000000000001 RBX: ffffffff9742b8c0 RCX: 0000000000b890ca [ 357.217180] RDX: 0000000000b890ca RSI: 0000000000000001 RDI: ffffffff968d0c49 [ 357.217181] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000001 [ 357.217182] R10: ffffffff9742b8c0 R11: 0000000000000046 R12: 0000000000000000 [ 357.217183] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000066fdf520 [ 357.217186] ? cpu_idle_poll.isra.0+0x19/0x60 [ 357.217189] do_idle+0x5f/0xe0 [ 357.217191] cpu_startup_entry+0x14/0x20 [ 357.217193] start_kernel+0x443/0x464 [ 357.217196] secondary_startup_64+0xa4/0xb0
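As a rough, hypothetical illustration of the rule being applied here (not the driver's actual code): GFP_KERNEL may sleep and therefore must not be used from tasklet/softirq context, whereas GFP_ATOMIC allocates without sleeping:
```c
#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/types.h>

struct example_desc {			/* hypothetical payload */
	u32 id;
};

/* Runs in tasklet (softirq) context, so sleeping is not allowed. */
static void example_tasklet_fn(struct tasklet_struct *t)
{
	struct example_desc *desc;

	/* GFP_KERNEL here would trigger "BUG: sleeping function called from
	 * invalid context"; GFP_ATOMIC never sleeps.
	 */
	desc = kzalloc(sizeof(*desc), GFP_ATOMIC);
	if (!desc)
		return;

	desc->id = 1;
	/* ... use desc ... */
	kfree(desc);
}
```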
Tested-on: QCA6390 hw2.0 PCI WLAN.HST.1.0.1-01740-QCAHSTSWPLZ_V2_TO_X86-1
Signed-off-by: Wen Gong wgong@codeaurora.org Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/1601399736-3210-8-git-send-email-kvalo@codeaurora.... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/ath/ath11k/dp_rx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c index 791d971784ce0..055c3bb61e4c5 100644 --- a/drivers/net/wireless/ath/ath11k/dp_rx.c +++ b/drivers/net/wireless/ath/ath11k/dp_rx.c @@ -1421,7 +1421,7 @@ struct htt_ppdu_stats_info *ath11k_dp_htt_get_ppdu_desc(struct ath11k *ar, } spin_unlock_bh(&ar->data_lock);
- ppdu_info = kzalloc(sizeof(*ppdu_info), GFP_KERNEL); + ppdu_info = kzalloc(sizeof(*ppdu_info), GFP_ATOMIC); if (!ppdu_info) return NULL;
From: Carl Huang cjhuang@codeaurora.org
[ Upstream commit 2f588660e34a982377109872757f1b99d7748d21 ]
Fix warning caused by lockdep_assert_held when CONFIG_LOCKDEP is enabled.
[ 271.940647] WARNING: CPU: 6 PID: 0 at drivers/net/wireless/ath/ath11k/hal.c:818 ath11k_hal_srng_access_begin+0x31/0x40 [ath11k] [ 271.940655] Modules linked in: qrtr_mhi qrtr ns ath11k_pci mhi ath11k qmi_helpers nvme nvme_core [ 271.940675] CPU: 6 PID: 0 Comm: swapper/6 Kdump: loaded Tainted: G W 5.9.0-rc5-kalle-bringup-wt-ath+ #4 [ 271.940682] Hardware name: Dell Inc. Inspiron 7590/08717F, BIOS 1.3.0 07/22/2019 [ 271.940698] RIP: 0010:ath11k_hal_srng_access_begin+0x31/0x40 [ath11k] [ 271.940708] Code: 48 89 f3 85 c0 75 11 48 8b 83 a8 00 00 00 8b 00 89 83 b0 00 00 00 5b c3 48 8d 7e 58 be ff ff ff ff e8 53 24 ec fa 85 c0 75 dd <0f> 0b eb d9 90 66 2e 0f 1f 84 00 00 00 00 00 55 53 48 89 f3 8b 35 [ 271.940718] RSP: 0018:ffffbdf0c0230df8 EFLAGS: 00010246 [ 271.940727] RAX: 0000000000000000 RBX: ffffa12b34e67680 RCX: ffffa12b57a0d800 [ 271.940735] RDX: 0000000000000000 RSI: 00000000ffffffff RDI: ffffa12b34e676d8 [ 271.940742] RBP: ffffa12b34e60000 R08: 0000000000000001 R09: 0000000000000001 [ 271.940753] R10: 0000000000000001 R11: 0000000000000046 R12: 0000000000000000 [ 271.940763] R13: ffffa12b34e60000 R14: ffffa12b34e60000 R15: 0000000000000000 [ 271.940774] FS: 0000000000000000(0000) GS:ffffa12b5a400000(0000) knlGS:0000000000000000 [ 271.940788] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 271.940798] CR2: 00007f8bef282008 CR3: 00000001f4224004 CR4: 00000000003706e0 [ 271.940805] Call Trace: [ 271.940813] <IRQ> [ 271.940835] ath11k_dp_tx_completion_handler+0x9e/0x950 [ath11k] [ 271.940847] ? lock_acquire+0xba/0x3b0 [ 271.940876] ath11k_dp_service_srng+0x5a/0x2e0 [ath11k] [ 271.940893] ath11k_pci_ext_grp_napi_poll+0x1e/0x80 [ath11k_pci] [ 271.940908] net_rx_action+0x283/0x4f0 [ 271.940931] __do_softirq+0xcb/0x499 [ 271.940950] asm_call_on_stack+0x12/0x20 [ 271.940963] </IRQ> [ 271.940979] do_softirq_own_stack+0x4d/0x60 [ 271.940991] irq_exit_rcu+0xb0/0xc0 [ 271.941001] common_interrupt+0xce/0x190 [ 271.941014] asm_common_interrupt+0x1e/0x40 [ 271.941026] RIP: 0010:cpuidle_enter_state+0x115/0x500
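The fix in the diff below wraps the status-ring access window in the ring's spinlock. A generic sketch of the pattern with made-up names; the BH-disabling variant is used because the same ring is also touched from softirq context:
```c
#include <linux/spinlock.h>
#include <linux/kernel.h>
#include <linux/types.h>

struct example_ring {			/* hypothetical status ring */
	spinlock_t lock;
	unsigned int head, tail;
	u32 slot[64];
};

static void example_reap(struct example_ring *ring)
{
	spin_lock_bh(&ring->lock);

	while (ring->tail != ring->head) {
		/* ... process ring->slot[ring->tail] ... */
		ring->tail = (ring->tail + 1) % ARRAY_SIZE(ring->slot);
	}

	spin_unlock_bh(&ring->lock);
}
```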
Tested-on: QCA6390 hw2.0 PCI WLAN.HST.1.0.1-01740-QCAHSTSWPLZ_V2_TO_X86-1
Signed-off-by: Carl Huang cjhuang@codeaurora.org Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/1601463073-12106-5-git-send-email-kvalo@codeaurora... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/ath/ath11k/dp_tx.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c index 1af76775b1a87..99cff8fb39773 100644 --- a/drivers/net/wireless/ath/ath11k/dp_tx.c +++ b/drivers/net/wireless/ath/ath11k/dp_tx.c @@ -514,6 +514,8 @@ void ath11k_dp_tx_completion_handler(struct ath11k_base *ab, int ring_id) u32 msdu_id; u8 mac_id;
+ spin_lock_bh(&status_ring->lock); + ath11k_hal_srng_access_begin(ab, status_ring);
while ((ATH11K_TX_COMPL_NEXT(tx_ring->tx_status_head) != @@ -533,6 +535,8 @@ void ath11k_dp_tx_completion_handler(struct ath11k_base *ab, int ring_id)
ath11k_hal_srng_access_end(ab, status_ring);
+ spin_unlock_bh(&status_ring->lock); + while (ATH11K_TX_COMPL_NEXT(tx_ring->tx_status_tail) != tx_ring->tx_status_head) { struct hal_wbm_release_ring *tx_status; u32 desc_id;
From: Wen Gong wgong@codeaurora.org
[ Upstream commit df648808c6b9989555e247530d8ca0ad0094b361 ]
While ath11k_regd_update() holds base_lock, the softirq processing WMI_REG_CHAN_LIST_CC_EVENTID may arrive and also needs to acquire the same spin lock, so a deadlock can happen. Change to the softirq-disabling lock variants (spin_lock_bh) to solve it.
[ 235.576990] ================================ [ 235.576991] WARNING: inconsistent lock state [ 235.576993] 5.9.0-rc5-wt-ath+ #196 Not tainted [ 235.576994] -------------------------------- [ 235.576995] inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage. [ 235.576997] kworker/u16:1/98 [HC0[0]:SC0[0]:HE1:SE1] takes: [ 235.576998] ffff9655f75cad98 (&ab->base_lock){+.?.}-{2:2}, at: ath11k_regd_update+0x28/0x1d0 [ath11k] [ 235.577009] {IN-SOFTIRQ-W} state was registered at: [ 235.577013] __lock_acquire+0x219/0x6e0 [ 235.577015] lock_acquire+0xb6/0x270 [ 235.577018] _raw_spin_lock+0x2c/0x70 [ 235.577023] ath11k_reg_chan_list_event.isra.0+0x10d/0x1e0 [ath11k] [ 235.577028] ath11k_wmi_tlv_op_rx+0x3c3/0x560 [ath11k] [ 235.577033] ath11k_htc_rx_completion_handler+0x207/0x370 [ath11k] [ 235.577039] ath11k_ce_recv_process_cb+0x15e/0x1e0 [ath11k] [ 235.577041] ath11k_pci_ce_tasklet+0x10/0x30 [ath11k_pci] [ 235.577043] tasklet_action_common.constprop.0+0xd4/0xf0 [ 235.577045] __do_softirq+0xc9/0x482 [ 235.577046] asm_call_on_stack+0x12/0x20 [ 235.577048] do_softirq_own_stack+0x49/0x60 [ 235.577049] irq_exit_rcu+0x9a/0xd0 [ 235.577050] common_interrupt+0xa1/0x190 [ 235.577052] asm_common_interrupt+0x1e/0x40 [ 235.577053] cpu_idle_poll.isra.0+0x2e/0x60 [ 235.577055] do_idle+0x5f/0xe0 [ 235.577056] cpu_startup_entry+0x14/0x20 [ 235.577058] start_kernel+0x443/0x464 [ 235.577060] secondary_startup_64+0xa4/0xb0 [ 235.577061] irq event stamp: 432035 [ 235.577063] hardirqs last enabled at (432035): [<ffffffff968d12b4>] _raw_spin_unlock_irqrestore+0x34/0x40 [ 235.577064] hardirqs last disabled at (432034): [<ffffffff968d10d3>] _raw_spin_lock_irqsave+0x63/0x80 [ 235.577066] softirqs last enabled at (431998): [<ffffffff967115c1>] inet6_fill_ifla6_attrs+0x3f1/0x430 [ 235.577067] softirqs last disabled at (431996): [<ffffffff9671159f>] inet6_fill_ifla6_attrs+0x3cf/0x430 [ 235.577068] [ 235.577068] other info that might help us debug this: [ 235.577069] Possible unsafe locking scenario: [ 235.577069] [ 235.577070] CPU0 [ 235.577070] ---- [ 235.577071] lock(&ab->base_lock); [ 235.577072] <Interrupt> [ 235.577073] lock(&ab->base_lock); [ 235.577074] [ 235.577074] *** DEADLOCK *** [ 235.577074] [ 235.577075] 3 locks held by kworker/u16:1/98: [ 235.577076] #0: ffff9655f75b1d48 ((wq_completion)ath11k_qmi_driver_event){+.+.}-{0:0}, at: process_one_work+0x1d3/0x5d0 [ 235.577079] #1: ffffa33cc02f3e70 ((work_completion)(&ab->qmi.event_work)){+.+.}-{0:0}, at: process_one_work+0x1d3/0x5d0 [ 235.577081] #2: ffff9655f75cad50 (&ab->core_lock){+.+.}-{3:3}, at: ath11k_core_qmi_firmware_ready.part.0+0x4e/0x160 [ath11k] [ 235.577087] [ 235.577087] stack backtrace: [ 235.577088] CPU: 3 PID: 98 Comm: kworker/u16:1 Not tainted 5.9.0-rc5-wt-ath+ #196 [ 235.577089] Hardware name: Intel(R) Client Systems NUC8i7HVK/NUC8i7HVB, BIOS HNKBLi70.86A.0049.2018.0801.1601 08/01/2018 [ 235.577095] Workqueue: ath11k_qmi_driver_event ath11k_qmi_driver_event_work [ath11k] [ 235.577096] Call Trace: [ 235.577100] dump_stack+0x77/0xa0 [ 235.577102] mark_lock_irq.cold+0x15/0x3c [ 235.577104] mark_lock+0x1d7/0x540 [ 235.577105] mark_usage+0xc7/0x140 [ 235.577107] __lock_acquire+0x219/0x6e0 [ 235.577108] ? sched_clock_cpu+0xc/0xb0 [ 235.577110] lock_acquire+0xb6/0x270 [ 235.577116] ? ath11k_regd_update+0x28/0x1d0 [ath11k] [ 235.577118] ? atomic_notifier_chain_register+0x2d/0x40 [ 235.577120] _raw_spin_lock+0x2c/0x70 [ 235.577125] ? 
ath11k_regd_update+0x28/0x1d0 [ath11k] [ 235.577130] ath11k_regd_update+0x28/0x1d0 [ath11k] [ 235.577136] __ath11k_mac_register+0x3fb/0x480 [ath11k] [ 235.577141] ath11k_mac_register+0x119/0x180 [ath11k] [ 235.577146] ath11k_core_pdev_create+0x17/0xe0 [ath11k] [ 235.577150] ath11k_core_qmi_firmware_ready.part.0+0x65/0x160 [ath11k] [ 235.577155] ath11k_qmi_driver_event_work+0x1c5/0x230 [ath11k] [ 235.577158] process_one_work+0x265/0x5d0 [ 235.577160] worker_thread+0x49/0x300 [ 235.577161] ? process_one_work+0x5d0/0x5d0 [ 235.577163] kthread+0x135/0x150 [ 235.577164] ? kthread_create_worker_on_cpu+0x60/0x60 [ 235.577166] ret_from_fork+0x22/0x30
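A hedged sketch of the deadlock pattern and its fix, with hypothetical function names: if a lock is also taken from softirq context, the process-context side must use the _bh variant so the softirq cannot preempt it on the same CPU while the lock is held:
```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_base_lock);	/* plays the role of ab->base_lock */

/* Process context (e.g. the regd update path). Plain spin_lock() here can
 * deadlock if the softirq below fires on this CPU while the lock is held;
 * spin_lock_bh() keeps softirqs disabled for the critical section.
 */
static void example_update(void)
{
	spin_lock_bh(&example_base_lock);
	/* ... update shared regulatory state ... */
	spin_unlock_bh(&example_base_lock);
}

/* Softirq context (e.g. the WMI event handler). */
static void example_event(void)
{
	spin_lock(&example_base_lock);
	/* ... read/modify the same shared state ... */
	spin_unlock(&example_base_lock);
}
```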
Tested-on: QCA6390 hw2.0 PCI WLAN.HST.1.0.1-01740-QCAHSTSWPLZ_V2_TO_X86-1
Signed-off-by: Wen Gong wgong@codeaurora.org Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/1601399736-3210-7-git-send-email-kvalo@codeaurora.... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/ath/ath11k/reg.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c index 7c9dc91cc48a9..c79a7c7eb56ee 100644 --- a/drivers/net/wireless/ath/ath11k/reg.c +++ b/drivers/net/wireless/ath/ath11k/reg.c @@ -206,7 +206,7 @@ int ath11k_regd_update(struct ath11k *ar, bool init) ab = ar->ab; pdev_id = ar->pdev_idx;
- spin_lock(&ab->base_lock); + spin_lock_bh(&ab->base_lock);
if (init) { /* Apply the regd received during init through @@ -227,7 +227,7 @@ int ath11k_regd_update(struct ath11k *ar, bool init)
if (!regd) { ret = -EINVAL; - spin_unlock(&ab->base_lock); + spin_unlock_bh(&ab->base_lock); goto err; }
@@ -238,7 +238,7 @@ int ath11k_regd_update(struct ath11k *ar, bool init) if (regd_copy) ath11k_copy_regd(regd, regd_copy);
- spin_unlock(&ab->base_lock); + spin_unlock_bh(&ab->base_lock);
if (!regd_copy) { ret = -ENOMEM;
From: Xie He xie.he.0141@gmail.com
[ Upstream commit 8306266c1d51aac9aa7aa907fe99032a58c6382c ]
The fr_hard_header function is used to prepend the header to skbs before transmission. It is used in 3 situations: 1) When a control packet is generated internally in this driver; 2) When a user sends an skb on an Ethernet-emulating PVC device; 3) When a user sends an skb on a normal PVC device.
These 3 situations need to be handled differently by fr_hard_header. Different headers should be prepended to the skb in different situations.
Currently fr_hard_header distinguishes these 3 situations using skb->protocol. For situation 1 and 2, a special skb->protocol value will be assigned before calling fr_hard_header, so that it can recognize these 2 situations. All skb->protocol values other than these special ones are treated by fr_hard_header as situation 3.
However, it is possible that in situation 3, the user sends an skb with one of the special skb->protocol values. In this case, fr_hard_header would incorrectly treat it as situation 1 or 2.
This patch tries to solve this issue by using skb->dev instead of skb->protocol to distinguish between these 3 situations. For situation 1, skb->dev would be NULL; for situation 2, skb->dev->type would be ARPHRD_ETHER; and for situation 3, skb->dev->type would be ARPHRD_DLCI.
This way fr_hard_header would be able to distinguish these 3 situations correctly regardless what skb->protocol value the user tries to use in situation 3.
Cc: Krzysztof Halasa khc@pm.waw.pl Signed-off-by: Xie He xie.he.0141@gmail.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wan/hdlc_fr.c | 98 ++++++++++++++++++++------------------- 1 file changed, 51 insertions(+), 47 deletions(-)
diff --git a/drivers/net/wan/hdlc_fr.c b/drivers/net/wan/hdlc_fr.c index d6cfd51613ed8..3a44dad87602d 100644 --- a/drivers/net/wan/hdlc_fr.c +++ b/drivers/net/wan/hdlc_fr.c @@ -273,63 +273,69 @@ static inline struct net_device **get_dev_p(struct pvc_device *pvc,
static int fr_hard_header(struct sk_buff **skb_p, u16 dlci) { - u16 head_len; struct sk_buff *skb = *skb_p;
- switch (skb->protocol) { - case cpu_to_be16(NLPID_CCITT_ANSI_LMI): - head_len = 4; - skb_push(skb, head_len); - skb->data[3] = NLPID_CCITT_ANSI_LMI; - break; - - case cpu_to_be16(NLPID_CISCO_LMI): - head_len = 4; - skb_push(skb, head_len); - skb->data[3] = NLPID_CISCO_LMI; - break; - - case cpu_to_be16(ETH_P_IP): - head_len = 4; - skb_push(skb, head_len); - skb->data[3] = NLPID_IP; - break; - - case cpu_to_be16(ETH_P_IPV6): - head_len = 4; - skb_push(skb, head_len); - skb->data[3] = NLPID_IPV6; - break; - - case cpu_to_be16(ETH_P_802_3): - head_len = 10; - if (skb_headroom(skb) < head_len) { - struct sk_buff *skb2 = skb_realloc_headroom(skb, - head_len); + if (!skb->dev) { /* Control packets */ + switch (dlci) { + case LMI_CCITT_ANSI_DLCI: + skb_push(skb, 4); + skb->data[3] = NLPID_CCITT_ANSI_LMI; + break; + + case LMI_CISCO_DLCI: + skb_push(skb, 4); + skb->data[3] = NLPID_CISCO_LMI; + break; + + default: + return -EINVAL; + } + + } else if (skb->dev->type == ARPHRD_DLCI) { + switch (skb->protocol) { + case htons(ETH_P_IP): + skb_push(skb, 4); + skb->data[3] = NLPID_IP; + break; + + case htons(ETH_P_IPV6): + skb_push(skb, 4); + skb->data[3] = NLPID_IPV6; + break; + + default: + skb_push(skb, 10); + skb->data[3] = FR_PAD; + skb->data[4] = NLPID_SNAP; + /* OUI 00-00-00 indicates an Ethertype follows */ + skb->data[5] = 0x00; + skb->data[6] = 0x00; + skb->data[7] = 0x00; + /* This should be an Ethertype: */ + *(__be16 *)(skb->data + 8) = skb->protocol; + } + + } else if (skb->dev->type == ARPHRD_ETHER) { + if (skb_headroom(skb) < 10) { + struct sk_buff *skb2 = skb_realloc_headroom(skb, 10); if (!skb2) return -ENOBUFS; dev_kfree_skb(skb); skb = *skb_p = skb2; } - skb_push(skb, head_len); + skb_push(skb, 10); skb->data[3] = FR_PAD; skb->data[4] = NLPID_SNAP; - skb->data[5] = FR_PAD; + /* OUI 00-80-C2 stands for the 802.1 organization */ + skb->data[5] = 0x00; skb->data[6] = 0x80; skb->data[7] = 0xC2; + /* PID 00-07 stands for Ethernet frames without FCS */ skb->data[8] = 0x00; - skb->data[9] = 0x07; /* bridged Ethernet frame w/out FCS */ - break; + skb->data[9] = 0x07;
- default: - head_len = 10; - skb_push(skb, head_len); - skb->data[3] = FR_PAD; - skb->data[4] = NLPID_SNAP; - skb->data[5] = FR_PAD; - skb->data[6] = FR_PAD; - skb->data[7] = FR_PAD; - *(__be16*)(skb->data + 8) = skb->protocol; + } else { + return -EINVAL; }
dlci_to_q922(skb->data, dlci); @@ -425,8 +431,8 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev) skb_put(skb, pad); memset(skb->data + len, 0, pad); } - skb->protocol = cpu_to_be16(ETH_P_802_3); } + skb->dev = dev; if (!fr_hard_header(&skb, pvc->dlci)) { dev->stats.tx_bytes += skb->len; dev->stats.tx_packets++; @@ -494,10 +500,8 @@ static void fr_lmi_send(struct net_device *dev, int fullrep) memset(skb->data, 0, len); skb_reserve(skb, 4); if (lmi == LMI_CISCO) { - skb->protocol = cpu_to_be16(NLPID_CISCO_LMI); fr_hard_header(&skb, LMI_CISCO_DLCI); } else { - skb->protocol = cpu_to_be16(NLPID_CCITT_ANSI_LMI); fr_hard_header(&skb, LMI_CCITT_ANSI_DLCI); } data = skb_tail_pointer(skb);
From: Li Jun jun.li@nxp.com
[ Upstream commit dc336b19e82d0454ea60270cd18fbb4749e162f6 ]
Do not try to queue the DRD work if dr_mode is not USB_DR_MODE_OTG, because the work is not initialized in that case. This can be triggered by a user trying to change the debugfs mode file on a single-role port, which causes the kernel dump below:
[ 60.115529] ------------[ cut here ]------------ [ 60.120166] WARNING: CPU: 1 PID: 627 at kernel/workqueue.c:1473 __queue_work+0x46c/0x520 [ 60.128254] Modules linked in: [ 60.131313] CPU: 1 PID: 627 Comm: sh Not tainted 5.7.0-rc4-00022-g914a586-dirty #135 [ 60.139054] Hardware name: NXP i.MX8MQ EVK (DT) [ 60.143585] pstate: a0000085 (NzCv daIf -PAN -UAO) [ 60.148376] pc : __queue_work+0x46c/0x520 [ 60.152385] lr : __queue_work+0x314/0x520 [ 60.156393] sp : ffff8000124ebc40 [ 60.159705] x29: ffff8000124ebc40 x28: ffff800011808018 [ 60.165018] x27: ffff800011819ef8 x26: ffff800011d39980 [ 60.170331] x25: ffff800011808018 x24: 0000000000000100 [ 60.175643] x23: 0000000000000013 x22: 0000000000000001 [ 60.180955] x21: ffff0000b7c08e00 x20: ffff0000b6c31080 [ 60.186267] x19: ffff0000bb99bc00 x18: 0000000000000000 [ 60.191579] x17: 0000000000000000 x16: 0000000000000000 [ 60.196891] x15: 0000000000000000 x14: 0000000000000000 [ 60.202202] x13: 0000000000000000 x12: 0000000000000000 [ 60.207515] x11: 0000000000000000 x10: 0000000000000040 [ 60.212827] x9 : ffff800011d55460 x8 : ffff800011d55458 [ 60.218138] x7 : ffff0000b7800028 x6 : 0000000000000000 [ 60.223450] x5 : ffff0000b7800000 x4 : 0000000000000000 [ 60.228762] x3 : ffff0000bb997cc0 x2 : 0000000000000001 [ 60.234074] x1 : 0000000000000000 x0 : ffff0000b6c31088 [ 60.239386] Call trace: [ 60.241834] __queue_work+0x46c/0x520 [ 60.245496] queue_work_on+0x6c/0x90 [ 60.249075] dwc3_set_mode+0x48/0x58 [ 60.252651] dwc3_mode_write+0xf8/0x150 [ 60.256489] full_proxy_write+0x5c/0xa8 [ 60.260327] __vfs_write+0x18/0x40 [ 60.263729] vfs_write+0xdc/0x1c8 [ 60.267045] ksys_write+0x68/0xf0 [ 60.270360] __arm64_sys_write+0x18/0x20 [ 60.274286] el0_svc_common.constprop.0+0x68/0x160 [ 60.279077] do_el0_svc+0x20/0x80 [ 60.282394] el0_sync_handler+0x10c/0x178 [ 60.286403] el0_sync+0x140/0x180 [ 60.289716] ---[ end trace 70b155582e2b7988 ]---
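A minimal sketch of the guard, using hypothetical types rather than the real struct dwc3: bail out before touching the work item on single-role ports, since it is only ever INIT_WORK()ed for OTG:
```c
#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/types.h>

enum example_dr_mode { EXAMPLE_HOST, EXAMPLE_PERIPHERAL, EXAMPLE_OTG };

struct example_dwc {			/* hypothetical controller state */
	enum example_dr_mode dr_mode;
	struct work_struct drd_work;	/* initialized only when dr_mode == OTG */
	spinlock_t lock;
	u32 desired_role;
};

static void example_set_mode(struct example_dwc *dwc, u32 mode)
{
	unsigned long flags;

	/* Single-role port: drd_work was never initialized, so queueing it
	 * would splat in __queue_work().
	 */
	if (dwc->dr_mode != EXAMPLE_OTG)
		return;

	spin_lock_irqsave(&dwc->lock, flags);
	dwc->desired_role = mode;
	spin_unlock_irqrestore(&dwc->lock, flags);

	queue_work(system_freezable_wq, &dwc->drd_work);
}
```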
Signed-off-by: Li Jun jun.li@nxp.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/dwc3/core.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c index 2f9f4ad562d4e..6dd02a8802f4b 100644 --- a/drivers/usb/dwc3/core.c +++ b/drivers/usb/dwc3/core.c @@ -121,9 +121,6 @@ static void __dwc3_set_mode(struct work_struct *work) int ret; u32 reg;
- if (dwc->dr_mode != USB_DR_MODE_OTG) - return; - pm_runtime_get_sync(dwc->dev);
if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_OTG) @@ -209,6 +206,9 @@ void dwc3_set_mode(struct dwc3 *dwc, u32 mode) { unsigned long flags;
+ if (dwc->dr_mode != USB_DR_MODE_OTG) + return; + spin_lock_irqsave(&dwc->lock, flags); dwc->desired_dr_role = mode; spin_unlock_irqrestore(&dwc->lock, flags);
From: Bhaumik Bhatt bbhatt@codeaurora.org
[ Upstream commit 515847c557dd33167be86cb429fc0674a331bc88 ]
Add the missing check to abort suspends if a client driver has pending outgoing packets to send to the device. This allows better utilization of the MHI bus wherein clients on the host are not left waiting for longer suspend or resume cycles to finish for data transfers.
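A small, hypothetical sketch of the suspend gate being described; the real code additionally holds pm_lock and drives the MHI state transitions:
```c
#include <linux/atomic.h>
#include <linux/errno.h>

struct example_mhi_cntrl {		/* hypothetical controller fields */
	atomic_t dev_wake;		/* outstanding device-wake votes */
	atomic_t pending_pkts;		/* queued outbound packets */
};

/* Refuse to suspend while either wake votes or pending packets exist, so
 * clients are not stalled behind a full suspend/resume cycle.
 */
static int example_can_suspend(struct example_mhi_cntrl *cntrl)
{
	if (atomic_read(&cntrl->dev_wake) || atomic_read(&cntrl->pending_pkts))
		return -EBUSY;

	return 0;
}
```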
Reviewed-by: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org Signed-off-by: Bhaumik Bhatt bbhatt@codeaurora.org Signed-off-by: Manivannan Sadhasivam manivannan.sadhasivam@linaro.org Link: https://lore.kernel.org/r/20200929175218.8178-4-manivannan.sadhasivam@linaro... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/bus/mhi/core/pm.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c index 7960980780832..661d704c8093d 100644 --- a/drivers/bus/mhi/core/pm.c +++ b/drivers/bus/mhi/core/pm.c @@ -686,7 +686,8 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl) return -EIO;
/* Return busy if there are any pending resources */ - if (atomic_read(&mhi_cntrl->dev_wake)) + if (atomic_read(&mhi_cntrl->dev_wake) || + atomic_read(&mhi_cntrl->pending_pkts)) return -EBUSY;
/* Take MHI out of M2 state */ @@ -712,7 +713,8 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
write_lock_irq(&mhi_cntrl->pm_lock);
- if (atomic_read(&mhi_cntrl->dev_wake)) { + if (atomic_read(&mhi_cntrl->dev_wake) || + atomic_read(&mhi_cntrl->pending_pkts)) { write_unlock_irq(&mhi_cntrl->pm_lock); return -EBUSY; }
From: Diana Craciun diana.craciun@oss.nxp.com
[ Upstream commit 5026cf605143e764e1785bbf9158559d17f8d260 ]
Before destroying the mc_io, check first that it was allocated.
Reviewed-by: Laurentiu Tudor laurentiu.tudor@nxp.com Acked-by: Laurentiu Tudor laurentiu.tudor@nxp.com Signed-off-by: Diana Craciun diana.craciun@oss.nxp.com Link: https://lore.kernel.org/r/20200929085441.17448-11-diana.craciun@oss.nxp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/bus/fsl-mc/mc-io.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/bus/fsl-mc/mc-io.c b/drivers/bus/fsl-mc/mc-io.c index a30b53f1d87d8..305015486b91c 100644 --- a/drivers/bus/fsl-mc/mc-io.c +++ b/drivers/bus/fsl-mc/mc-io.c @@ -129,7 +129,12 @@ error_destroy_mc_io: */ void fsl_destroy_mc_io(struct fsl_mc_io *mc_io) { - struct fsl_mc_device *dpmcp_dev = mc_io->dpmcp_dev; + struct fsl_mc_device *dpmcp_dev; + + if (!mc_io) + return; + + dpmcp_dev = mc_io->dpmcp_dev;
if (dpmcp_dev) fsl_mc_io_unset_dpmcp(mc_io);
From: Jonathan Cameron Jonathan.Cameron@huawei.com
[ Upstream commit 2c5b9bde95c96942f2873cea6ef383c02800e4a8 ]
In ACPI 6.3, the Memory Proximity Domain Attributes Structure changed substantially. One of those changes was that the flag for "Memory Proximity Domain field is valid" was deprecated.
This was because the field "Proximity Domain for the Memory" became a required field and hence having a validity flag makes no sense.
So the correct logic is to always assume the field is there. Current code assumes it never is.
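A hedged restatement of the intended logic as a helper (the flag macro is redefined locally here purely for illustration):
```c
#include <linux/types.h>

#define EXAMPLE_HMAT_MEMORY_PD_VALID	(1 << 0)	/* local stand-in for the ACPI flag */

/* Revision 1 honours the validity flag; ACPI 6.3+ made the proximity
 * domain field mandatory, so it is always treated as valid there.
 */
static bool example_memory_pd_usable(int hmat_revision, u16 flags)
{
	if (hmat_revision == 1)
		return flags & EXAMPLE_HMAT_MEMORY_PD_VALID;

	return true;
}
```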
Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/acpi/numa/hmat.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c index 2c32cfb723701..6a91a55229aee 100644 --- a/drivers/acpi/numa/hmat.c +++ b/drivers/acpi/numa/hmat.c @@ -424,7 +424,8 @@ static int __init hmat_parse_proximity_domain(union acpi_subtable_headers *heade pr_info("HMAT: Memory Flags:%04x Processor Domain:%u Memory Domain:%u\n", p->flags, p->processor_PD, p->memory_PD);
- if (p->flags & ACPI_HMAT_MEMORY_PD_VALID && hmat_revision == 1) { + if ((hmat_revision == 1 && p->flags & ACPI_HMAT_MEMORY_PD_VALID) || + hmat_revision > 1) { target = find_mem_target(p->memory_PD); if (!target) { pr_debug("HMAT: Memory Domain missing from SRAT\n");
From: Xiongfeng Wang wangxiongfeng2@huawei.com
[ Upstream commit c07fa6c1631333f02750cf59f22b615d768b4d8f ]
When I cat some module parameters via sysfs, they display as follows. It's better to add a newline for easier reading.
root@syzkaller:~# cd /sys/module/test_power/parameters/ root@syzkaller:/sys/module/test_power/parameters# cat ac_online onroot@syzkaller:/sys/module/test_power/parameters# cat battery_present trueroot@syzkaller:/sys/module/test_power/parameters# cat battery_health goodroot@syzkaller:/sys/module/test_power/parameters# cat battery_status dischargingroot@syzkaller:/sys/module/test_power/parameters# cat battery_technology LIONroot@syzkaller:/sys/module/test_power/parameters# cat usb_online onroot@syzkaller:/sys/module/test_power/parameters#
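A minimal sketch of the fix: each custom param_get helper appends a trailing newline so cat output no longer runs into the shell prompt (the value lookup is stubbed out here):
```c
#include <linux/moduleparam.h>
#include <linux/string.h>

/* Hypothetical getter; the real driver copies the result of map_get_key(). */
static int example_param_get(char *buffer, const struct kernel_param *kp)
{
	strcpy(buffer, "on");		/* stand-in for the looked-up value */
	strcat(buffer, "\n");		/* the actual fix */
	return strlen(buffer);
}
```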
Signed-off-by: Xiongfeng Wang wangxiongfeng2@huawei.com Signed-off-by: Sebastian Reichel sebastian.reichel@collabora.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/power/supply/test_power.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/drivers/power/supply/test_power.c b/drivers/power/supply/test_power.c index 04acd76bbaa12..4895ee5e63a9a 100644 --- a/drivers/power/supply/test_power.c +++ b/drivers/power/supply/test_power.c @@ -353,6 +353,7 @@ static int param_set_ac_online(const char *key, const struct kernel_param *kp) static int param_get_ac_online(char *buffer, const struct kernel_param *kp) { strcpy(buffer, map_get_key(map_ac_online, ac_online, "unknown")); + strcat(buffer, "\n"); return strlen(buffer); }
@@ -366,6 +367,7 @@ static int param_set_usb_online(const char *key, const struct kernel_param *kp) static int param_get_usb_online(char *buffer, const struct kernel_param *kp) { strcpy(buffer, map_get_key(map_ac_online, usb_online, "unknown")); + strcat(buffer, "\n"); return strlen(buffer); }
@@ -380,6 +382,7 @@ static int param_set_battery_status(const char *key, static int param_get_battery_status(char *buffer, const struct kernel_param *kp) { strcpy(buffer, map_get_key(map_status, battery_status, "unknown")); + strcat(buffer, "\n"); return strlen(buffer); }
@@ -394,6 +397,7 @@ static int param_set_battery_health(const char *key, static int param_get_battery_health(char *buffer, const struct kernel_param *kp) { strcpy(buffer, map_get_key(map_health, battery_health, "unknown")); + strcat(buffer, "\n"); return strlen(buffer); }
@@ -409,6 +413,7 @@ static int param_get_battery_present(char *buffer, const struct kernel_param *kp) { strcpy(buffer, map_get_key(map_present, battery_present, "unknown")); + strcat(buffer, "\n"); return strlen(buffer); }
@@ -426,6 +431,7 @@ static int param_get_battery_technology(char *buffer, { strcpy(buffer, map_get_key(map_technology, battery_technology, "unknown")); + strcat(buffer, "\n"); return strlen(buffer); }
From: Fangzhi Zuo Jerry.Zuo@amd.com
[ Upstream commit 95d620adb48f7728e67d82f56f756e8d451cf8d2 ]
[Why] Currently mode validation is bypassed if a remote sink exists. That leads to mode set issues when a BW bottleneck exists in the link path, e.g., a DP-to-HDMI converter that only supports HDMI 1.4.
Any invalid mode passed to Linux user space will cause the modeset failure due to limitation of Linux user space implementation.
[How] Mode validation is skipped only in the EDID override case. For a real remote sink, the clock limit check should still be done for HDMI remote sinks.
Have HDMI-related remote sinks go through mode validation to eliminate modes whose pixel clock exceeds the BW limitation.
Signed-off-by: Fangzhi Zuo Jerry.Zuo@amd.com Reviewed-by: Hersen Wu hersenxs.wu@amd.com Acked-by: Eryk Brol eryk.brol@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/core/dc_link.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c index 437d1a7a16fe7..b0f8bfd48d102 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c @@ -2441,7 +2441,7 @@ enum dc_status dc_link_validate_mode_timing( /* A hack to avoid failing any modes for EDID override feature on * topology change such as lower quality cable for DP or different dongle */ - if (link->remote_sinks[0]) + if (link->remote_sinks[0] && link->remote_sinks[0]->sink_signal == SIGNAL_TYPE_VIRTUAL) return DC_OK;
/* Passive Dongle */
From: Rodrigo Siqueira Rodrigo.Siqueira@amd.com
[ Upstream commit 2f8be0e516803cc3fd87c1671247896571a5a8fb ]
[Why] Sometimes CRTCs can be disabled due to display unplugging or a temporary transition in userspace; in these circumstances, DCE tries to set the minimum clock threshold. When this happens, the function bw_calcs is invoked with number_of_displays set to zero, making DCE set dispclk_khz and sclk_khz to zero. For these reasons, we have seen ATOM bios errors that look like:
[drm:atom_op_jump [amdgpu]] *ERROR* atombios stuck in loop for more than 5secs aborting [drm:amdgpu_atom_execute_table_locked [amdgpu]] *ERROR* atombios stuck executing EA8A (len 761, WS 0, PS 0) @ 0xEABA
[How] This error happens due to an attempt to optimize the bandwidth with sclk and dispclk set to zero. Technically we handle this in dce112_set_clock(), but we were not considering the case where the requested value is zero. This commit fixes the issue by ensuring that we never set a clock value below the minimum threshold.
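A hedged sketch of that clamp with hypothetical parameter names; the point is that it now applies to a zero request as well:
```c
#include <linux/kernel.h>

/* A zero request (no active displays) must still be raised to the minimum
 * supported clock, roughly dentist VCO frequency / 62, before being sent
 * to the firmware.
 */
static int example_clamp_clk_khz(int requested_clk_khz, int dentist_vco_freq_khz)
{
	return max(requested_clk_khz, dentist_vco_freq_khz / 62);
}
```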
Signed-off-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Reviewed-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Acked-by: Eryk Brol eryk.brol@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c index d031bd3d30724..807dca8f7d7aa 100644 --- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c +++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c @@ -79,8 +79,7 @@ int dce112_set_clock(struct clk_mgr *clk_mgr_base, int requested_clk_khz) memset(&dce_clk_params, 0, sizeof(dce_clk_params));
/* Make sure requested clock isn't lower than minimum threshold*/ - if (requested_clk_khz > 0) - requested_clk_khz = max(requested_clk_khz, + requested_clk_khz = max(requested_clk_khz, clk_mgr_dce->base.dentist_vco_freq_khz / 62);
dce_clk_params.target_clock_frequency = requested_clk_khz;
From: Zhen Lei thunder.leizhen@huawei.com
[ Upstream commit 05b1be68c4d6d76970025e6139bfd735c2256ee5 ]
xxx/arc/boot/dts/axs101.dt.yaml: dw-apb-ictl@e0012000: $nodename:0: \ 'dw-apb-ictl@e0012000' does not match '^interrupt-controller(@[0-9a-f,]+)*$' From schema: xxx/interrupt-controller/snps,dw-apb-ictl.yaml
The node name of the interrupt controller must start with "interrupt-controller" instead of "dw-apb-ictl".
Signed-off-by: Zhen Lei thunder.leizhen@huawei.com Signed-off-by: Vineet Gupta vgupta@synopsys.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arc/boot/dts/axc001.dtsi | 2 +- arch/arc/boot/dts/axc003.dtsi | 2 +- arch/arc/boot/dts/axc003_idu.dtsi | 2 +- arch/arc/boot/dts/vdk_axc003.dtsi | 2 +- arch/arc/boot/dts/vdk_axc003_idu.dtsi | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/arc/boot/dts/axc001.dtsi b/arch/arc/boot/dts/axc001.dtsi index 79ec27c043c1d..2a151607b0805 100644 --- a/arch/arc/boot/dts/axc001.dtsi +++ b/arch/arc/boot/dts/axc001.dtsi @@ -91,7 +91,7 @@ * avoid duplicating the MB dtsi file given that IRQ from * this intc to cpu intc are different for axs101 and axs103 */ - mb_intc: dw-apb-ictl@e0012000 { + mb_intc: interrupt-controller@e0012000 { #interrupt-cells = <1>; compatible = "snps,dw-apb-ictl"; reg = < 0x0 0xe0012000 0x0 0x200 >; diff --git a/arch/arc/boot/dts/axc003.dtsi b/arch/arc/boot/dts/axc003.dtsi index ac8e1b463a709..cd1edcf4f95ef 100644 --- a/arch/arc/boot/dts/axc003.dtsi +++ b/arch/arc/boot/dts/axc003.dtsi @@ -129,7 +129,7 @@ * avoid duplicating the MB dtsi file given that IRQ from * this intc to cpu intc are different for axs101 and axs103 */ - mb_intc: dw-apb-ictl@e0012000 { + mb_intc: interrupt-controller@e0012000 { #interrupt-cells = <1>; compatible = "snps,dw-apb-ictl"; reg = < 0x0 0xe0012000 0x0 0x200 >; diff --git a/arch/arc/boot/dts/axc003_idu.dtsi b/arch/arc/boot/dts/axc003_idu.dtsi index 9da21e7fd246f..70779386ca796 100644 --- a/arch/arc/boot/dts/axc003_idu.dtsi +++ b/arch/arc/boot/dts/axc003_idu.dtsi @@ -135,7 +135,7 @@ * avoid duplicating the MB dtsi file given that IRQ from * this intc to cpu intc are different for axs101 and axs103 */ - mb_intc: dw-apb-ictl@e0012000 { + mb_intc: interrupt-controller@e0012000 { #interrupt-cells = <1>; compatible = "snps,dw-apb-ictl"; reg = < 0x0 0xe0012000 0x0 0x200 >; diff --git a/arch/arc/boot/dts/vdk_axc003.dtsi b/arch/arc/boot/dts/vdk_axc003.dtsi index f8be7ba8dad49..c21d0eb07bf67 100644 --- a/arch/arc/boot/dts/vdk_axc003.dtsi +++ b/arch/arc/boot/dts/vdk_axc003.dtsi @@ -46,7 +46,7 @@
};
- mb_intc: dw-apb-ictl@e0012000 { + mb_intc: interrupt-controller@e0012000 { #interrupt-cells = <1>; compatible = "snps,dw-apb-ictl"; reg = < 0xe0012000 0x200 >; diff --git a/arch/arc/boot/dts/vdk_axc003_idu.dtsi b/arch/arc/boot/dts/vdk_axc003_idu.dtsi index 0afa3e53a4e39..4d348853ac7c5 100644 --- a/arch/arc/boot/dts/vdk_axc003_idu.dtsi +++ b/arch/arc/boot/dts/vdk_axc003_idu.dtsi @@ -54,7 +54,7 @@
};
- mb_intc: dw-apb-ictl@e0012000 { + mb_intc: interrupt-controller@e0012000 { #interrupt-cells = <1>; compatible = "snps,dw-apb-ictl"; reg = < 0xe0012000 0x200 >;
From: Gabriel Krisman Bertazi krisman@collabora.com
[ Upstream commit a926c7afffcc0f2e35e6acbccb16921bacf34617 ]
According to Documentation/block/stat.rst, inflight should not include I/O requests that are in the queue but not yet dispatched to the device, but blk-mq identifies as inflight any request that has a tag allocated, which, for queues without elevator, happens at request allocation time and before it is queued in the ctx (default case in blk_mq_submit_bio).
In addition, current behavior is different for queues with elevator from queues without it, since for the former the driver tag is allocated at dispatch time. A more precise approach would be to only consider requests with state MQ_RQ_IN_FLIGHT.
This effectively reverts commit 6131837b1de6 ("blk-mq: count allocated but not started requests in iostats inflight") to consolidate blk-mq behavior with itself (elevator case) and with original documentation, but it differs from the behavior used by the legacy path.
This version differs from v1 by using blk_mq_rq_state to access the state attribute. Avoid using blk_mq_request_started, which was suggested, since we don't want to include MQ_RQ_COMPLETE.
Signed-off-by: Gabriel Krisman Bertazi krisman@collabora.com Cc: Omar Sandoval osandov@fb.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- block/blk-mq.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c index 94a53d779c12b..ca2fdb58e7af5 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -105,7 +105,7 @@ static bool blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx, { struct mq_inflight *mi = priv;
- if (rq->part == mi->part) + if (rq->part == mi->part && blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT) mi->inflight[rq_data_dir(rq)]++;
return true;
From: Anand Jain anand.jain@oracle.com
[ Upstream commit c6a5d954950c5031444173ad2195efc163afcac9 ]
If you replace a seed device in a sprouted fs, it appears to have successfully replaced the seed device, but if you look closely, it didn't. Here is an example.
$ mkfs.btrfs /dev/sda $ btrfstune -S1 /dev/sda $ mount /dev/sda /btrfs $ btrfs device add /dev/sdb /btrfs $ umount /btrfs $ btrfs device scan --forget $ mount -o device=/dev/sda /dev/sdb /btrfs $ btrfs replace start -f /dev/sda /dev/sdc /btrfs $ echo $? 0
BTRFS info (device sdb): dev_replace from /dev/sda (devid 1) to /dev/sdc started BTRFS info (device sdb): dev_replace from /dev/sda (devid 1) to /dev/sdc finished
$ btrfs fi show Label: none uuid: ab2c88b7-be81-4a7e-9849-c3666e7f9f4f Total devices 2 FS bytes used 256.00KiB devid 1 size 3.00GiB used 520.00MiB path /dev/sdc devid 2 size 3.00GiB used 896.00MiB path /dev/sdb
Label: none uuid: 10bd3202-0415-43af-96a8-d5409f310a7e Total devices 1 FS bytes used 128.00KiB devid 1 size 3.00GiB used 536.00MiB path /dev/sda
So as per the replace start command and the kernel log, the replace was successful. Now let's try a clean mount.
$ umount /btrfs $ btrfs device scan --forget
$ mount -o device=/dev/sdc /dev/sdb /btrfs mount: /btrfs: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error.
[ 636.157517] BTRFS error (device sdc): failed to read chunk tree: -2 [ 636.180177] BTRFS error (device sdc): open_ctree failed
That's because, per the dev items, it is still looking for the original seed device.
$ btrfs inspect-internal dump-tree -d /dev/sdb
item 0 key (DEV_ITEMS DEV_ITEM 1) itemoff 16185 itemsize 98 devid 1 total_bytes 3221225472 bytes_used 545259520 io_align 4096 io_width 4096 sector_size 4096 type 0 generation 6 start_offset 0 dev_group 0 seek_speed 0 bandwidth 0 uuid 59368f50-9af2-4b17-91da-8a783cc418d4 <--- seed uuid fsid 10bd3202-0415-43af-96a8-d5409f310a7e <--- seed fsid item 1 key (DEV_ITEMS DEV_ITEM 2) itemoff 16087 itemsize 98 devid 2 total_bytes 3221225472 bytes_used 939524096 io_align 4096 io_width 4096 sector_size 4096 type 0 generation 0 start_offset 0 dev_group 0 seek_speed 0 bandwidth 0 uuid 56a0a6bc-4630-4998-8daf-3c3030c4256a <- sprout uuid fsid ab2c88b7-be81-4a7e-9849-c3666e7f9f4f <- sprout fsid
But the replaced target has the following uuid+fsid in its superblock which doesn't match with the expected uuid+fsid in its devitem.
$ btrfs in dump-super /dev/sdc | egrep '^generation|dev_item.uuid|dev_item.fsid|devid' generation 20 dev_item.uuid 59368f50-9af2-4b17-91da-8a783cc418d4 dev_item.fsid ab2c88b7-be81-4a7e-9849-c3666e7f9f4f [match] dev_item.devid 1
So if you provide the original seed device, the mount will be successful, which is what the test case btrfs/163 has been doing all along.
$ btrfs device scan --forget $ mount -o device=/dev/sda /dev/sdb /btrfs
Fix in this patch: if a seed is not sprouted, there is no replacing it, because it is a read-only filesystem on a read-only device. Similarly, in the case of a sprouted filesystem, the seed device is still read-only. So make it explicit that you cannot replace a seed device; you can only add a new device and then delete the seed device. If a replace is attempted, return -EINVAL.
Signed-off-by: Anand Jain anand.jain@oracle.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/btrfs/dev-replace.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c index e4a1c6afe35dc..0cb36746060da 100644 --- a/fs/btrfs/dev-replace.c +++ b/fs/btrfs/dev-replace.c @@ -230,7 +230,7 @@ static int btrfs_init_dev_replace_tgtdev(struct btrfs_fs_info *fs_info, int ret = 0;
*device_out = NULL; - if (fs_info->fs_devices->seeding) { + if (srcdev->fs_devices->seeding) { btrfs_err(fs_info, "the filesystem is a seed filesystem!"); return -EINVAL; }
From: Zhao Heming heming.zhao@suse.com
[ Upstream commit d837f7277f56e70d82b3a4a037d744854e62f387 ]
md_bitmap_get_counter() has code:
``` if (bitmap->bp[page].hijacked || bitmap->bp[page].map == NULL) csize = ((sector_t)1) << (bitmap->chunkshift + PAGE_COUNTER_SHIFT - 1); ```
The minus 1 is wrong: this branch should report 2048 bits of space, but with the "-1" it only reports 1024 bits.
The buggy code returns the wrong number of blocks, but it doesn't influence the bitmap logic: 1. Most callers care about this function's return value (the counter at the offset), not the blocks output parameter. 2. The bug is only triggered when hijacked is true or map is NULL; the hijacked condition is very rare, and "map == NULL" is only true while the array is being created or resized. 3. Even if a caller gets the wrong blocks value, the current code just makes it call md_bitmap_get_counter() one more time.
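A tiny userspace demonstration of the halving effect, with hypothetical shift values (the real ones come from chunkshift and PAGE_COUNTER_SHIFT):
```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical values, only to show the effect of the stray "-1". */
	unsigned int chunkshift = 9, page_counter_shift = 2;

	uint64_t buggy = (uint64_t)1 << (chunkshift + page_counter_shift - 1);
	uint64_t fixed = (uint64_t)1 << (chunkshift + page_counter_shift);

	/* The "-1" halves the reported chunk size. */
	printf("buggy csize: %llu, fixed csize: %llu\n",
	       (unsigned long long)buggy, (unsigned long long)fixed);
	return 0;
}
```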
Signed-off-by: Zhao Heming heming.zhao@suse.com Signed-off-by: Song Liu songliubraving@fb.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/md/md-bitmap.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c index c61ab86a28b52..d910833feeb4d 100644 --- a/drivers/md/md-bitmap.c +++ b/drivers/md/md-bitmap.c @@ -1367,7 +1367,7 @@ __acquires(bitmap->lock) if (bitmap->bp[page].hijacked || bitmap->bp[page].map == NULL) csize = ((sector_t)1) << (bitmap->chunkshift + - PAGE_COUNTER_SHIFT - 1); + PAGE_COUNTER_SHIFT); else csize = ((sector_t)1) << bitmap->chunkshift; *blocks = csize - (offset & (csize - 1));
From: Chao Yu yuchao0@huawei.com
[ Upstream commit d662fad143c0470ad7f46ea7b02da539f613d7d7 ]
If a compressed inode has inconsistent i_compress_algorithm, i_compr_blocks or i_log_cluster_size fields, we missed setting SBI_NEED_FSCK to tell fsck to repair the inode; fix it.
Signed-off-by: Chao Yu yuchao0@huawei.com Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/f2fs/inode.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index 5195e083fc1e6..12c7fa1631935 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -299,6 +299,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page) F2FS_FITS_IN_INODE(ri, fi->i_extra_isize, i_log_cluster_size)) { if (ri->i_compress_algorithm >= COMPRESS_MAX) { + set_sbi_flag(sbi, SBI_NEED_FSCK); f2fs_warn(sbi, "%s: inode (ino=%lx) has unsupported " "compress algorithm: %u, run fsck to fix", __func__, inode->i_ino, @@ -307,6 +308,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page) } if (le64_to_cpu(ri->i_compr_blocks) > SECTOR_TO_BLOCK(inode->i_blocks)) { + set_sbi_flag(sbi, SBI_NEED_FSCK); f2fs_warn(sbi, "%s: inode (ino=%lx) has inconsistent " "i_compr_blocks:%llu, i_blocks:%llu, run fsck to fix", __func__, inode->i_ino, @@ -316,6 +318,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page) } if (ri->i_log_cluster_size < MIN_COMPRESS_LOG_SIZE || ri->i_log_cluster_size > MAX_COMPRESS_LOG_SIZE) { + set_sbi_flag(sbi, SBI_NEED_FSCK); f2fs_warn(sbi, "%s: inode (ino=%lx) has unsupported " "log cluster size: %u, run fsck to fix", __func__, inode->i_ino,
From: Michael Chan michael.chan@broadcom.com
[ Upstream commit 8eddb3e7ce124dd6375d3664f1aae13873318b0f ]
If the VF virtual link is set to always enabled, the speed may be unknown when the physical link is down. The driver currently logs the link speed as 4294967295 Mbps, which is SPEED_UNKNOWN. Modify the link up log message to say "speed unknown", which makes more sense.
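The odd number comes from SPEED_UNKNOWN (-1 in the ethtool UAPI) being stored in an unsigned field and printed with %u. A small userspace reproduction of the log line and of the fixed behaviour:
```c
#include <stdio.h>

#define SPEED_UNKNOWN -1	/* same value as the ethtool UAPI definition */

int main(void)
{
	unsigned int speed = SPEED_UNKNOWN;	/* -1 stored in a u32-like field */

	/* Before the fix: -1 printed as unsigned. */
	printf("NIC Link is Up, %u Mbps\n", speed);	/* 4294967295 Mbps */

	/* After the fix: report the unknown case explicitly. */
	if (speed == (unsigned int)SPEED_UNKNOWN)
		printf("NIC Link is Up, speed unknown\n");

	return 0;
}
```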
Reviewed-by: Vasundhara Volam vasundhara-v.volam@broadcom.com Reviewed-by: Edwin Peer edwin.peer@broadcom.com Signed-off-by: Michael Chan michael.chan@broadcom.com Link: https://lore.kernel.org/r/1602493854-29283-7-git-send-email-michael.chan@bro... Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 7b5d521924872..b8d534b719d4f 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -8735,6 +8735,11 @@ static void bnxt_report_link(struct bnxt *bp) u16 fec;
netif_carrier_on(bp->dev); + speed = bnxt_fw_to_ethtool_speed(bp->link_info.link_speed); + if (speed == SPEED_UNKNOWN) { + netdev_info(bp->dev, "NIC Link is Up, speed unknown\n"); + return; + } if (bp->link_info.duplex == BNXT_LINK_DUPLEX_FULL) duplex = "full"; else @@ -8747,7 +8752,6 @@ static void bnxt_report_link(struct bnxt *bp) flow_ctrl = "ON - receive"; else flow_ctrl = "none"; - speed = bnxt_fw_to_ethtool_speed(bp->link_info.link_speed); netdev_info(bp->dev, "NIC Link is Up, %u Mbps %s duplex, Flow control: %s\n", speed, duplex, flow_ctrl); if (bp->flags & BNXT_FLAG_EEE_CAP)
From: Chris Lew clew@codeaurora.org
[ Upstream commit 4fcdaf6e28d11e2f3820d54dd23cd12a47ddd44e ]
The open_req and open_ack completion variables are the state variables that represent a remote channel as open. Use complete_all() so there are no races between waiters and uses of completion_done().
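A hedged sketch of why complete_all() fits an open-state flag better than complete(): complete() adds a single token that a waiter can consume, after which completion_done() reads false again, while complete_all() latches the completion permanently:
```c
#include <linux/completion.h>
#include <linux/types.h>

struct example_channel {		/* hypothetical channel state */
	struct completion open_ack;
};

static void example_mark_open(struct example_channel *ch)
{
	/* Latch "open" so every current and future waiter sees it. */
	complete_all(&ch->open_ack);
}

static bool example_is_open(struct example_channel *ch)
{
	/* Safe to poll: complete_all() keeps this true from now on. */
	return completion_done(&ch->open_ack);
}
```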
Signed-off-by: Chris Lew clew@codeaurora.org Signed-off-by: Arun Kumar Neelakantam aneela@codeaurora.org Signed-off-by: Deepak Kumar Singh deesin@codeaurora.org Link: https://lore.kernel.org/r/1593017121-7953-2-git-send-email-deesin@codeaurora... Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/rpmsg/qcom_glink_native.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c index f40312b16da06..b5570c83a28c6 100644 --- a/drivers/rpmsg/qcom_glink_native.c +++ b/drivers/rpmsg/qcom_glink_native.c @@ -970,7 +970,7 @@ static int qcom_glink_rx_open_ack(struct qcom_glink *glink, unsigned int lcid) return -EINVAL; }
- complete(&channel->open_ack); + complete_all(&channel->open_ack);
return 0; } @@ -1178,7 +1178,7 @@ static int qcom_glink_announce_create(struct rpmsg_device *rpdev) __be32 *val = defaults; int size;
- if (glink->intentless) + if (glink->intentless || !completion_done(&channel->open_ack)) return 0;
prop = of_find_property(np, "qcom,intents", NULL); @@ -1413,7 +1413,7 @@ static int qcom_glink_rx_open(struct qcom_glink *glink, unsigned int rcid, channel->rcid = ret; spin_unlock_irqrestore(&glink->idr_lock, flags);
- complete(&channel->open_req); + complete_all(&channel->open_req);
if (create_device) { rpdev = kzalloc(sizeof(*rpdev), GFP_KERNEL);
From: Tuan Phan tuanphan@os.amperecomputing.com
[ Upstream commit 877c1a5f79c6984bbe3f2924234c08e2f4f1acd5 ]
Ampere Altra SOC supports only 32-bit ECAM reads. Add an MCFG quirk for the platform.
Link: https://lore.kernel.org/r/1596751055-12316-1-git-send-email-tuanphan@os.ampe... Signed-off-by: Tuan Phan tuanphan@os.amperecomputing.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/acpi/pci_mcfg.c | 20 ++++++++++++++++++++ drivers/pci/ecam.c | 10 ++++++++++ include/linux/pci-ecam.h | 1 + 3 files changed, 31 insertions(+)
diff --git a/drivers/acpi/pci_mcfg.c b/drivers/acpi/pci_mcfg.c index 54b36b7ad47d9..e526571e0ebdb 100644 --- a/drivers/acpi/pci_mcfg.c +++ b/drivers/acpi/pci_mcfg.c @@ -142,6 +142,26 @@ static struct mcfg_fixup mcfg_quirks[] = { XGENE_V2_ECAM_MCFG(4, 0), XGENE_V2_ECAM_MCFG(4, 1), XGENE_V2_ECAM_MCFG(4, 2), + +#define ALTRA_ECAM_QUIRK(rev, seg) \ + { "Ampere", "Altra ", rev, seg, MCFG_BUS_ANY, &pci_32b_read_ops } + + ALTRA_ECAM_QUIRK(1, 0), + ALTRA_ECAM_QUIRK(1, 1), + ALTRA_ECAM_QUIRK(1, 2), + ALTRA_ECAM_QUIRK(1, 3), + ALTRA_ECAM_QUIRK(1, 4), + ALTRA_ECAM_QUIRK(1, 5), + ALTRA_ECAM_QUIRK(1, 6), + ALTRA_ECAM_QUIRK(1, 7), + ALTRA_ECAM_QUIRK(1, 8), + ALTRA_ECAM_QUIRK(1, 9), + ALTRA_ECAM_QUIRK(1, 10), + ALTRA_ECAM_QUIRK(1, 11), + ALTRA_ECAM_QUIRK(1, 12), + ALTRA_ECAM_QUIRK(1, 13), + ALTRA_ECAM_QUIRK(1, 14), + ALTRA_ECAM_QUIRK(1, 15), };
static char mcfg_oem_id[ACPI_OEM_ID_SIZE]; diff --git a/drivers/pci/ecam.c b/drivers/pci/ecam.c index 8f065a42fc1a2..b54d32a316693 100644 --- a/drivers/pci/ecam.c +++ b/drivers/pci/ecam.c @@ -168,4 +168,14 @@ const struct pci_ecam_ops pci_32b_ops = { .write = pci_generic_config_write32, } }; + +/* ECAM ops for 32-bit read only (non-compliant) */ +const struct pci_ecam_ops pci_32b_read_ops = { + .bus_shift = 20, + .pci_ops = { + .map_bus = pci_ecam_map_bus, + .read = pci_generic_config_read32, + .write = pci_generic_config_write, + } +}; #endif diff --git a/include/linux/pci-ecam.h b/include/linux/pci-ecam.h index 1af5cb02ef7f9..033ce74f02e81 100644 --- a/include/linux/pci-ecam.h +++ b/include/linux/pci-ecam.h @@ -51,6 +51,7 @@ extern const struct pci_ecam_ops pci_generic_ecam_ops;
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) extern const struct pci_ecam_ops pci_32b_ops; /* 32-bit accesses only */ +extern const struct pci_ecam_ops pci_32b_read_ops; /* 32-bit read only */ extern const struct pci_ecam_ops hisi_pcie_ops; /* HiSilicon */ extern const struct pci_ecam_ops thunder_pem_ecam_ops; /* Cavium ThunderX 1.x & 2.x */ extern const struct pci_ecam_ops pci_thunder_ecam_ops; /* Cavium ThunderX 1.x */
From: Tero Kristo t-kristo@ti.com
[ Upstream commit b7a7943fe291b983b104bcbd2f16e8e896f56590 ]
Fix a memory leak induced by not calling clk_put after doing of_clk_get.
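A minimal sketch of the reference rule being enforced: every successful of_clk_get() must be balanced with clk_put() on both the skip path and the normal path (names here are illustrative, not the TI driver's):
```c
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/of.h>

static void example_scan_clocks(struct device_node *node)
{
	struct clk *clk;
	int i = 0;

	/* of_clk_get() takes a reference on each clock it returns. */
	while (!IS_ERR_OR_NULL(clk = of_clk_get(node, i++))) {
		/* ... inspect or register the clock ... */
		clk_put(clk);	/* drop the reference in every branch */
	}
}
```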
Reported-by: Dan Murphy dmurphy@ti.com Signed-off-by: Tero Kristo t-kristo@ti.com Link: https://lore.kernel.org/r/20200907082600.454-3-t-kristo@ti.com Signed-off-by: Stephen Boyd sboyd@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/clk/ti/clockdomain.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/clk/ti/clockdomain.c b/drivers/clk/ti/clockdomain.c index ee56306f79d5f..700b7f44f6716 100644 --- a/drivers/clk/ti/clockdomain.c +++ b/drivers/clk/ti/clockdomain.c @@ -148,10 +148,12 @@ static void __init of_ti_clockdomain_setup(struct device_node *node) if (!omap2_clk_is_hw_omap(clk_hw)) { pr_warn("can't setup clkdm for basic clk %s\n", __clk_get_name(clk)); + clk_put(clk); continue; } to_clk_hw_omap(clk_hw)->clkdm_name = clkdm_name; omap2_init_clk_clkdm(clk_hw); + clk_put(clk); } }
From: Hou Tao houtao1@huawei.com
[ Upstream commit 3caf91757ced158e6c4a44d8b105bd7b3e1767d8 ]
Now when a read delegation is given, two delegation related traces will be printed:
nfsd_deleg_open: client 5f45b854:e6058001 stateid 00000030:00000001 nfsd_deleg_none: client 5f45b854:e6058001 stateid 0000002f:00000001
Although the intention is to let developers know that two stateids are returned, the traces are confusing about whether or not a read delegation is handed out. So rename trace_nfsd_deleg_none() to trace_nfsd_open() and trace_nfsd_deleg_open() to trace_nfsd_deleg_read() to make the intention clearer.
The patched traces will be:
nfsd_deleg_read: client 5f48a967:b55b21cd stateid 00000003:00000001 nfsd_open: client 5f48a967:b55b21cd stateid 00000002:00000001
Suggested-by: Chuck Lever chuck.lever@oracle.com Signed-off-by: Hou Tao houtao1@huawei.com Signed-off-by: J. Bruce Fields bfields@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfsd/nfs4state.c | 4 ++-- fs/nfsd/trace.h | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c index c09a2a4281ec9..0525acfe31314 100644 --- a/fs/nfsd/nfs4state.c +++ b/fs/nfsd/nfs4state.c @@ -5126,7 +5126,7 @@ nfs4_open_delegation(struct svc_fh *fh, struct nfsd4_open *open,
memcpy(&open->op_delegate_stateid, &dp->dl_stid.sc_stateid, sizeof(dp->dl_stid.sc_stateid));
- trace_nfsd_deleg_open(&dp->dl_stid.sc_stateid); + trace_nfsd_deleg_read(&dp->dl_stid.sc_stateid); open->op_delegate_type = NFS4_OPEN_DELEGATE_READ; nfs4_put_stid(&dp->dl_stid); return; @@ -5243,7 +5243,7 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf nfs4_open_delegation(current_fh, open, stp); nodeleg: status = nfs_ok; - trace_nfsd_deleg_none(&stp->st_stid.sc_stateid); + trace_nfsd_open(&stp->st_stid.sc_stateid); out: /* 4.1 client trying to upgrade/downgrade delegation? */ if (open->op_delegate_type == NFS4_OPEN_DELEGATE_NONE && dp && diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h index 1861db1bdc670..99bf07800cd09 100644 --- a/fs/nfsd/trace.h +++ b/fs/nfsd/trace.h @@ -289,8 +289,8 @@ DEFINE_STATEID_EVENT(layout_recall_done); DEFINE_STATEID_EVENT(layout_recall_fail); DEFINE_STATEID_EVENT(layout_recall_release);
-DEFINE_STATEID_EVENT(deleg_open); -DEFINE_STATEID_EVENT(deleg_none); +DEFINE_STATEID_EVENT(open); +DEFINE_STATEID_EVENT(deleg_read); DEFINE_STATEID_EVENT(deleg_break); DEFINE_STATEID_EVENT(deleg_recall);
From: J. Bruce Fields bfields@redhat.com
[ Upstream commit 50747dd5e47bde3b7d7f839c84d0d3b554090497 ]
There are actually rare races where this is possible (e.g. if a new open intervenes between the read of i_writecount and the fi_fds).
Signed-off-by: J. Bruce Fields bfields@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/nfsd/nfs4state.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c index 0525acfe31314..1f646a27481fb 100644 --- a/fs/nfsd/nfs4state.c +++ b/fs/nfsd/nfs4state.c @@ -4954,7 +4954,6 @@ static int nfsd4_check_conflicting_opens(struct nfs4_client *clp, writes--; if (fp->fi_fds[O_RDWR]) writes--; - WARN_ON_ONCE(writes < 0); if (writes > 0) return -EAGAIN; spin_lock(&fp->fi_lock);
From: Anant Thazhemadam anant.thazhemadam@gmail.com
[ Upstream commit 7ca1db21ef8e0e6725b4d25deed1ca196f7efb28 ]
In p9_fd_create_unix, a check is performed to see if the addr passed as an argument is NULL. However, no check is performed to see if addr is a valid address, i.e., that it doesn't consist entirely of 0's. Initializing sun_server.sun_path from this faulty addr value leads to an uninitialized-value use, as detected by KMSAN. Checking for this faulty addr and returning a negative error number appropriately resolves the issue.
Link: http://lkml.kernel.org/r/20201012042404.2508-1-anant.thazhemadam@gmail.com Reported-by: syzbot+75d51fe5bf4ebe988518@syzkaller.appspotmail.com Tested-by: syzbot+75d51fe5bf4ebe988518@syzkaller.appspotmail.com Signed-off-by: Anant Thazhemadam anant.thazhemadam@gmail.com Signed-off-by: Dominique Martinet asmadeus@codewreck.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/9p/trans_fd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c index c0762a302162c..8f528e783a6c5 100644 --- a/net/9p/trans_fd.c +++ b/net/9p/trans_fd.c @@ -1023,7 +1023,7 @@ p9_fd_create_unix(struct p9_client *client, const char *addr, char *args)
csocket = NULL;
- if (addr == NULL) + if (!addr || !strlen(addr)) return -EINVAL;
if (strlen(addr) >= UNIX_PATH_MAX) {
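As an aside, the rule the patch enforces can be pictured with a small standalone helper: reject a NULL or empty path before anything is copied into sun_path. This is an illustrative sketch only (fill_unix_addr() is a made-up name, not 9p code):

#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

static int fill_unix_addr(struct sockaddr_un *sun, const char *addr)
{
        if (!addr || !strlen(addr))
                return -EINVAL;         /* same check p9_fd_create_unix() now performs */
        if (strlen(addr) >= sizeof(sun->sun_path))
                return -ENAMETOOLONG;

        memset(sun, 0, sizeof(*sun));
        sun->sun_family = AF_UNIX;
        strcpy(sun->sun_path, addr);    /* safe: length checked above */
        return 0;
}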
From: Yan, Zheng zyan@redhat.com
[ Upstream commit a33f6432b3a63a4909dbbb0967f7c9df8ff2de91 ]
Since nautilus, the MDS tracks dirfrags whose child inodes have caps in the open file table. When the MDS recovers, it prefetches all of these dirfrags. This avoids using backtraces to load inodes, but the dirfrag prefetch may load lots of useless inodes into the cache and make the MDS run out of memory.

Recent MDS versions add an option that disables dirfrag prefetch. When dirfrag prefetch is disabled, a recovering MDS only prefetches the corresponding dir inodes. Including each inode's parent/d_name in the cap reconnect message helps the MDS load those inodes into its cache.
Signed-off-by: "Yan, Zheng" zyan@redhat.com Reviewed-by: Jeff Layton jlayton@kernel.org Signed-off-by: Ilya Dryomov idryomov@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ceph/mds_client.c | 89 ++++++++++++++++++++++++++++++-------------- 1 file changed, 61 insertions(+), 28 deletions(-)
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c index 4a26862d7667e..76d8d9495d1d4 100644 --- a/fs/ceph/mds_client.c +++ b/fs/ceph/mds_client.c @@ -3612,6 +3612,39 @@ fail_msg: return err; }
+static struct dentry* d_find_primary(struct inode *inode) +{ + struct dentry *alias, *dn = NULL; + + if (hlist_empty(&inode->i_dentry)) + return NULL; + + spin_lock(&inode->i_lock); + if (hlist_empty(&inode->i_dentry)) + goto out_unlock; + + if (S_ISDIR(inode->i_mode)) { + alias = hlist_entry(inode->i_dentry.first, struct dentry, d_u.d_alias); + if (!IS_ROOT(alias)) + dn = dget(alias); + goto out_unlock; + } + + hlist_for_each_entry(alias, &inode->i_dentry, d_u.d_alias) { + spin_lock(&alias->d_lock); + if (!d_unhashed(alias) && + (ceph_dentry(alias)->flags & CEPH_DENTRY_PRIMARY_LINK)) { + dn = dget_dlock(alias); + } + spin_unlock(&alias->d_lock); + if (dn) + break; + } +out_unlock: + spin_unlock(&inode->i_lock); + return dn; +} + /* * Encode information about a cap for a reconnect with the MDS. */ @@ -3625,13 +3658,32 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap, struct ceph_inode_info *ci = cap->ci; struct ceph_reconnect_state *recon_state = arg; struct ceph_pagelist *pagelist = recon_state->pagelist; - int err; + struct dentry *dentry; + char *path; + int pathlen, err; + u64 pathbase; u64 snap_follows;
dout(" adding %p ino %llx.%llx cap %p %lld %s\n", inode, ceph_vinop(inode), cap, cap->cap_id, ceph_cap_string(cap->issued));
+ dentry = d_find_primary(inode); + if (dentry) { + /* set pathbase to parent dir when msg_version >= 2 */ + path = ceph_mdsc_build_path(dentry, &pathlen, &pathbase, + recon_state->msg_version >= 2); + dput(dentry); + if (IS_ERR(path)) { + err = PTR_ERR(path); + goto out_err; + } + } else { + path = NULL; + pathlen = 0; + pathbase = 0; + } + spin_lock(&ci->i_ceph_lock); cap->seq = 0; /* reset cap seq */ cap->issue_seq = 0; /* and issue_seq */ @@ -3652,7 +3704,7 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap, rec.v2.wanted = cpu_to_le32(__ceph_caps_wanted(ci)); rec.v2.issued = cpu_to_le32(cap->issued); rec.v2.snaprealm = cpu_to_le64(ci->i_snap_realm->ino); - rec.v2.pathbase = 0; + rec.v2.pathbase = cpu_to_le64(pathbase); rec.v2.flock_len = (__force __le32) ((ci->i_ceph_flags & CEPH_I_ERROR_FILELOCK) ? 0 : 1); } else { @@ -3663,7 +3715,7 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap, ceph_encode_timespec64(&rec.v1.mtime, &inode->i_mtime); ceph_encode_timespec64(&rec.v1.atime, &inode->i_atime); rec.v1.snaprealm = cpu_to_le64(ci->i_snap_realm->ino); - rec.v1.pathbase = 0; + rec.v1.pathbase = cpu_to_le64(pathbase); }
if (list_empty(&ci->i_cap_snaps)) { @@ -3725,7 +3777,7 @@ encode_again: sizeof(struct ceph_filelock); rec.v2.flock_len = cpu_to_le32(struct_len);
- struct_len += sizeof(u32) + sizeof(rec.v2); + struct_len += sizeof(u32) + pathlen + sizeof(rec.v2);
if (struct_v >= 2) struct_len += sizeof(u64); /* snap_follows */ @@ -3749,7 +3801,7 @@ encode_again: ceph_pagelist_encode_8(pagelist, 1); ceph_pagelist_encode_32(pagelist, struct_len); } - ceph_pagelist_encode_string(pagelist, NULL, 0); + ceph_pagelist_encode_string(pagelist, path, pathlen); ceph_pagelist_append(pagelist, &rec, sizeof(rec.v2)); ceph_locks_to_pagelist(flocks, pagelist, num_fcntl_locks, num_flock_locks); @@ -3758,39 +3810,20 @@ encode_again: out_freeflocks: kfree(flocks); } else { - u64 pathbase = 0; - int pathlen = 0; - char *path = NULL; - struct dentry *dentry; - - dentry = d_find_alias(inode); - if (dentry) { - path = ceph_mdsc_build_path(dentry, - &pathlen, &pathbase, 0); - dput(dentry); - if (IS_ERR(path)) { - err = PTR_ERR(path); - goto out_err; - } - rec.v1.pathbase = cpu_to_le64(pathbase); - } - err = ceph_pagelist_reserve(pagelist, sizeof(u64) + sizeof(u32) + pathlen + sizeof(rec.v1)); - if (err) { - goto out_freepath; - } + if (err) + goto out_err;
ceph_pagelist_encode_64(pagelist, ceph_ino(inode)); ceph_pagelist_encode_string(pagelist, path, pathlen); ceph_pagelist_append(pagelist, &rec, sizeof(rec.v1)); -out_freepath: - ceph_mdsc_free_path(path, pathlen); }
out_err: - if (err >= 0) + ceph_mdsc_free_path(path, pathlen); + if (!err) recon_state->nr_caps++; return err; }
From: Madhuparna Bhowmik madhuparnabhowmik10@gmail.com
[ Upstream commit 4b2e7f99cdd314263c9d172bc17193b8b6bba463 ]
In rdc321x_wdt_probe(), rdc321x_wdt_device.queue is initialized after misc_register(). If an ioctl that ends up calling rdc321x_wdt_start() arrives before that initialization, it will see an uninitialized value of rdc321x_wdt_device.queue; hence initialize it before misc_register(). Similarly, rdc321x_wdt_device.default_ticks is accessed in the reset() function called from the write callback, so initialize it before misc_register() as well.
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Madhuparna Bhowmik madhuparnabhowmik10@gmail.com Reviewed-by: Guenter Roeck linux@roeck-us.net Reviewed-by: Florian Fainelli f.fainelli@gmail.com Link: https://lore.kernel.org/r/20200807112902.28764-1-madhuparnabhowmik10@gmail.c... Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Wim Van Sebroeck wim@linux-watchdog.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/watchdog/rdc321x_wdt.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/watchdog/rdc321x_wdt.c b/drivers/watchdog/rdc321x_wdt.c index 57187efeb86f1..f0c94ea51c3e4 100644 --- a/drivers/watchdog/rdc321x_wdt.c +++ b/drivers/watchdog/rdc321x_wdt.c @@ -231,6 +231,8 @@ static int rdc321x_wdt_probe(struct platform_device *pdev)
rdc321x_wdt_device.sb_pdev = pdata->sb_pdev; rdc321x_wdt_device.base_reg = r->start; + rdc321x_wdt_device.queue = 0; + rdc321x_wdt_device.default_ticks = ticks;
err = misc_register(&rdc321x_wdt_misc); if (err < 0) { @@ -245,14 +247,11 @@ static int rdc321x_wdt_probe(struct platform_device *pdev) rdc321x_wdt_device.base_reg, RDC_WDT_RST);
init_completion(&rdc321x_wdt_device.stop); - rdc321x_wdt_device.queue = 0;
clear_bit(0, &rdc321x_wdt_device.inuse);
timer_setup(&rdc321x_wdt_device.timer, rdc321x_wdt_trigger, 0);
- rdc321x_wdt_device.default_ticks = ticks; - dev_info(&pdev->dev, "watchdog init success\n");
return 0;
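The general rule behind the reordering above is that every field a registered interface can reach must be initialized before the registration call. A hypothetical probe() sketch (example_* symbols are stand-ins for the driver's globals, which are assumed to be defined elsewhere):

static int example_wdt_probe(struct platform_device *pdev)
{
        int err;

        /* Initialize everything the misc device's fops may touch... */
        example_wdt_device.queue = 0;
        example_wdt_device.default_ticks = ticks;
        init_completion(&example_wdt_device.stop);
        timer_setup(&example_wdt_device.timer, example_wdt_trigger, 0);

        /* ...and only then make the device reachable from userspace. */
        err = misc_register(&example_wdt_misc);
        if (err < 0)
                return err;

        dev_info(&pdev->dev, "watchdog init success\n");
        return 0;
}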
From: changfengnan fengnanchang@foxmail.com
[ Upstream commit fc750a3b44bdccb9fb96d6abbc48a9b8e480ce7b ]
When ext4 is formatted with lazy_journal_init=1 and transactions from the previous filesystem are still on disk, it is possible that they are considered during a recovery after a crash. Because the checksum seed has changed, the CRC check will fail, and the journal recovery fails with a checksum error although the journal is otherwise perfectly valid. Fix the problem by checking commit block timestamps to determine whether the data in the journal block is just stale or whether it is indeed corrupt.
Reported-by: kernel test robot lkp@intel.com Reviewed-by: Andreas Dilger adilger@dilger.ca Signed-off-by: Fengnan Chang changfengnan@hikvision.com Signed-off-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20201012164900.20197-1-jack@suse.cz Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Sasha Levin sashal@kernel.org --- fs/jbd2/recovery.c | 78 +++++++++++++++++++++++++++++++++++++++------- 1 file changed, 66 insertions(+), 12 deletions(-)
diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c index faa97d748474d..fb134c7a12c89 100644 --- a/fs/jbd2/recovery.c +++ b/fs/jbd2/recovery.c @@ -428,6 +428,8 @@ static int do_one_pass(journal_t *journal, __u32 crc32_sum = ~0; /* Transactional Checksums */ int descr_csum_size = 0; int block_error = 0; + bool need_check_commit_time = false; + __u64 last_trans_commit_time = 0, commit_time;
/* * First thing is to establish what we expect to find in the log @@ -520,12 +522,21 @@ static int do_one_pass(journal_t *journal, if (descr_csum_size > 0 && !jbd2_descriptor_block_csum_verify(journal, bh->b_data)) { - printk(KERN_ERR "JBD2: Invalid checksum " - "recovering block %lu in log\n", - next_log_block); - err = -EFSBADCRC; - brelse(bh); - goto failed; + /* + * PASS_SCAN can see stale blocks due to lazy + * journal init. Don't error out on those yet. + */ + if (pass != PASS_SCAN) { + pr_err("JBD2: Invalid checksum recovering block %lu in log\n", + next_log_block); + err = -EFSBADCRC; + brelse(bh); + goto failed; + } + need_check_commit_time = true; + jbd_debug(1, + "invalid descriptor block found in %lu\n", + next_log_block); }
/* If it is a valid descriptor block, replay it @@ -535,6 +546,7 @@ static int do_one_pass(journal_t *journal, if (pass != PASS_REPLAY) { if (pass == PASS_SCAN && jbd2_has_feature_checksum(journal) && + !need_check_commit_time && !info->end_transaction) { if (calc_chksums(journal, bh, &next_log_block, @@ -683,11 +695,41 @@ static int do_one_pass(journal_t *journal, * mentioned conditions. Hence assume * "Interrupted Commit".) */ + commit_time = be64_to_cpu( + ((struct commit_header *)bh->b_data)->h_commit_sec); + /* + * If need_check_commit_time is set, it means we are in + * PASS_SCAN and csum verify failed before. If + * commit_time is increasing, it's the same journal, + * otherwise it is stale journal block, just end this + * recovery. + */ + if (need_check_commit_time) { + if (commit_time >= last_trans_commit_time) { + pr_err("JBD2: Invalid checksum found in transaction %u\n", + next_commit_ID); + err = -EFSBADCRC; + brelse(bh); + goto failed; + } + ignore_crc_mismatch: + /* + * It likely does not belong to same journal, + * just end this recovery with success. + */ + jbd_debug(1, "JBD2: Invalid checksum ignored in transaction %u, likely stale data\n", + next_commit_ID); + err = 0; + brelse(bh); + goto done; + }
- /* Found an expected commit block: if checksums - * are present verify them in PASS_SCAN; else not + /* + * Found an expected commit block: if checksums + * are present, verify them in PASS_SCAN; else not * much to do other than move on to the next sequence - * number. */ + * number. + */ if (pass == PASS_SCAN && jbd2_has_feature_checksum(journal)) { struct commit_header *cbh = @@ -719,6 +761,8 @@ static int do_one_pass(journal_t *journal, !jbd2_commit_block_csum_verify(journal, bh->b_data)) { chksum_error: + if (commit_time < last_trans_commit_time) + goto ignore_crc_mismatch; info->end_transaction = next_commit_ID;
if (!jbd2_has_feature_async_commit(journal)) { @@ -728,11 +772,24 @@ static int do_one_pass(journal_t *journal, break; } } + if (pass == PASS_SCAN) + last_trans_commit_time = commit_time; brelse(bh); next_commit_ID++; continue;
case JBD2_REVOKE_BLOCK: + /* + * Check revoke block crc in pass_scan, if csum verify + * failed, check commit block time later. + */ + if (pass == PASS_SCAN && + !jbd2_descriptor_block_csum_verify(journal, + bh->b_data)) { + jbd_debug(1, "JBD2: invalid revoke block found in %lu\n", + next_log_block); + need_check_commit_time = true; + } /* If we aren't in the REVOKE pass, then we can * just skip over this block. */ if (pass != PASS_REVOKE) { @@ -800,9 +857,6 @@ static int scan_revoke_records(journal_t *journal, struct buffer_head *bh, offset = sizeof(jbd2_journal_revoke_header_t); rcount = be32_to_cpu(header->r_count);
- if (!jbd2_descriptor_block_csum_verify(journal, header)) - return -EFSBADCRC; - if (jbd2_journal_has_csum_v2or3(journal)) csum_size = sizeof(struct jbd2_journal_block_tail); if (rcount > journal->j_blocksize - csum_size)
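The heuristic described above boils down to one comparison: commit timestamps within a single journal only move forward, so after a checksum failure in PASS_SCAN a commit time that jumps backwards marks leftover blocks from a previous filesystem rather than corruption. A standalone sketch of that decision (names are hypothetical, not jbd2 symbols):

#include <stdint.h>

enum scan_verdict {
        SCAN_CORRUPT,           /* same journal: fail recovery with a CRC error */
        SCAN_STALE_END_OF_LOG,  /* old journal contents: end recovery successfully */
};

/* Called only when a checksum check failed during PASS_SCAN. */
static enum scan_verdict classify_bad_csum(uint64_t commit_time,
                                           uint64_t last_trans_commit_time)
{
        if (commit_time >= last_trans_commit_time)
                return SCAN_CORRUPT;
        return SCAN_STALE_END_OF_LOG;
}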
From: Jan Kara jack@suse.cz
[ Upstream commit e0770e91424f694b461141cbc99adf6b23006b60 ]
When we try to use file already used as a quota file again (for the same or different quota type), strange things can happen. At the very least lockdep annotations may be wrong but also inode flags may be wrongly set / reset. When the file is used for two quota types at once we can even corrupt the file and likely crash the kernel. Catch all these cases by checking whether passed file is already used as quota file and bail early in that case.
This fixes occasional generic/219 failure due to lockdep complaint.
Reviewed-by: Andreas Dilger adilger@dilger.ca Reported-by: Ritesh Harjani riteshh@linux.ibm.com Signed-off-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20201015110330.28716-1-jack@suse.cz Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ext4/super.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/fs/ext4/super.c b/fs/ext4/super.c index ea425b49b3456..d31ae5a878594 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -6042,6 +6042,11 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id, /* Quotafile not on the same filesystem? */ if (path->dentry->d_sb != sb) return -EXDEV; + + /* Quota already enabled for this file? */ + if (IS_NOQUOTA(d_inode(path->dentry))) + return -EBUSY; + /* Journaling quota? */ if (EXT4_SB(sb)->s_qf_names[type]) { /* Quotafile not in fs root? */
From: Fabiano Rosas farosas@linux.ibm.com
[ Upstream commit 05e6295dc7de859c9d56334805485c4d20bebf25 ]
The current nested KVM code does not support HPT guests. This is informed/enforced in some ways:
- Hosts < P9 will not be able to enable the nested HV feature;
- The nested hypervisor MMU capabilities will not contain KVM_CAP_PPC_MMU_HASH_V3;
- QEMU reflects the MMU capabilities in the 'ibm,arch-vec-5-platform-support' device-tree property;
- The nested guest, at 'prom_parse_mmu_model' ignores the 'disable_radix' kernel command line option if HPT is not supported;
- The KVM_PPC_CONFIGURE_V3_MMU ioctl will fail if trying to use HPT.
There is, however, still a way to start an HPT guest by using max-cpu-compat=power8 in the QEMU machine options. This leads to the guest being set to use hash after QEMU calls the KVM_PPC_ALLOCATE_HTAB ioctl.
With the guest set to hash, the nested hypervisor goes through the entry path that has no knowledge of nesting (kvmppc_run_vcpu) and crashes when it tries to execute a hypervisor-privileged (mtspr HDEC) instruction at __kvmppc_vcore_entry:
root@L1:~ $ qemu-system-ppc64 -machine pseries,max-cpu-compat=power8 ...
<snip> [ 538.543303] CPU: 83 PID: 25185 Comm: CPU 0/KVM Not tainted 5.9.0-rc4 #1 [ 538.543355] NIP: c00800000753f388 LR: c00800000753f368 CTR: c0000000001e5ec0 [ 538.543417] REGS: c0000013e91e33b0 TRAP: 0700 Not tainted (5.9.0-rc4) [ 538.543470] MSR: 8000000002843033 <SF,VEC,VSX,FP,ME,IR,DR,RI,LE> CR: 22422882 XER: 20040000 [ 538.543546] CFAR: c00800000753f4b0 IRQMASK: 3 GPR00: c0080000075397a0 c0000013e91e3640 c00800000755e600 0000000080000000 GPR04: 0000000000000000 c0000013eab19800 c000001394de0000 00000043a054db72 GPR08: 00000000003b1652 0000000000000000 0000000000000000 c0080000075502e0 GPR12: c0000000001e5ec0 c0000007ffa74200 c0000013eab19800 0000000000000008 GPR16: 0000000000000000 c00000139676c6c0 c000000001d23948 c0000013e91e38b8 GPR20: 0000000000000053 0000000000000000 0000000000000001 0000000000000000 GPR24: 0000000000000001 0000000000000001 0000000000000000 0000000000000001 GPR28: 0000000000000001 0000000000000053 c0000013eab19800 0000000000000001 [ 538.544067] NIP [c00800000753f388] __kvmppc_vcore_entry+0x90/0x104 [kvm_hv] [ 538.544121] LR [c00800000753f368] __kvmppc_vcore_entry+0x70/0x104 [kvm_hv] [ 538.544173] Call Trace: [ 538.544196] [c0000013e91e3640] [c0000013e91e3680] 0xc0000013e91e3680 (unreliable) [ 538.544260] [c0000013e91e3820] [c0080000075397a0] kvmppc_run_core+0xbc8/0x19d0 [kvm_hv] [ 538.544325] [c0000013e91e39e0] [c00800000753d99c] kvmppc_vcpu_run_hv+0x404/0xc00 [kvm_hv] [ 538.544394] [c0000013e91e3ad0] [c0080000072da4fc] kvmppc_vcpu_run+0x34/0x48 [kvm] [ 538.544472] [c0000013e91e3af0] [c0080000072d61b8] kvm_arch_vcpu_ioctl_run+0x310/0x420 [kvm] [ 538.544539] [c0000013e91e3b80] [c0080000072c7450] kvm_vcpu_ioctl+0x298/0x778 [kvm] [ 538.544605] [c0000013e91e3ce0] [c0000000004b8c2c] sys_ioctl+0x1dc/0xc90 [ 538.544662] [c0000013e91e3dc0] [c00000000002f9a4] system_call_exception+0xe4/0x1c0 [ 538.544726] [c0000013e91e3e20] [c00000000000d140] system_call_common+0xf0/0x27c [ 538.544787] Instruction dump: [ 538.544821] f86d1098 60000000 60000000 48000099 e8ad0fe8 e8c500a0 e9264140 75290002 [ 538.544886] 7d1602a6 7cec42a6 40820008 7d0807b4 <7d164ba6> 7d083a14 f90d10a0 480104fd [ 538.544953] ---[ end trace 74423e2b948c2e0c ]---
This patch makes the KVM_PPC_ALLOCATE_HTAB ioctl fail when running in the nested hypervisor, causing QEMU to abort.
Reported-by: Satheesh Rajendran sathnaga@linux.vnet.ibm.com Signed-off-by: Fabiano Rosas farosas@linux.ibm.com Reviewed-by: Greg Kurz groug@kaod.org Reviewed-by: David Gibson david@gibson.dropbear.id.au Signed-off-by: Paul Mackerras paulus@ozlabs.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/powerpc/kvm/book3s_hv.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 3bd3118c76330..e2b476d76506a 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -5257,6 +5257,12 @@ static long kvm_arch_vm_ioctl_hv(struct file *filp, case KVM_PPC_ALLOCATE_HTAB: { u32 htab_order;
+ /* If we're a nested hypervisor, we currently only support radix */ + if (kvmhv_on_pseries()) { + r = -EOPNOTSUPP; + break; + } + r = -EFAULT; if (get_user(htab_order, (u32 __user *)argp)) break;
From: Christoph Hellwig hch@lst.de
[ Upstream commit 7007e9dd56767a95de0947b3f7599bcc2f21687f ]
Rename scsi_init_io() to scsi_alloc_sgtables(), and ensure callers call scsi_free_sgtables() to clean up failures close to where the tables were allocated, instead of leaking the cleanup down the generic I/O submission path.
Link: https://lore.kernel.org/r/20201005084130.143273-9-hch@lst.de Reviewed-by: Hannes Reinecke hare@suse.de Signed-off-by: Christoph Hellwig hch@lst.de Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/scsi_lib.c | 22 ++++++++-------------- drivers/scsi/sd.c | 27 +++++++++++++++------------ drivers/scsi/sr.c | 16 ++++++---------- include/scsi/scsi_cmnd.h | 3 ++- 4 files changed, 31 insertions(+), 37 deletions(-)
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index 7affaaf8b98e0..198130b6a9963 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -530,7 +530,7 @@ static void scsi_uninit_cmd(struct scsi_cmnd *cmd) } }
-static void scsi_free_sgtables(struct scsi_cmnd *cmd) +void scsi_free_sgtables(struct scsi_cmnd *cmd) { if (cmd->sdb.table.nents) sg_free_table_chained(&cmd->sdb.table, @@ -539,6 +539,7 @@ static void scsi_free_sgtables(struct scsi_cmnd *cmd) sg_free_table_chained(&cmd->prot_sdb->table, SCSI_INLINE_PROT_SG_CNT); } +EXPORT_SYMBOL_GPL(scsi_free_sgtables);
static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd) { @@ -966,7 +967,7 @@ static inline bool scsi_cmd_needs_dma_drain(struct scsi_device *sdev, }
/** - * scsi_init_io - SCSI I/O initialization function. + * scsi_alloc_sgtables - allocate S/G tables for a command * @cmd: command descriptor we wish to initialize * * Returns: @@ -974,7 +975,7 @@ static inline bool scsi_cmd_needs_dma_drain(struct scsi_device *sdev, * * BLK_STS_RESOURCE - if the failure is retryable * * BLK_STS_IOERR - if the failure is fatal */ -blk_status_t scsi_init_io(struct scsi_cmnd *cmd) +blk_status_t scsi_alloc_sgtables(struct scsi_cmnd *cmd) { struct scsi_device *sdev = cmd->device; struct request *rq = cmd->request; @@ -1066,7 +1067,7 @@ out_free_sgtables: scsi_free_sgtables(cmd); return ret; } -EXPORT_SYMBOL(scsi_init_io); +EXPORT_SYMBOL(scsi_alloc_sgtables);
/** * scsi_initialize_rq - initialize struct scsi_cmnd partially @@ -1154,7 +1155,7 @@ static blk_status_t scsi_setup_scsi_cmnd(struct scsi_device *sdev, * submit a request without an attached bio. */ if (req->bio) { - blk_status_t ret = scsi_init_io(cmd); + blk_status_t ret = scsi_alloc_sgtables(cmd); if (unlikely(ret != BLK_STS_OK)) return ret; } else { @@ -1194,7 +1195,6 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev, struct request *req) { struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req); - blk_status_t ret;
if (!blk_rq_bytes(req)) cmd->sc_data_direction = DMA_NONE; @@ -1204,14 +1204,8 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev, cmd->sc_data_direction = DMA_FROM_DEVICE;
if (blk_rq_is_scsi(req)) - ret = scsi_setup_scsi_cmnd(sdev, req); - else - ret = scsi_setup_fs_cmnd(sdev, req); - - if (ret != BLK_STS_OK) - scsi_free_sgtables(cmd); - - return ret; + return scsi_setup_scsi_cmnd(sdev, req); + return scsi_setup_fs_cmnd(sdev, req); }
static blk_status_t diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index 16503e22691ed..e93a9a874004f 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -866,7 +866,7 @@ static blk_status_t sd_setup_unmap_cmnd(struct scsi_cmnd *cmd) cmd->transfersize = data_len; rq->timeout = SD_TIMEOUT;
- return scsi_init_io(cmd); + return scsi_alloc_sgtables(cmd); }
static blk_status_t sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd, @@ -897,7 +897,7 @@ static blk_status_t sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd, cmd->transfersize = data_len; rq->timeout = unmap ? SD_TIMEOUT : SD_WRITE_SAME_TIMEOUT;
- return scsi_init_io(cmd); + return scsi_alloc_sgtables(cmd); }
static blk_status_t sd_setup_write_same10_cmnd(struct scsi_cmnd *cmd, @@ -928,7 +928,7 @@ static blk_status_t sd_setup_write_same10_cmnd(struct scsi_cmnd *cmd, cmd->transfersize = data_len; rq->timeout = unmap ? SD_TIMEOUT : SD_WRITE_SAME_TIMEOUT;
- return scsi_init_io(cmd); + return scsi_alloc_sgtables(cmd); }
static blk_status_t sd_setup_write_zeroes_cmnd(struct scsi_cmnd *cmd) @@ -1069,7 +1069,7 @@ static blk_status_t sd_setup_write_same_cmnd(struct scsi_cmnd *cmd) * knows how much to actually write. */ rq->__data_len = sdp->sector_size; - ret = scsi_init_io(cmd); + ret = scsi_alloc_sgtables(cmd); rq->__data_len = blk_rq_bytes(rq);
return ret; @@ -1187,23 +1187,24 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd) unsigned int dif; bool dix;
- ret = scsi_init_io(cmd); + ret = scsi_alloc_sgtables(cmd); if (ret != BLK_STS_OK) return ret;
+ ret = BLK_STS_IOERR; if (!scsi_device_online(sdp) || sdp->changed) { scmd_printk(KERN_ERR, cmd, "device offline or changed\n"); - return BLK_STS_IOERR; + goto fail; }
if (blk_rq_pos(rq) + blk_rq_sectors(rq) > get_capacity(rq->rq_disk)) { scmd_printk(KERN_ERR, cmd, "access beyond end of device\n"); - return BLK_STS_IOERR; + goto fail; }
if ((blk_rq_pos(rq) & mask) || (blk_rq_sectors(rq) & mask)) { scmd_printk(KERN_ERR, cmd, "request not aligned to the logical block size\n"); - return BLK_STS_IOERR; + goto fail; }
/* @@ -1225,7 +1226,7 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd) if (req_op(rq) == REQ_OP_ZONE_APPEND) { ret = sd_zbc_prepare_zone_append(cmd, &lba, nr_blocks); if (ret) - return ret; + goto fail; }
fua = rq->cmd_flags & REQ_FUA ? 0x8 : 0; @@ -1253,7 +1254,7 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd) }
if (unlikely(ret != BLK_STS_OK)) - return ret; + goto fail;
/* * We shouldn't disconnect in the middle of a sector, so with a dumb @@ -1277,10 +1278,12 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd) blk_rq_sectors(rq)));
/* - * This indicates that the command is ready from our end to be - * queued. + * This indicates that the command is ready from our end to be queued. */ return BLK_STS_OK; +fail: + scsi_free_sgtables(cmd); + return ret; }
static blk_status_t sd_init_command(struct scsi_cmnd *cmd) diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c index 3b3a53c6a0de5..7e8fe55f3b339 100644 --- a/drivers/scsi/sr.c +++ b/drivers/scsi/sr.c @@ -392,15 +392,11 @@ static blk_status_t sr_init_command(struct scsi_cmnd *SCpnt) struct request *rq = SCpnt->request; blk_status_t ret;
- ret = scsi_init_io(SCpnt); + ret = scsi_alloc_sgtables(SCpnt); if (ret != BLK_STS_OK) - goto out; + return ret; cd = scsi_cd(rq->rq_disk);
- /* from here on until we're complete, any goto out - * is used for a killable error condition */ - ret = BLK_STS_IOERR; - SCSI_LOG_HLQUEUE(1, scmd_printk(KERN_INFO, SCpnt, "Doing sr request, block = %d\n", block));
@@ -509,12 +505,12 @@ static blk_status_t sr_init_command(struct scsi_cmnd *SCpnt) SCpnt->allowed = MAX_RETRIES;
/* - * This indicates that the command is ready from our end to be - * queued. + * This indicates that the command is ready from our end to be queued. */ - ret = BLK_STS_OK; + return BLK_STS_OK; out: - return ret; + scsi_free_sgtables(SCpnt); + return BLK_STS_IOERR; }
static int sr_block_open(struct block_device *bdev, fmode_t mode) diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h index e76bac4d14c51..69ade4fb71aab 100644 --- a/include/scsi/scsi_cmnd.h +++ b/include/scsi/scsi_cmnd.h @@ -165,7 +165,8 @@ extern void *scsi_kmap_atomic_sg(struct scatterlist *sg, int sg_count, size_t *offset, size_t *len); extern void scsi_kunmap_atomic_sg(void *virt);
-extern blk_status_t scsi_init_io(struct scsi_cmnd *cmd); +blk_status_t scsi_alloc_sgtables(struct scsi_cmnd *cmd); +void scsi_free_sgtables(struct scsi_cmnd *cmd);
#ifdef CONFIG_SCSI_DMA extern int scsi_dma_map(struct scsi_cmnd *cmd);
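The ownership rule the change enforces is that whichever function calls scsi_alloc_sgtables() also calls scsi_free_sgtables() on every later failure path, instead of relying on a caller further up the submission path. A condensed, hypothetical setup function showing the shape (not actual driver code):

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>

static blk_status_t example_setup_cmnd(struct scsi_cmnd *cmd)
{
        blk_status_t ret;

        ret = scsi_alloc_sgtables(cmd);
        if (ret != BLK_STS_OK)
                return ret;             /* nothing allocated, nothing to free */

        ret = BLK_STS_IOERR;
        if (!scsi_device_online(cmd->device))
                goto fail;

        /* ...build the CDB, set transfer length, etc... */

        return BLK_STS_OK;
fail:
        scsi_free_sgtables(cmd);        /* undo our own allocation locally */
        return ret;
}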
From: Bob Peterson rpeterso@redhat.com
[ Upstream commit ee1e2c773e4f4ce2213f9d77cc703b669ca6fa3f ]
Before this patch, we were not calling truncate_inode_pages_final for the address space of glocks, which left the possibility of a leak. We now take care of the problem instead of complaining about it, and we do it during glock tear-down.
Signed-off-by: Bob Peterson rpeterso@redhat.com Signed-off-by: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/gfs2/glock.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c index f13b136654cae..3554e71be06ec 100644 --- a/fs/gfs2/glock.c +++ b/fs/gfs2/glock.c @@ -270,7 +270,12 @@ static void __gfs2_glock_put(struct gfs2_glock *gl) gfs2_glock_remove_from_lru(gl); spin_unlock(&gl->gl_lockref.lock); GLOCK_BUG_ON(gl, !list_empty(&gl->gl_holders)); - GLOCK_BUG_ON(gl, mapping && mapping->nrpages && !gfs2_withdrawn(sdp)); + if (mapping) { + truncate_inode_pages_final(mapping); + if (!gfs2_withdrawn(sdp)) + GLOCK_BUG_ON(gl, mapping->nrpages || + mapping->nrexceptional); + } trace_gfs2_glock_put(gl); sdp->sd_lockstruct.ls_ops->lm_put_lock(gl); }
From: Andrew Price anprice@redhat.com
[ Upstream commit 0e539ca1bbbe85a86549c97a30a765ada4a09df9 ]
When an rindex entry is found to be corrupt, compute_bitstructs() calls gfs2_consist_rgrpd() which calls gfs2_rgrp_dump() like this:
gfs2_rgrp_dump(NULL, rgd->rd_gl, fs_id_buf);
gfs2_rgrp_dump then dereferences the gl without checking it and we get
BUG: KASAN: null-ptr-deref in gfs2_rgrp_dump+0x28/0x280
because there's no rgrp glock involved while reading the rindex on mount.
Fix this by changing gfs2_rgrp_dump to take an rgrp argument.
Reported-by: syzbot+43fa87986bdd31df9de6@syzkaller.appspotmail.com Signed-off-by: Andrew Price anprice@redhat.com Signed-off-by: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/gfs2/glops.c | 11 ++++++++++- fs/gfs2/rgrp.c | 9 +++------ fs/gfs2/rgrp.h | 2 +- fs/gfs2/util.c | 2 +- 4 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c index de1d5f1d9ff85..c2c90747d79b5 100644 --- a/fs/gfs2/glops.c +++ b/fs/gfs2/glops.c @@ -227,6 +227,15 @@ static void rgrp_go_inval(struct gfs2_glock *gl, int flags) rgd->rd_flags &= ~GFS2_RDF_UPTODATE; }
+static void gfs2_rgrp_go_dump(struct seq_file *seq, struct gfs2_glock *gl, + const char *fs_id_buf) +{ + struct gfs2_rgrpd *rgd = gfs2_glock2rgrp(gl); + + if (rgd) + gfs2_rgrp_dump(seq, rgd, fs_id_buf); +} + static struct gfs2_inode *gfs2_glock2inode(struct gfs2_glock *gl) { struct gfs2_inode *ip; @@ -712,7 +721,7 @@ const struct gfs2_glock_operations gfs2_rgrp_glops = { .go_sync = rgrp_go_sync, .go_inval = rgrp_go_inval, .go_lock = gfs2_rgrp_go_lock, - .go_dump = gfs2_rgrp_dump, + .go_dump = gfs2_rgrp_go_dump, .go_type = LM_TYPE_RGRP, .go_flags = GLOF_LVB, }; diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c index 074f228ea8390..1bba5a9d45fa3 100644 --- a/fs/gfs2/rgrp.c +++ b/fs/gfs2/rgrp.c @@ -2209,20 +2209,17 @@ static void rgblk_free(struct gfs2_sbd *sdp, struct gfs2_rgrpd *rgd, /** * gfs2_rgrp_dump - print out an rgrp * @seq: The iterator - * @gl: The glock in question + * @rgd: The rgrp in question * @fs_id_buf: pointer to file system id (if requested) * */
-void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_glock *gl, +void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_rgrpd *rgd, const char *fs_id_buf) { - struct gfs2_rgrpd *rgd = gl->gl_object; struct gfs2_blkreserv *trs; const struct rb_node *n;
- if (rgd == NULL) - return; gfs2_print_dbg(seq, "%s R: n:%llu f:%02x b:%u/%u i:%u r:%u e:%u\n", fs_id_buf, (unsigned long long)rgd->rd_addr, rgd->rd_flags, @@ -2253,7 +2250,7 @@ static void gfs2_rgrp_error(struct gfs2_rgrpd *rgd) (unsigned long long)rgd->rd_addr); fs_warn(sdp, "umount on all nodes and run fsck.gfs2 to fix the error\n"); sprintf(fs_id_buf, "fsid=%s: ", sdp->sd_fsname); - gfs2_rgrp_dump(NULL, rgd->rd_gl, fs_id_buf); + gfs2_rgrp_dump(NULL, rgd, fs_id_buf); rgd->rd_flags |= GFS2_RDF_ERROR; }
diff --git a/fs/gfs2/rgrp.h b/fs/gfs2/rgrp.h index a1d7e14fc55b9..9a587ada51eda 100644 --- a/fs/gfs2/rgrp.h +++ b/fs/gfs2/rgrp.h @@ -67,7 +67,7 @@ extern void gfs2_rlist_add(struct gfs2_inode *ip, struct gfs2_rgrp_list *rlist, extern void gfs2_rlist_alloc(struct gfs2_rgrp_list *rlist); extern void gfs2_rlist_free(struct gfs2_rgrp_list *rlist); extern u64 gfs2_ri_total(struct gfs2_sbd *sdp); -extern void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_glock *gl, +extern void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_rgrpd *rgd, const char *fs_id_buf); extern int gfs2_rgrp_send_discards(struct gfs2_sbd *sdp, u64 offset, struct buffer_head *bh, diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c index 1cd0328cae20a..0fba3bf641890 100644 --- a/fs/gfs2/util.c +++ b/fs/gfs2/util.c @@ -419,7 +419,7 @@ void gfs2_consist_rgrpd_i(struct gfs2_rgrpd *rgd, char fs_id_buf[sizeof(sdp->sd_fsname) + 7];
sprintf(fs_id_buf, "fsid=%s: ", sdp->sd_fsname); - gfs2_rgrp_dump(NULL, rgd->rd_gl, fs_id_buf); + gfs2_rgrp_dump(NULL, rgd, fs_id_buf); gfs2_lm(sdp, "fatal: filesystem consistency error\n" " RG = %llu\n"
From: Jamie Iles jamie@nuviainc.com
[ Upstream commit c2a04b02c060c4858762edce4674d5cba3e5a96f ]
syzkaller found the following splat with CONFIG_DEBUG_KOBJECT_RELEASE=y:
Read of size 1 at addr ffff000028e896b8 by task kworker/1:2/228
CPU: 1 PID: 228 Comm: kworker/1:2 Tainted: G S 5.9.0-rc8+ #101 Hardware name: linux,dummy-virt (DT) Workqueue: events kobject_delayed_cleanup Call trace: dump_backtrace+0x0/0x4d8 show_stack+0x34/0x48 dump_stack+0x174/0x1f8 print_address_description.constprop.0+0x5c/0x550 kasan_report+0x13c/0x1c0 __asan_report_load1_noabort+0x34/0x60 memcmp+0xd0/0xd8 gfs2_uevent+0xc4/0x188 kobject_uevent_env+0x54c/0x1240 kobject_uevent+0x2c/0x40 __kobject_del+0x190/0x1d8 kobject_delayed_cleanup+0x2bc/0x3b8 process_one_work+0x96c/0x18c0 worker_thread+0x3f0/0xc30 kthread+0x390/0x498 ret_from_fork+0x10/0x18
Allocated by task 1110: kasan_save_stack+0x28/0x58 __kasan_kmalloc.isra.0+0xc8/0xe8 kasan_kmalloc+0x10/0x20 kmem_cache_alloc_trace+0x1d8/0x2f0 alloc_super+0x64/0x8c0 sget_fc+0x110/0x620 get_tree_bdev+0x190/0x648 gfs2_get_tree+0x50/0x228 vfs_get_tree+0x84/0x2e8 path_mount+0x1134/0x1da8 do_mount+0x124/0x138 __arm64_sys_mount+0x164/0x238 el0_svc_common.constprop.0+0x15c/0x598 do_el0_svc+0x60/0x150 el0_svc+0x34/0xb0 el0_sync_handler+0xc8/0x5b4 el0_sync+0x15c/0x180
Freed by task 228: kasan_save_stack+0x28/0x58 kasan_set_track+0x28/0x40 kasan_set_free_info+0x24/0x48 __kasan_slab_free+0x118/0x190 kasan_slab_free+0x14/0x20 slab_free_freelist_hook+0x6c/0x210 kfree+0x13c/0x460
Use the same pattern as f2fs + ext4 where the kobject destruction must complete before allowing the FS itself to be freed. This means that we need an explicit free_sbd in the callers.
Cc: Bob Peterson rpeterso@redhat.com Cc: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Jamie Iles jamie@nuviainc.com [Also go to fail_free when init_names fails.] Signed-off-by: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/gfs2/incore.h | 1 + fs/gfs2/ops_fstype.c | 22 +++++----------------- fs/gfs2/super.c | 1 + fs/gfs2/sys.c | 5 ++++- 4 files changed, 11 insertions(+), 18 deletions(-)
diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h index ca2ec02436ec7..387e99d6eda9e 100644 --- a/fs/gfs2/incore.h +++ b/fs/gfs2/incore.h @@ -705,6 +705,7 @@ struct gfs2_sbd { struct super_block *sd_vfs; struct gfs2_pcpu_lkstats __percpu *sd_lkstats; struct kobject sd_kobj; + struct completion sd_kobj_unregister; unsigned long sd_flags; /* SDF_... */ struct gfs2_sb_host sd_sb;
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c index 6d18d2c91add2..5bd602a290f72 100644 --- a/fs/gfs2/ops_fstype.c +++ b/fs/gfs2/ops_fstype.c @@ -1062,26 +1062,14 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc) }
error = init_names(sdp, silent); - if (error) { - /* In this case, we haven't initialized sysfs, so we have to - manually free the sdp. */ - free_sbd(sdp); - sb->s_fs_info = NULL; - return error; - } + if (error) + goto fail_free;
snprintf(sdp->sd_fsname, sizeof(sdp->sd_fsname), "%s", sdp->sd_table_name);
error = gfs2_sys_fs_add(sdp); - /* - * If we hit an error here, gfs2_sys_fs_add will have called function - * kobject_put which causes the sysfs usage count to go to zero, which - * causes sysfs to call function gfs2_sbd_release, which frees sdp. - * Subsequent error paths here will call gfs2_sys_fs_del, which also - * kobject_put to free sdp. - */ if (error) - return error; + goto fail_free;
gfs2_create_debugfs_file(sdp);
@@ -1179,9 +1167,9 @@ fail_lm: gfs2_lm_unmount(sdp); fail_debug: gfs2_delete_debugfs_file(sdp); - /* gfs2_sys_fs_del must be the last thing we do, since it causes - * sysfs to call function gfs2_sbd_release, which frees sdp. */ gfs2_sys_fs_del(sdp); +fail_free: + free_sbd(sdp); sb->s_fs_info = NULL; return error; } diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c index 9f4d9e7be8397..a28cf447b6b12 100644 --- a/fs/gfs2/super.c +++ b/fs/gfs2/super.c @@ -736,6 +736,7 @@ restart:
/* At this point, we're through participating in the lockspace */ gfs2_sys_fs_del(sdp); + free_sbd(sdp); }
/** diff --git a/fs/gfs2/sys.c b/fs/gfs2/sys.c index d28c41bd69b05..c3e72dba7418a 100644 --- a/fs/gfs2/sys.c +++ b/fs/gfs2/sys.c @@ -303,7 +303,7 @@ static void gfs2_sbd_release(struct kobject *kobj) { struct gfs2_sbd *sdp = container_of(kobj, struct gfs2_sbd, sd_kobj);
- free_sbd(sdp); + complete(&sdp->sd_kobj_unregister); }
static struct kobj_type gfs2_ktype = { @@ -655,6 +655,7 @@ int gfs2_sys_fs_add(struct gfs2_sbd *sdp) sprintf(ro, "RDONLY=%d", sb_rdonly(sb)); sprintf(spectator, "SPECTATOR=%d", sdp->sd_args.ar_spectator ? 1 : 0);
+ init_completion(&sdp->sd_kobj_unregister); sdp->sd_kobj.kset = gfs2_kset; error = kobject_init_and_add(&sdp->sd_kobj, &gfs2_ktype, NULL, "%s", sdp->sd_table_name); @@ -685,6 +686,7 @@ fail_tune: fail_reg: fs_err(sdp, "error %d adding sysfs files\n", error); kobject_put(&sdp->sd_kobj); + wait_for_completion(&sdp->sd_kobj_unregister); sb->s_fs_info = NULL; return error; } @@ -695,6 +697,7 @@ void gfs2_sys_fs_del(struct gfs2_sbd *sdp) sysfs_remove_group(&sdp->sd_kobj, &tune_group); sysfs_remove_group(&sdp->sd_kobj, &lock_module_group); kobject_put(&sdp->sd_kobj); + wait_for_completion(&sdp->sd_kobj_unregister); }
static int gfs2_uevent(struct kset *kset, struct kobject *kobj,
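The lifetime rule applied above is the same one ext4 and f2fs use: the kobject release callback only signals a completion, and the owner waits for that completion before freeing the containing structure, so sysfs can never touch freed memory. A minimal sketch with hypothetical names (the release function is assumed to be wired up as the kobj_type's .release):

#include <linux/completion.h>
#include <linux/kobject.h>
#include <linux/slab.h>

struct example_sbd {
        struct kobject kobj;
        struct completion kobj_unregister;
        /* ... */
};

static void example_kobj_release(struct kobject *kobj)
{
        struct example_sbd *sdp = container_of(kobj, struct example_sbd, kobj);

        /* Do not free here; just tell the owner that sysfs is done with us. */
        complete(&sdp->kobj_unregister);
}

static void example_teardown(struct example_sbd *sdp)
{
        kobject_put(&sdp->kobj);
        wait_for_completion(&sdp->kobj_unregister);
        kfree(sdp);     /* now provably unreachable from sysfs */
}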
From: Anant Thazhemadam anant.thazhemadam@gmail.com
[ Upstream commit 0ddc5154b24c96f20e94d653b0a814438de6032b ]
In gfs2_check_sb(), no validation checks are performed on the block size recorded in the superblock. syzkaller detected a slab-out-of-bounds bug that was primarily caused by the block size in a superblock being set to zero. A valid block size is a power of 2 between 512 and PAGE_SIZE. Performing these validation checks and ensuring that the recorded block size is valid fixes this bug.
Reported-by: syzbot+af90d47a37376844e731@syzkaller.appspotmail.com Tested-by: syzbot+af90d47a37376844e731@syzkaller.appspotmail.com Suggested-by: Andrew Price anprice@redhat.com Signed-off-by: Anant Thazhemadam anant.thazhemadam@gmail.com [Minor code reordering.] Signed-off-by: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/gfs2/ops_fstype.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c index 5bd602a290f72..03c33fc03c055 100644 --- a/fs/gfs2/ops_fstype.c +++ b/fs/gfs2/ops_fstype.c @@ -169,15 +169,19 @@ static int gfs2_check_sb(struct gfs2_sbd *sdp, int silent) return -EINVAL; }
- /* If format numbers match exactly, we're done. */ - - if (sb->sb_fs_format == GFS2_FORMAT_FS && - sb->sb_multihost_format == GFS2_FORMAT_MULTI) - return 0; + if (sb->sb_fs_format != GFS2_FORMAT_FS || + sb->sb_multihost_format != GFS2_FORMAT_MULTI) { + fs_warn(sdp, "Unknown on-disk format, unable to mount\n"); + return -EINVAL; + }
- fs_warn(sdp, "Unknown on-disk format, unable to mount\n"); + if (sb->sb_bsize < 512 || sb->sb_bsize > PAGE_SIZE || + (sb->sb_bsize & (sb->sb_bsize - 1))) { + pr_warn("Invalid superblock size\n"); + return -EINVAL; + }
- return -EINVAL; + return 0; }
static void end_bio_io_page(struct bio *bio)
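The validity rule above (a power of two between 512 and PAGE_SIZE) reduces to a single expression. A standalone sketch, assuming 4 KiB pages purely for the example:

#include <stdbool.h>
#include <stdint.h>

#define EXAMPLE_PAGE_SIZE 4096u         /* assumption for the example only */

static bool sb_block_size_valid(uint32_t bsize)
{
        return bsize >= 512 && bsize <= EXAMPLE_PAGE_SIZE &&
               (bsize & (bsize - 1)) == 0;      /* power of two */
}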
From: Rohith Surabattula rohiths@microsoft.com
[ Upstream commit 8e670f77c4a55013db6d23b962f9bf6673a5e7b6 ]
Currently STATUS_IO_TIMEOUT is not treated as a retryable error. It is mapped to ETIMEDOUT and returned to userspace for most system calls. STATUS_IO_TIMEOUT is returned by the server in case of unavailability or throttling errors.

This patch maps STATUS_IO_TIMEOUT to EAGAIN so that it can be retried. Also, add a check to drop the connection, so that the server is not overloaded in case of ongoing unavailability.
Signed-off-by: Rohith Surabattula rohiths@microsoft.com Reviewed-by: Aurelien Aptel aaptel@suse.com Reviewed-by: Pavel Shilovsky pshilov@microsoft.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/cifs/cifsglob.h | 2 ++ fs/cifs/connect.c | 15 ++++++++++++++- fs/cifs/smb2maperror.c | 2 +- fs/cifs/smb2ops.c | 15 +++++++++++++++ 4 files changed, 32 insertions(+), 2 deletions(-)
diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h index b565d83ba89ed..5a491afafacc7 100644 --- a/fs/cifs/cifsglob.h +++ b/fs/cifs/cifsglob.h @@ -510,6 +510,8 @@ struct smb_version_operations { struct fiemap_extent_info *, u64, u64); /* version specific llseek implementation */ loff_t (*llseek)(struct file *, struct cifs_tcon *, loff_t, int); + /* Check for STATUS_IO_TIMEOUT */ + bool (*is_status_io_timeout)(char *buf); };
struct smb_version_values { diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c index 9817a31a39db6..b8780a79a42a2 100644 --- a/fs/cifs/connect.c +++ b/fs/cifs/connect.c @@ -69,6 +69,9 @@ extern bool disable_legacy_dialects; #define TLINK_ERROR_EXPIRE (1 * HZ) #define TLINK_IDLE_EXPIRE (600 * HZ)
+/* Drop the connection to not overload the server */ +#define NUM_STATUS_IO_TIMEOUT 5 + enum { /* Mount options that take no arguments */ Opt_user_xattr, Opt_nouser_xattr, @@ -1117,7 +1120,7 @@ cifs_demultiplex_thread(void *p) struct task_struct *task_to_wake = NULL; struct mid_q_entry *mids[MAX_COMPOUND]; char *bufs[MAX_COMPOUND]; - unsigned int noreclaim_flag; + unsigned int noreclaim_flag, num_io_timeout = 0;
noreclaim_flag = memalloc_noreclaim_save(); cifs_dbg(FYI, "Demultiplex PID: %d\n", task_pid_nr(current)); @@ -1213,6 +1216,16 @@ next_pdu: continue; }
+ if (server->ops->is_status_io_timeout && + server->ops->is_status_io_timeout(buf)) { + num_io_timeout++; + if (num_io_timeout > NUM_STATUS_IO_TIMEOUT) { + cifs_reconnect(server); + num_io_timeout = 0; + continue; + } + } + server->lstrp = jiffies;
for (i = 0; i < num_mids; i++) { diff --git a/fs/cifs/smb2maperror.c b/fs/cifs/smb2maperror.c index 7fde3775cb574..b004cf87692a7 100644 --- a/fs/cifs/smb2maperror.c +++ b/fs/cifs/smb2maperror.c @@ -488,7 +488,7 @@ static const struct status_to_posix_error smb2_error_map_table[] = { {STATUS_PIPE_CONNECTED, -EIO, "STATUS_PIPE_CONNECTED"}, {STATUS_PIPE_LISTENING, -EIO, "STATUS_PIPE_LISTENING"}, {STATUS_INVALID_READ_MODE, -EIO, "STATUS_INVALID_READ_MODE"}, - {STATUS_IO_TIMEOUT, -ETIMEDOUT, "STATUS_IO_TIMEOUT"}, + {STATUS_IO_TIMEOUT, -EAGAIN, "STATUS_IO_TIMEOUT"}, {STATUS_FILE_FORCED_CLOSED, -EIO, "STATUS_FILE_FORCED_CLOSED"}, {STATUS_PROFILING_NOT_STARTED, -EIO, "STATUS_PROFILING_NOT_STARTED"}, {STATUS_PROFILING_NOT_STOPPED, -EIO, "STATUS_PROFILING_NOT_STOPPED"}, diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c index 09e1cd320ee56..e2e53652193e6 100644 --- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -2346,6 +2346,17 @@ smb2_is_session_expired(char *buf) return true; }
+static bool +smb2_is_status_io_timeout(char *buf) +{ + struct smb2_sync_hdr *shdr = (struct smb2_sync_hdr *)buf; + + if (shdr->Status == STATUS_IO_TIMEOUT) + return true; + else + return false; +} + static int smb2_oplock_response(struct cifs_tcon *tcon, struct cifs_fid *fid, struct cifsInodeInfo *cinode) @@ -4816,6 +4827,7 @@ struct smb_version_operations smb20_operations = { .make_node = smb2_make_node, .fiemap = smb3_fiemap, .llseek = smb3_llseek, + .is_status_io_timeout = smb2_is_status_io_timeout, };
struct smb_version_operations smb21_operations = { @@ -4916,6 +4928,7 @@ struct smb_version_operations smb21_operations = { .make_node = smb2_make_node, .fiemap = smb3_fiemap, .llseek = smb3_llseek, + .is_status_io_timeout = smb2_is_status_io_timeout, };
struct smb_version_operations smb30_operations = { @@ -5026,6 +5039,7 @@ struct smb_version_operations smb30_operations = { .make_node = smb2_make_node, .fiemap = smb3_fiemap, .llseek = smb3_llseek, + .is_status_io_timeout = smb2_is_status_io_timeout, };
struct smb_version_operations smb311_operations = { @@ -5137,6 +5151,7 @@ struct smb_version_operations smb311_operations = { .make_node = smb2_make_node, .fiemap = smb3_fiemap, .llseek = smb3_llseek, + .is_status_io_timeout = smb2_is_status_io_timeout, };
struct smb_version_values smb20_values = {
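The throttling added to the demultiplex thread can be read as: count STATUS_IO_TIMEOUT responses and reconnect once more than NUM_STATUS_IO_TIMEOUT have been seen, resetting the counter only after the reconnect. An illustrative sketch (handle_response() is hypothetical, and the counter is file-scope here for brevity, whereas the real code keeps it local to the demultiplex thread):

#define NUM_STATUS_IO_TIMEOUT 5

static unsigned int num_io_timeout;

static void handle_response(struct TCP_Server_Info *server, char *buf)
{
        if (server->ops->is_status_io_timeout &&
            server->ops->is_status_io_timeout(buf)) {
                num_io_timeout++;
                if (num_io_timeout > NUM_STATUS_IO_TIMEOUT) {
                        cifs_reconnect(server);
                        num_io_timeout = 0;
                        return; /* skip normal processing of this buffer */
                }
        }
        /* ...normal mid processing continues here... */
}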
From: Ronnie Sahlberg lsahlber@redhat.com
[ Upstream commit c6cc4c5a72505a0ecefc9b413f16bec512f38078 ]
RHBZ: 1848178
Some calls that set attributes, like utimensat(), are not supposed to return -EINTR and thus have no handlers for it in glibc. This causes us to leak -EINTR to the applications, which are also unprepared to handle it.

For example, tar will break if utimensat() returns -EINTR and will abort unpacking the archive. Other applications may break too.

To handle this, we add checks for -EINTR, and retry, in cifs_setattr().
Signed-off-by: Ronnie Sahlberg lsahlber@redhat.com Signed-off-by: Steve French stfrench@microsoft.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/cifs/inode.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c index 1f75b25e559a7..daec31be85718 100644 --- a/fs/cifs/inode.c +++ b/fs/cifs/inode.c @@ -2883,13 +2883,18 @@ cifs_setattr(struct dentry *direntry, struct iattr *attrs) { struct cifs_sb_info *cifs_sb = CIFS_SB(direntry->d_sb); struct cifs_tcon *pTcon = cifs_sb_master_tcon(cifs_sb); + int rc, retries = 0;
- if (pTcon->unix_ext) - return cifs_setattr_unix(direntry, attrs); - - return cifs_setattr_nounix(direntry, attrs); + do { + if (pTcon->unix_ext) + rc = cifs_setattr_unix(direntry, attrs); + else + rc = cifs_setattr_nounix(direntry, attrs); + retries++; + } while (is_retryable_error(rc) && retries < 2);
/* BB: add cifs_setattr_legacy for really old servers */ + return rc; }
#if 0
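The shape of the fix is a bounded retry loop around the existing setattr helpers: if the lower layer reports a retryable error such as -EINTR or -EAGAIN, try once more instead of returning it to userspace. A condensed, hypothetical form (do_setattr_once() stands in for the unix/nounix helpers; is_retryable_error() is the existing CIFS helper):

static int example_setattr(struct dentry *dentry, struct iattr *attrs)
{
        int rc, retries = 0;

        do {
                rc = do_setattr_once(dentry, attrs);    /* hypothetical helper */
                retries++;
        } while (is_retryable_error(rc) && retries < 2);

        return rc;
}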
From: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com
[ Upstream commit 992d7a8b88c83c05664b649fc54501ce58e19132 ]
Add the full-pwr-cycle-in-suspend property to perform a graceful shutdown of the eMMC device during system suspend.
Signed-off-by: Yoshihiro Shimoda yoshihiro.shimoda.uh@renesas.com Link: https://lore.kernel.org/r/1594989201-24228-1-git-send-email-yoshihiro.shimod... Signed-off-by: Geert Uytterhoeven geert+renesas@glider.be Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/renesas/ulcb.dtsi | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/arm64/boot/dts/renesas/ulcb.dtsi b/arch/arm64/boot/dts/renesas/ulcb.dtsi index ff88af8e39d3f..a2e085db87c53 100644 --- a/arch/arm64/boot/dts/renesas/ulcb.dtsi +++ b/arch/arm64/boot/dts/renesas/ulcb.dtsi @@ -469,6 +469,7 @@ mmc-hs200-1_8v; mmc-hs400-1_8v; non-removable; + full-pwr-cycle-in-suspend; status = "okay"; };
From: Tony Lindgren tony@atomide.com
[ Upstream commit 19d3e9a0bdd57b90175f30390edeb06851f5f9f3 ]
We currently have a different clock rate for droid4 compared to the stock v3.0.8 based Android Linux kernel:
# cat /sys/kernel/debug/clk/dpll_*_m7x2_ck/clk_rate 266666667 307200000 # cat /sys/kernel/debug/clk/l3_gfx_cm:clk:0000:0/clk_rate 307200000
Let's fix this by configuring sgx to use 153.6 MHz instead of 307.2 MHz. It looks like at least duover also needs this change to avoid hangs, so let's apply it for all 4430.
This helps a bit with thermal issues that seem to be related to memory corruption when using sgx. It seems that other driver related issues still remain though.
Cc: Arthur Demchenkov spinal.by@gmail.com Cc: Merlijn Wajer merlijn@wizzup.org Cc: Sebastian Reichel sre@kernel.org Signed-off-by: Tony Lindgren tony@atomide.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/omap4.dtsi | 2 +- arch/arm/boot/dts/omap443x.dtsi | 10 ++++++++++ 2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/arm/boot/dts/omap4.dtsi b/arch/arm/boot/dts/omap4.dtsi index 0282b9de3384f..52e8298275050 100644 --- a/arch/arm/boot/dts/omap4.dtsi +++ b/arch/arm/boot/dts/omap4.dtsi @@ -410,7 +410,7 @@ status = "disabled"; };
- target-module@56000000 { + sgx_module: target-module@56000000 { compatible = "ti,sysc-omap4", "ti,sysc"; reg = <0x5600fe00 0x4>, <0x5600fe10 0x4>; diff --git a/arch/arm/boot/dts/omap443x.dtsi b/arch/arm/boot/dts/omap443x.dtsi index 8ed510ab00c52..cb309743de5da 100644 --- a/arch/arm/boot/dts/omap443x.dtsi +++ b/arch/arm/boot/dts/omap443x.dtsi @@ -74,3 +74,13 @@ };
/include/ "omap443x-clocks.dtsi" + +/* + * Use dpll_per for sgx at 153.6MHz like droid4 stock v3.0.8 Android kernel + */ +&sgx_module { + assigned-clocks = <&l3_gfx_clkctrl OMAP4_GPU_CLKCTRL 24>, + <&dpll_per_m7x2_ck>; + assigned-clock-rates = <0>, <153600000>; + assigned-clock-parents = <&dpll_per_m7x2_ck>; +};
From: Dan Carpenter dan.carpenter@oracle.com
[ Upstream commit fd22781648080cc400772b3c68aa6b059d2d5420 ]
Callers are generally not supposed to check the return values from debugfs functions. Debugfs functions never return NULL so this error handling will never trigger. (Historically debugfs functions used to return a mix of NULL and error pointers but it was eventually deemed too complicated for something which wasn't intended to be used in normal situations).
Delete all the error handling.
Signed-off-by: Dan Carpenter dan.carpenter@oracle.com Acked-by: Santosh Shilimkar ssantosh@kernel.org Link: https://lore.kernel.org/r/20200826113759.GF393664@mwanda Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/memory/emif.c | 33 +++++---------------------------- 1 file changed, 5 insertions(+), 28 deletions(-)
diff --git a/drivers/memory/emif.c b/drivers/memory/emif.c index bb6a71d267988..5c4d8319c9cfb 100644 --- a/drivers/memory/emif.c +++ b/drivers/memory/emif.c @@ -163,35 +163,12 @@ static const struct file_operations emif_mr4_fops = {
static int __init_or_module emif_debugfs_init(struct emif_data *emif) { - struct dentry *dentry; - int ret; - - dentry = debugfs_create_dir(dev_name(emif->dev), NULL); - if (!dentry) { - ret = -ENOMEM; - goto err0; - } - emif->debugfs_root = dentry; - - dentry = debugfs_create_file("regcache_dump", S_IRUGO, - emif->debugfs_root, emif, &emif_regdump_fops); - if (!dentry) { - ret = -ENOMEM; - goto err1; - } - - dentry = debugfs_create_file("mr4", S_IRUGO, - emif->debugfs_root, emif, &emif_mr4_fops); - if (!dentry) { - ret = -ENOMEM; - goto err1; - } - + emif->debugfs_root = debugfs_create_dir(dev_name(emif->dev), NULL); + debugfs_create_file("regcache_dump", S_IRUGO, emif->debugfs_root, emif, + &emif_regdump_fops); + debugfs_create_file("mr4", S_IRUGO, emif->debugfs_root, emif, + &emif_mr4_fops); return 0; -err1: - debugfs_remove_recursive(emif->debugfs_root); -err0: - return ret; }
static void __exit emif_debugfs_exit(struct emif_data *emif)
From: Jonathan Bakker xc-racer2@live.ca
[ Upstream commit cd972fe90008adf49de0790250c1275480ac5cdc ]
Both the Galaxy S and the Fascinate4G have a WM8994 codec, but they differ slightly in their jack detection and micbias configuration.
Signed-off-by: Jonathan Bakker xc-racer2@live.ca Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/s5pv210-aries.dtsi | 10 +++ arch/arm/boot/dts/s5pv210-fascinate4g.dts | 98 +++++++++++++++++++++++ arch/arm/boot/dts/s5pv210-galaxys.dts | 85 ++++++++++++++++++++ 3 files changed, 193 insertions(+)
diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi index 822207f63ee0a..a3f83f668ce14 100644 --- a/arch/arm/boot/dts/s5pv210-aries.dtsi +++ b/arch/arm/boot/dts/s5pv210-aries.dtsi @@ -47,6 +47,11 @@ }; };
+ bt_codec: bt_sco { + compatible = "linux,bt-sco"; + #sound-dai-cells = <0>; + }; + vibrator_pwr: regulator-fixed-0 { compatible = "regulator-fixed"; regulator-name = "vibrator-en"; @@ -624,6 +629,11 @@ }; };
+&i2s0 { + dmas = <&pdma0 9>, <&pdma0 10>, <&pdma0 11>; + status = "okay"; +}; + &mfc { memory-region = <&mfc_left>, <&mfc_right>; }; diff --git a/arch/arm/boot/dts/s5pv210-fascinate4g.dts b/arch/arm/boot/dts/s5pv210-fascinate4g.dts index 65eed01cfced1..ca064359dd308 100644 --- a/arch/arm/boot/dts/s5pv210-fascinate4g.dts +++ b/arch/arm/boot/dts/s5pv210-fascinate4g.dts @@ -35,6 +35,80 @@ linux,code = <KEY_VOLUMEUP>; }; }; + + headset_micbias_reg: regulator-fixed-3 { + compatible = "regulator-fixed"; + regulator-name = "Headset_Micbias"; + gpio = <&gpj2 5 GPIO_ACTIVE_HIGH>; + enable-active-high; + + pinctrl-names = "default"; + pinctrl-0 = <&headset_micbias_ena>; + }; + + main_micbias_reg: regulator-fixed-4 { + compatible = "regulator-fixed"; + regulator-name = "Main_Micbias"; + gpio = <&gpj4 2 GPIO_ACTIVE_HIGH>; + enable-active-high; + + pinctrl-names = "default"; + pinctrl-0 = <&main_micbias_ena>; + }; + + sound { + compatible = "samsung,fascinate4g-wm8994"; + + model = "Fascinate4G"; + + extcon = <&fsa9480>; + + main-micbias-supply = <&main_micbias_reg>; + headset-micbias-supply = <&headset_micbias_reg>; + + earpath-sel-gpios = <&gpj2 6 GPIO_ACTIVE_HIGH>; + + io-channels = <&adc 3>; + io-channel-names = "headset-detect"; + headset-detect-gpios = <&gph0 6 GPIO_ACTIVE_HIGH>; + headset-key-gpios = <&gph3 6 GPIO_ACTIVE_HIGH>; + + samsung,audio-routing = + "HP", "HPOUT1L", + "HP", "HPOUT1R", + + "SPK", "SPKOUTLN", + "SPK", "SPKOUTLP", + + "RCV", "HPOUT2N", + "RCV", "HPOUT2P", + + "LINE", "LINEOUT2N", + "LINE", "LINEOUT2P", + + "IN1LP", "Main Mic", + "IN1LN", "Main Mic", + + "IN1RP", "Headset Mic", + "IN1RN", "Headset Mic", + + "Modem Out", "Modem TX", + "Modem RX", "Modem In", + + "Bluetooth SPK", "TX", + "RX", "Bluetooth Mic"; + + pinctrl-names = "default"; + pinctrl-0 = <&headset_det &earpath_sel>; + + cpu { + sound-dai = <&i2s0>, <&bt_codec>; + }; + + codec { + sound-dai = <&wm8994>; + }; + }; };
&fg { @@ -51,6 +125,12 @@ pinctrl-names = "default"; pinctrl-0 = <&sleep_cfg>;
+ headset_det: headset-det { + samsung,pins = "gph0-6", "gph3-6"; + samsung,pin-function = <EXYNOS_PIN_FUNC_F>; + samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>; + }; + fg_irq: fg-irq { samsung,pins = "gph3-3"; samsung,pin-function = <EXYNOS_PIN_FUNC_F>; @@ -58,6 +138,24 @@ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>; };
+ headset_micbias_ena: headset-micbias-ena { + samsung,pins = "gpj2-5"; + samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>; + samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>; + }; + + earpath_sel: earpath-sel { + samsung,pins = "gpj2-6"; + samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>; + samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>; + }; + + main_micbias_ena: main-micbias-ena { + samsung,pins = "gpj4-2"; + samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>; + samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>; + }; + /* Based on vendor kernel v2.6.35.7 */ sleep_cfg: sleep-cfg { PIN_SLP(gpa0-0, PREV, NONE); diff --git a/arch/arm/boot/dts/s5pv210-galaxys.dts b/arch/arm/boot/dts/s5pv210-galaxys.dts index 5d10dd67eacc5..560f830b6f6be 100644 --- a/arch/arm/boot/dts/s5pv210-galaxys.dts +++ b/arch/arm/boot/dts/s5pv210-galaxys.dts @@ -72,6 +72,73 @@ pinctrl-0 = <&fm_irq &fm_rst>; }; }; + + micbias_reg: regulator-fixed-3 { + compatible = "regulator-fixed"; + regulator-name = "MICBIAS"; + gpio = <&gpj4 2 GPIO_ACTIVE_HIGH>; + enable-active-high; + + pinctrl-names = "default"; + pinctrl-0 = <&micbias_reg_ena>; + }; + + sound { + compatible = "samsung,aries-wm8994"; + + model = "Aries"; + + extcon = <&fsa9480>; + + main-micbias-supply = <&micbias_reg>; + headset-micbias-supply = <&micbias_reg>; + + earpath-sel-gpios = <&gpj2 6 GPIO_ACTIVE_HIGH>; + + io-channels = <&adc 3>; + io-channel-names = "headset-detect"; + headset-detect-gpios = <&gph0 6 GPIO_ACTIVE_LOW>; + headset-key-gpios = <&gph3 6 GPIO_ACTIVE_HIGH>; + + samsung,audio-routing = + "HP", "HPOUT1L", + "HP", "HPOUT1R", + + "SPK", "SPKOUTLN", + "SPK", "SPKOUTLP", + + "RCV", "HPOUT2N", + "RCV", "HPOUT2P", + + "LINE", "LINEOUT2N", + "LINE", "LINEOUT2P", + + "IN1LP", "Main Mic", + "IN1LN", "Main Mic", + + "IN1RP", "Headset Mic", + "IN1RN", "Headset Mic", + + "IN2LN", "FM In", + "IN2RN", "FM In", + + "Modem Out", "Modem TX", + "Modem RX", "Modem In", + + "Bluetooth SPK", "TX", + "RX", "Bluetooth Mic"; + + pinctrl-names = "default"; + pinctrl-0 = <&headset_det &earpath_sel>; + + cpu { + sound-dai = <&i2s0>, <&bt_codec>; + }; + + codec { + sound-dai = <&wm8994>; + }; + }; };
&aliases { @@ -88,6 +155,12 @@ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>; };
+ headset_det: headset-det { + samsung,pins = "gph0-6", "gph3-6"; + samsung,pin-function = <EXYNOS_PIN_FUNC_F>; + samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>; + }; + fm_irq: fm-irq { samsung,pins = "gpj2-4"; samsung,pin-function = <EXYNOS_PIN_FUNC_INPUT>; @@ -102,6 +175,12 @@ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>; };
+ earpath_sel: earpath-sel { + samsung,pins = "gpj2-6"; + samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>; + samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>; + }; + massmemory_en: massmemory-en { samsung,pins = "gpj2-7"; samsung,pin-function = <EXYNOS_PIN_FUNC_OUTPUT>; @@ -109,6 +188,12 @@ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>; };
+ micbias_reg_ena: micbias-reg-ena { + samsung,pins = "gpj4-2"; + samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>; + samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>; + }; + /* Based on CyanogenMod 3.0.101 kernel */ sleep_cfg: sleep-cfg { PIN_SLP(gpa0-0, PREV, NONE);
From: Krzysztof Kozlowski krzk@kernel.org
[ Upstream commit ea4e792f3c8931fffec4d700cf6197d84e9f35a6 ]
There is no need to keep DMA controller nodes under AMBA bus node. Remove the "amba" node to fix dtschema warnings like:
amba: $nodename:0: 'amba' does not match '^([a-z][a-z0-9\-]+-bus|bus|soc|axi|ahb|apb)(@[0-9a-f]+)?$'
Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Tested-by: Jonathan Bakker xc-racer2@live.ca Link: https://lore.kernel.org/r/20200907161141.31034-6-krzk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/s5pv210.dtsi | 49 +++++++++++++++------------------- 1 file changed, 21 insertions(+), 28 deletions(-)
diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi index 1b0ee884e91db..84e4447931de5 100644 --- a/arch/arm/boot/dts/s5pv210.dtsi +++ b/arch/arm/boot/dts/s5pv210.dtsi @@ -128,35 +128,28 @@ }; };
- amba { - #address-cells = <1>; - #size-cells = <1>; - compatible = "simple-bus"; - ranges; - - pdma0: dma@e0900000 { - compatible = "arm,pl330", "arm,primecell"; - reg = <0xe0900000 0x1000>; - interrupt-parent = <&vic0>; - interrupts = <19>; - clocks = <&clocks CLK_PDMA0>; - clock-names = "apb_pclk"; - #dma-cells = <1>; - #dma-channels = <8>; - #dma-requests = <32>; - }; + pdma0: dma@e0900000 { + compatible = "arm,pl330", "arm,primecell"; + reg = <0xe0900000 0x1000>; + interrupt-parent = <&vic0>; + interrupts = <19>; + clocks = <&clocks CLK_PDMA0>; + clock-names = "apb_pclk"; + #dma-cells = <1>; + #dma-channels = <8>; + #dma-requests = <32>; + };
- pdma1: dma@e0a00000 { - compatible = "arm,pl330", "arm,primecell"; - reg = <0xe0a00000 0x1000>; - interrupt-parent = <&vic0>; - interrupts = <20>; - clocks = <&clocks CLK_PDMA1>; - clock-names = "apb_pclk"; - #dma-cells = <1>; - #dma-channels = <8>; - #dma-requests = <32>; - }; + pdma1: dma@e0a00000 { + compatible = "arm,pl330", "arm,primecell"; + reg = <0xe0a00000 0x1000>; + interrupt-parent = <&vic0>; + interrupts = <20>; + clocks = <&clocks CLK_PDMA1>; + clock-names = "apb_pclk"; + #dma-cells = <1>; + #dma-channels = <8>; + #dma-requests = <32>; };
adc: adc@e1700000 {
From: Krzysztof Kozlowski krzk@kernel.org
[ Upstream commit d38cae370e5f2094cbc38db3082b8e9509ae52ce ]
The fixed clocks are kept under a dedicated 'external-clocks' node, and thus a fake 'reg' was added. This is not correct with dtschema, as the fixed-clock binding does not have a 'reg' property. Moving the fixed clocks out of 'soc' to the root node fixes multiple dtbs_check warnings:
external-clocks: $nodename:0: 'external-clocks' does not match '^([a-z][a-z0-9\-]+-bus|bus|soc|axi|ahb|apb)(@[0-9a-f]+)?$' external-clocks: #size-cells:0:0: 0 is not one of [1, 2] external-clocks: oscillator@0:reg:0: [0] is too short external-clocks: oscillator@1:reg:0: [1] is too short external-clocks: 'ranges' is a required property oscillator@0: 'reg' does not match any of the regexes: 'pinctrl-[0-9]+'
Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Tested-by: Jonathan Bakker xc-racer2@live.ca Link: https://lore.kernel.org/r/20200907161141.31034-7-krzk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/s5pv210.dtsi | 36 +++++++++++++--------------------- 1 file changed, 14 insertions(+), 22 deletions(-)
diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi index 84e4447931de5..5c760a6d79557 100644 --- a/arch/arm/boot/dts/s5pv210.dtsi +++ b/arch/arm/boot/dts/s5pv210.dtsi @@ -52,34 +52,26 @@ }; };
+ xxti: oscillator-0 { + compatible = "fixed-clock"; + clock-frequency = <0>; + clock-output-names = "xxti"; + #clock-cells = <0>; + }; + + xusbxti: oscillator-1 { + compatible = "fixed-clock"; + clock-frequency = <0>; + clock-output-names = "xusbxti"; + #clock-cells = <0>; + }; + soc { compatible = "simple-bus"; #address-cells = <1>; #size-cells = <1>; ranges;
- external-clocks { - compatible = "simple-bus"; - #address-cells = <1>; - #size-cells = <0>; - - xxti: oscillator@0 { - compatible = "fixed-clock"; - reg = <0>; - clock-frequency = <0>; - clock-output-names = "xxti"; - #clock-cells = <0>; - }; - - xusbxti: oscillator@1 { - compatible = "fixed-clock"; - reg = <1>; - clock-frequency = <0>; - clock-output-names = "xusbxti"; - #clock-cells = <0>; - }; - }; - onenand: onenand@b0600000 { compatible = "samsung,s5pv210-onenand"; reg = <0xb0600000 0x2000>,
From: Krzysztof Kozlowski krzk@kernel.org
[ Upstream commit bb98fff84ad1ea321823759edaba573a16fa02bd ]
The Power Management Unit (PMU) is a separate device which has little in common with the clock controller. Moving it one level up (from clock controller child to SoC) allows removing the fake simple-bus compatible and fixes dtbs_check warnings like:
clock-controller@e0100000: $nodename:0: 'clock-controller@e0100000' does not match '^([a-z][a-z0-9\-]+-bus|bus|soc|axi|ahb|apb)(@[0-9a-f]+)?$'
Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Tested-by: Jonathan Bakker xc-racer2@live.ca Link: https://lore.kernel.org/r/20200907161141.31034-8-krzk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/s5pv210.dtsi | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi index 5c760a6d79557..46221a5c8ce59 100644 --- a/arch/arm/boot/dts/s5pv210.dtsi +++ b/arch/arm/boot/dts/s5pv210.dtsi @@ -92,19 +92,16 @@ };
clocks: clock-controller@e0100000 { - compatible = "samsung,s5pv210-clock", "simple-bus"; + compatible = "samsung,s5pv210-clock"; reg = <0xe0100000 0x10000>; clock-names = "xxti", "xusbxti"; clocks = <&xxti>, <&xusbxti>; #clock-cells = <1>; - #address-cells = <1>; - #size-cells = <1>; - ranges; + };
- pmu_syscon: syscon@e0108000 { - compatible = "samsung-s5pv210-pmu", "syscon"; - reg = <0xe0108000 0x8000>; - }; + pmu_syscon: syscon@e0108000 { + compatible = "samsung-s5pv210-pmu", "syscon"; + reg = <0xe0108000 0x8000>; };
pinctrl0: pinctrl@e0200000 {
From: Krzysztof Kozlowski krzk@kernel.org
[ Upstream commit 6c17a2974abf68a58517f75741b15c4aba42b4b8 ]
The 'audio-subsystem' node is an artificial creation, not representing real hardware. The hardware is described by its child nodes - the AUDSS clock controller and I2S0.
Remove the 'audio-subsystem' node along with its undocumented compatible to fix dtbs_check warnings like:
audio-subsystem: $nodename:0: 'audio-subsystem' does not match '^([a-z][a-z0-9\-]+-bus|bus|soc|axi|ahb|apb)(@[0-9a-f]+)?$'
Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Tested-by: Jonathan Bakker xc-racer2@live.ca Link: https://lore.kernel.org/r/20200907161141.31034-9-krzk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/s5pv210.dtsi | 65 +++++++++++++++------------------- 1 file changed, 29 insertions(+), 36 deletions(-)
diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi index 46221a5c8ce59..2871351ab9074 100644 --- a/arch/arm/boot/dts/s5pv210.dtsi +++ b/arch/arm/boot/dts/s5pv210.dtsi @@ -223,43 +223,36 @@ status = "disabled"; };
- audio-subsystem { - compatible = "samsung,s5pv210-audss", "simple-bus"; - #address-cells = <1>; - #size-cells = <1>; - ranges; - - clk_audss: clock-controller@eee10000 { - compatible = "samsung,s5pv210-audss-clock"; - reg = <0xeee10000 0x1000>; - clock-names = "hclk", "xxti", - "fout_epll", - "sclk_audio0"; - clocks = <&clocks DOUT_HCLKP>, <&xxti>, - <&clocks FOUT_EPLL>, - <&clocks SCLK_AUDIO0>; - #clock-cells = <1>; - }; + clk_audss: clock-controller@eee10000 { + compatible = "samsung,s5pv210-audss-clock"; + reg = <0xeee10000 0x1000>; + clock-names = "hclk", "xxti", + "fout_epll", + "sclk_audio0"; + clocks = <&clocks DOUT_HCLKP>, <&xxti>, + <&clocks FOUT_EPLL>, + <&clocks SCLK_AUDIO0>; + #clock-cells = <1>; + };
- i2s0: i2s@eee30000 { - compatible = "samsung,s5pv210-i2s"; - reg = <0xeee30000 0x1000>; - interrupt-parent = <&vic2>; - interrupts = <16>; - dma-names = "rx", "tx", "tx-sec"; - dmas = <&pdma1 9>, <&pdma1 10>, <&pdma1 11>; - clock-names = "iis", - "i2s_opclk0", - "i2s_opclk1"; - clocks = <&clk_audss CLK_I2S>, - <&clk_audss CLK_I2S>, - <&clk_audss CLK_DOUT_AUD_BUS>; - samsung,idma-addr = <0xc0010000>; - pinctrl-names = "default"; - pinctrl-0 = <&i2s0_bus>; - #sound-dai-cells = <0>; - status = "disabled"; - }; + i2s0: i2s@eee30000 { + compatible = "samsung,s5pv210-i2s"; + reg = <0xeee30000 0x1000>; + interrupt-parent = <&vic2>; + interrupts = <16>; + dma-names = "rx", "tx", "tx-sec"; + dmas = <&pdma1 9>, <&pdma1 10>, <&pdma1 11>; + clock-names = "iis", + "i2s_opclk0", + "i2s_opclk1"; + clocks = <&clk_audss CLK_I2S>, + <&clk_audss CLK_I2S>, + <&clk_audss CLK_DOUT_AUD_BUS>; + samsung,idma-addr = <0xc0010000>; + pinctrl-names = "default"; + pinctrl-0 = <&i2s0_bus>; + #sound-dai-cells = <0>; + status = "disabled"; };
i2s1: i2s@e2100000 {
From: Krzysztof Kozlowski krzk@kernel.org
[ Upstream commit 086c4498b0cc87fdb09188f3da7056e898814948 ]
The S3C RTC requires a 32768 Hz clock as input, which is provided by the PMIC. However, there is no such clock provider, but rather a regulator driver which registers the clock as a regulator. This is an old driver which will not be updated, so add a workaround - a fixed-clock - to fill the missing clock phandle reference in the S3C RTC.
This fixes dtbs_check warnings:
rtc@e2800000: clocks: [[2, 145]] is too short rtc@e2800000: clock-names: ['rtc'] is too short
Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Tested-by: Jonathan Bakker xc-racer2@live.ca Link: https://lore.kernel.org/r/20200907161141.31034-12-krzk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/s5pv210-aries.dtsi | 12 ++++++++++++ 1 file changed, 12 insertions(+)
diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi index a3f83f668ce14..8a98b35b9b0de 100644 --- a/arch/arm/boot/dts/s5pv210-aries.dtsi +++ b/arch/arm/boot/dts/s5pv210-aries.dtsi @@ -47,6 +47,13 @@ }; };
+ pmic_ap_clk: clock-0 { + /* Workaround for missing clock on PMIC */ + compatible = "fixed-clock"; + #clock-cells = <0>; + clock-frequency = <32768>; + }; + bt_codec: bt_sco { compatible = "linux,bt-sco"; #sound-dai-cells = <0>; @@ -825,6 +832,11 @@ samsung,pwm-outputs = <1>; };
+&rtc { + clocks = <&clocks CLK_RTC>, <&pmic_ap_clk>; + clock-names = "rtc", "rtc_src"; +}; + &sdhci1 { #address-cells = <1>; #size-cells = <0>;
From: Krzysztof Kozlowski krzk@kernel.org
[ Upstream commit 1ed7f6d0bab2f1794f1eb4ed032e90575552fd21 ]
The device tree schema expects the SPI controller node to be named "spi", otherwise dtbs_check complains with a warning like:
spi-gpio-0: $nodename:0: 'spi-gpio-0' does not match '^spi(@.*|-[0-9a-f])*$'
Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Tested-by: Jonathan Bakker xc-racer2@live.ca Link: https://lore.kernel.org/r/20200907161141.31034-25-krzk@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm/boot/dts/s5pv210-aries.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi index 8a98b35b9b0de..3762098233c0c 100644 --- a/arch/arm/boot/dts/s5pv210-aries.dtsi +++ b/arch/arm/boot/dts/s5pv210-aries.dtsi @@ -545,7 +545,7 @@ value = <0x5200>; };
- spi_lcd: spi-gpio-0 { + spi_lcd: spi-2 { compatible = "spi-gpio"; #address-cells = <1>; #size-cells = <0>;
From: Stephen Boyd swboyd@chromium.org
[ Upstream commit 2bc20f3c8487bd5bc4dd9ad2c06d2ba05fd4e838 ]
The busy loop in rpmh_rsc_send_data() is written with the assumption that the udelay will be preempted by the tcs_tx_done() irq handler when the TCS slots are all full. This doesn't hold true when the calling thread is an irqthread and the tcs_tx_done() irq is also an irqthread. That's because kernel irqthreads are SCHED_FIFO and thus need to voluntarily give up priority by calling into the scheduler so that other threads can run.
I see RCU stalls when I boot with irqthreads on the kernel commandline because the modem remoteproc driver is trying to send an rpmh async message from an irqthread that needs to give up the CPU for the rpmh irqthread to run and clear out tcs slots.
rcu: INFO: rcu_preempt self-detected stall on CPU rcu: 0-....: (1 GPs behind) idle=402/1/0x4000000000000002 softirq=2108/2109 fqs=4920 (t=21016 jiffies g=2933 q=590) Task dump for CPU 0: irq/11-smp2p R running task 0 148 2 0x00000028 Call trace: dump_backtrace+0x0/0x154 show_stack+0x20/0x2c sched_show_task+0xfc/0x108 dump_cpu_task+0x44/0x50 rcu_dump_cpu_stacks+0xa4/0xf8 rcu_sched_clock_irq+0x7dc/0xaa8 update_process_times+0x30/0x54 tick_sched_handle+0x50/0x64 tick_sched_timer+0x4c/0x8c __hrtimer_run_queues+0x21c/0x36c hrtimer_interrupt+0xf0/0x22c arch_timer_handler_phys+0x40/0x50 handle_percpu_devid_irq+0x114/0x25c __handle_domain_irq+0x84/0xc4 gic_handle_irq+0xd0/0x178 el1_irq+0xbc/0x180 save_return_addr+0x18/0x28 return_address+0x54/0x88 preempt_count_sub+0x40/0x88 _raw_spin_unlock_irqrestore+0x4c/0x6c ___ratelimit+0xd0/0x128 rpmh_rsc_send_data+0x24c/0x378 __rpmh_write+0x1b0/0x208 rpmh_write_async+0x90/0xbc rpmhpd_send_corner+0x60/0x8c rpmhpd_aggregate_corner+0x8c/0x124 rpmhpd_set_performance_state+0x8c/0xbc _genpd_set_performance_state+0xdc/0x1b8 dev_pm_genpd_set_performance_state+0xb8/0xf8 q6v5_pds_disable+0x34/0x60 [qcom_q6v5_mss] qcom_msa_handover+0x38/0x44 [qcom_q6v5_mss] q6v5_handover_interrupt+0x24/0x3c [qcom_q6v5] handle_nested_irq+0xd0/0x138 qcom_smp2p_intr+0x188/0x200 irq_thread_fn+0x2c/0x70 irq_thread+0xfc/0x14c kthread+0x11c/0x12c ret_from_fork+0x10/0x18
This busy loop naturally lends itself to using a wait queue, so that each thread that tries to send a message will sleep waiting on the waitqueue and only be woken up when a free slot is available. This should make things more predictable too, because the scheduler will be able to put tasks to sleep while they wait for a free TCS instead of the busy loop we have today.
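As a minimal sketch of the pattern the diff below adopts (simplified, with TCS programming and error handling omitted; names follow the driver):

  /* inside rpmh_rsc_send_data(), with tcs_id and flags declared locally */
  spin_lock_irqsave(&drv->lock, flags);

  /* Sleep until a TCS can be claimed; drv->lock is dropped while
   * sleeping and re-acquired before the condition is re-checked. */
  wait_event_lock_irq(drv->tcs_wait,
                      (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,
                      drv->lock);

  /* ... mark the TCS in use, program it and trigger it ... */
  spin_unlock_irqrestore(&drv->lock, flags);

  /* and in the tx-done interrupt handler, once a slot is freed: */
  wake_up(&drv->tcs_wait);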
Reviewed-by: Maulik Shah mkshah@codeaurora.org Reviewed-by: Douglas Anderson dianders@chromium.org Tested-by: Stanimir Varbanov stanimir.varbanov@linaro.org Cc: Douglas Anderson dianders@chromium.org Cc: Maulik Shah mkshah@codeaurora.org Cc: Lina Iyer ilina@codeaurora.org Signed-off-by: Stephen Boyd swboyd@chromium.org Link: https://lore.kernel.org/r/20200724211711.810009-1-sboyd@kernel.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/qcom/rpmh-internal.h | 4 ++ drivers/soc/qcom/rpmh-rsc.c | 115 +++++++++++++++---------------- 2 files changed, 58 insertions(+), 61 deletions(-)
diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h index ef60e790a750a..344ba687c13be 100644 --- a/drivers/soc/qcom/rpmh-internal.h +++ b/drivers/soc/qcom/rpmh-internal.h @@ -8,6 +8,7 @@ #define __RPM_INTERNAL_H__
#include <linux/bitmap.h> +#include <linux/wait.h> #include <soc/qcom/tcs.h>
#define TCS_TYPE_NR 4 @@ -106,6 +107,8 @@ struct rpmh_ctrlr { * @lock: Synchronize state of the controller. If RPMH's cache * lock will also be held, the order is: drv->lock then * cache_lock. + * @tcs_wait: Wait queue used to wait for @tcs_in_use to free up a + * slot * @client: Handle to the DRV's client. */ struct rsc_drv { @@ -118,6 +121,7 @@ struct rsc_drv { struct tcs_group tcs[TCS_TYPE_NR]; DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR); spinlock_t lock; + wait_queue_head_t tcs_wait; struct rpmh_ctrlr client; };
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c index ae66757825813..a297911afe571 100644 --- a/drivers/soc/qcom/rpmh-rsc.c +++ b/drivers/soc/qcom/rpmh-rsc.c @@ -19,6 +19,7 @@ #include <linux/platform_device.h> #include <linux/slab.h> #include <linux/spinlock.h> +#include <linux/wait.h>
#include <soc/qcom/cmd-db.h> #include <soc/qcom/tcs.h> @@ -453,6 +454,7 @@ skip: if (!drv->tcs[ACTIVE_TCS].num_tcs) enable_tcs_irq(drv, i, false); spin_unlock(&drv->lock); + wake_up(&drv->tcs_wait); if (req) rpmh_tx_done(req, err); } @@ -571,73 +573,34 @@ static int find_free_tcs(struct tcs_group *tcs) }
/** - * tcs_write() - Store messages into a TCS right now, or return -EBUSY. + * claim_tcs_for_req() - Claim a tcs in the given tcs_group; only for active. * @drv: The controller. + * @tcs: The tcs_group used for ACTIVE_ONLY transfers. * @msg: The data to be sent. * - * Grabs a TCS for ACTIVE_ONLY transfers and writes the messages to it. + * Claims a tcs in the given tcs_group while making sure that no existing cmd + * is in flight that would conflict with the one in @msg. * - * If there are no free TCSes for ACTIVE_ONLY transfers or if a command for - * the same address is already transferring returns -EBUSY which means the - * client should retry shortly. + * Context: Must be called with the drv->lock held since that protects + * tcs_in_use. * - * Return: 0 on success, -EBUSY if client should retry, or an error. - * Client should have interrupts enabled for a bit before retrying. + * Return: The id of the claimed tcs or -EBUSY if a matching msg is in flight + * or the tcs_group is full. */ -static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg) +static int claim_tcs_for_req(struct rsc_drv *drv, struct tcs_group *tcs, + const struct tcs_request *msg) { - struct tcs_group *tcs; - int tcs_id; - unsigned long flags; int ret;
- tcs = get_tcs_for_msg(drv, msg); - if (IS_ERR(tcs)) - return PTR_ERR(tcs); - - spin_lock_irqsave(&drv->lock, flags); /* * The h/w does not like if we send a request to the same address, * when one is already in-flight or being processed. */ ret = check_for_req_inflight(drv, tcs, msg); if (ret) - goto unlock; - - ret = find_free_tcs(tcs); - if (ret < 0) - goto unlock; - tcs_id = ret; - - tcs->req[tcs_id - tcs->offset] = msg; - set_bit(tcs_id, drv->tcs_in_use); - if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS) { - /* - * Clear previously programmed WAKE commands in selected - * repurposed TCS to avoid triggering them. tcs->slots will be - * cleaned from rpmh_flush() by invoking rpmh_rsc_invalidate() - */ - write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0); - write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0); - enable_tcs_irq(drv, tcs_id, true); - } - spin_unlock_irqrestore(&drv->lock, flags); - - /* - * These two can be done after the lock is released because: - * - We marked "tcs_in_use" under lock. - * - Once "tcs_in_use" has been marked nobody else could be writing - * to these registers until the interrupt goes off. - * - The interrupt can't go off until we trigger w/ the last line - * of __tcs_set_trigger() below. - */ - __tcs_buffer_write(drv, tcs_id, 0, msg); - __tcs_set_trigger(drv, tcs_id, true); + return ret;
- return 0; -unlock: - spin_unlock_irqrestore(&drv->lock, flags); - return ret; + return find_free_tcs(tcs); }
/** @@ -664,18 +627,47 @@ unlock: */ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg) { - int ret; + struct tcs_group *tcs; + int tcs_id; + unsigned long flags;
- do { - ret = tcs_write(drv, msg); - if (ret == -EBUSY) { - pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n", - msg->cmds[0].addr); - udelay(10); - } - } while (ret == -EBUSY); + tcs = get_tcs_for_msg(drv, msg); + if (IS_ERR(tcs)) + return PTR_ERR(tcs);
- return ret; + spin_lock_irqsave(&drv->lock, flags); + + /* Wait forever for a free tcs. It better be there eventually! */ + wait_event_lock_irq(drv->tcs_wait, + (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0, + drv->lock); + + tcs->req[tcs_id - tcs->offset] = msg; + set_bit(tcs_id, drv->tcs_in_use); + if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS) { + /* + * Clear previously programmed WAKE commands in selected + * repurposed TCS to avoid triggering them. tcs->slots will be + * cleaned from rpmh_flush() by invoking rpmh_rsc_invalidate() + */ + write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0); + write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0); + enable_tcs_irq(drv, tcs_id, true); + } + spin_unlock_irqrestore(&drv->lock, flags); + + /* + * These two can be done after the lock is released because: + * - We marked "tcs_in_use" under lock. + * - Once "tcs_in_use" has been marked nobody else could be writing + * to these registers until the interrupt goes off. + * - The interrupt can't go off until we trigger w/ the last line + * of __tcs_set_trigger() below. + */ + __tcs_buffer_write(drv, tcs_id, 0, msg); + __tcs_set_trigger(drv, tcs_id, true); + + return 0; }
/** @@ -983,6 +975,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev) return ret;
spin_lock_init(&drv->lock); + init_waitqueue_head(&drv->tcs_wait); bitmap_zero(drv->tcs_in_use, MAX_TCS_NR);
irq = platform_get_irq(pdev, drv->id);
From: Grygorii Strashko grygorii.strashko@ti.com
[ Upstream commit 95e7be062aea6d2e09116cd4d28957d310c04781 ]
The AM65x SR2.0 Ringacc has errata i2023 "RINGACC, UDMA: RINGACC and UDMA Ring State Interoperability Issue after Channel Teardown" fixed. This erratum is also fixed in the J721E SoC.
Use SoC bus data for K3 SoC identification and enable the i2023 errata workaround only for AM65x SR1.0. This also makes the "ti,dma-ring-reset-quirk" DT property obsolete.
Signed-off-by: Grygorii Strashko grygorii.strashko@ti.com Signed-off-by: Santosh Shilimkar santosh.shilimkar@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soc/ti/k3-ringacc.c | 33 ++++++++++++++++++++++++++++++--- 1 file changed, 30 insertions(+), 3 deletions(-)
diff --git a/drivers/soc/ti/k3-ringacc.c b/drivers/soc/ti/k3-ringacc.c index 6dcc21dde0cb7..1147dc4c1d596 100644 --- a/drivers/soc/ti/k3-ringacc.c +++ b/drivers/soc/ti/k3-ringacc.c @@ -10,6 +10,7 @@ #include <linux/init.h> #include <linux/of.h> #include <linux/platform_device.h> +#include <linux/sys_soc.h> #include <linux/soc/ti/k3-ringacc.h> #include <linux/soc/ti/ti_sci_protocol.h> #include <linux/soc/ti/ti_sci_inta_msi.h> @@ -208,6 +209,15 @@ struct k3_ringacc { const struct k3_ringacc_ops *ops; };
+/** + * struct k3_ringacc - Rings accelerator SoC data + * + * @dma_ring_reset_quirk: DMA reset w/a enable + */ +struct k3_ringacc_soc_data { + unsigned dma_ring_reset_quirk:1; +}; + static long k3_ringacc_ring_get_fifo_pos(struct k3_ring *ring) { return K3_RINGACC_FIFO_WINDOW_SIZE_BYTES - @@ -1051,9 +1061,6 @@ static int k3_ringacc_probe_dt(struct k3_ringacc *ringacc) return ret; }
- ringacc->dma_ring_reset_quirk = - of_property_read_bool(node, "ti,dma-ring-reset-quirk"); - ringacc->tisci = ti_sci_get_by_phandle(node, "ti,sci"); if (IS_ERR(ringacc->tisci)) { ret = PTR_ERR(ringacc->tisci); @@ -1084,9 +1091,22 @@ static int k3_ringacc_probe_dt(struct k3_ringacc *ringacc) ringacc->rm_gp_range); }
+static const struct k3_ringacc_soc_data k3_ringacc_soc_data_sr1 = { + .dma_ring_reset_quirk = 1, +}; + +static const struct soc_device_attribute k3_ringacc_socinfo[] = { + { .family = "AM65X", + .revision = "SR1.0", + .data = &k3_ringacc_soc_data_sr1 + }, + {/* sentinel */} +}; + static int k3_ringacc_init(struct platform_device *pdev, struct k3_ringacc *ringacc) { + const struct soc_device_attribute *soc; void __iomem *base_fifo, *base_rt; struct device *dev = &pdev->dev; struct resource *res; @@ -1103,6 +1123,13 @@ static int k3_ringacc_init(struct platform_device *pdev, if (ret) return ret;
+ soc = soc_device_match(k3_ringacc_socinfo); + if (soc && soc->data) { + const struct k3_ringacc_soc_data *soc_data = soc->data; + + ringacc->dma_ring_reset_quirk = soc_data->dma_ring_reset_quirk; + } + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rt"); base_rt = devm_ioremap_resource(dev, res); if (IS_ERR(base_rt))
From: Grygorii Strashko grygorii.strashko@ti.com
[ Upstream commit aee123f48f387ea62002cddb46c7cb04c96628df ]
Remove "ti,dma-ring-reset-quirk" DT property as proper w/a handling is implemented now in Ringacc driver using SoC info.
Signed-off-by: Grygorii Strashko grygorii.strashko@ti.com Signed-off-by: Santosh Shilimkar santosh.shilimkar@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- Documentation/devicetree/bindings/soc/ti/k3-ringacc.yaml | 6 ------ 1 file changed, 6 deletions(-)
diff --git a/Documentation/devicetree/bindings/soc/ti/k3-ringacc.yaml b/Documentation/devicetree/bindings/soc/ti/k3-ringacc.yaml index ae33fc957141f..c3c595e235a86 100644 --- a/Documentation/devicetree/bindings/soc/ti/k3-ringacc.yaml +++ b/Documentation/devicetree/bindings/soc/ti/k3-ringacc.yaml @@ -62,11 +62,6 @@ properties: $ref: /schemas/types.yaml#/definitions/uint32 description: TI-SCI device id of the ring accelerator
- ti,dma-ring-reset-quirk: - $ref: /schemas/types.yaml#definitions/flag - description: | - enable ringacc/udma ring state interoperability issue software w/a - required: - compatible - reg @@ -94,7 +89,6 @@ examples: reg-names = "rt", "fifos", "proxy_gcfg", "proxy_target"; ti,num-rings = <818>; ti,sci-rm-range-gp-rings = <0x2>; /* GP ring range */ - ti,dma-ring-reset-quirk; ti,sci = <&dmsc>; ti,sci-dev-id = <187>; msi-parent = <&inta_main_udmass>;
From: Sudeep Holla sudeep.holla@arm.com
[ Upstream commit 5a2f0a0bdf201e2183904b6217f9c74774c961a8 ]
In preparation to enable building scmi as a single module, let us move the scmi bus {de-,}initialisation call into the driver.
The main reason for this is to keep it simple instead of maintaining it as separate modules and dealing with all possible initcall races and deferred probe handling. We can split it into separate modules if needed in the future.
Link: https://lore.kernel.org/r/20200907195046.56615-3-sudeep.holla@arm.com Tested-by: Cristian Marussi cristian.marussi@arm.com Signed-off-by: Sudeep Holla sudeep.holla@arm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/firmware/arm_scmi/bus.c | 6 ++---- drivers/firmware/arm_scmi/common.h | 3 +++ drivers/firmware/arm_scmi/driver.c | 16 +++++++++++++++- 3 files changed, 20 insertions(+), 5 deletions(-)
diff --git a/drivers/firmware/arm_scmi/bus.c b/drivers/firmware/arm_scmi/bus.c index db55c43a2cbda..1377ec76a45db 100644 --- a/drivers/firmware/arm_scmi/bus.c +++ b/drivers/firmware/arm_scmi/bus.c @@ -230,7 +230,7 @@ static void scmi_devices_unregister(void) bus_for_each_dev(&scmi_bus_type, NULL, NULL, __scmi_devices_unregister); }
-static int __init scmi_bus_init(void) +int __init scmi_bus_init(void) { int retval;
@@ -240,12 +240,10 @@ static int __init scmi_bus_init(void)
return retval; } -subsys_initcall(scmi_bus_init);
-static void __exit scmi_bus_exit(void) +void __exit scmi_bus_exit(void) { scmi_devices_unregister(); bus_unregister(&scmi_bus_type); ida_destroy(&scmi_bus_id); } -module_exit(scmi_bus_exit); diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h index 6db59a7ac8531..124080955c4a0 100644 --- a/drivers/firmware/arm_scmi/common.h +++ b/drivers/firmware/arm_scmi/common.h @@ -158,6 +158,9 @@ void scmi_setup_protocol_implemented(const struct scmi_handle *handle,
int scmi_base_protocol_init(struct scmi_handle *h);
+int __init scmi_bus_init(void); +void __exit scmi_bus_exit(void); + /* SCMI Transport */ /** * struct scmi_chan_info - Structure representing a SCMI channel information diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c index 28a3e4902ea4e..5c2f4fab40994 100644 --- a/drivers/firmware/arm_scmi/driver.c +++ b/drivers/firmware/arm_scmi/driver.c @@ -936,7 +936,21 @@ static struct platform_driver scmi_driver = { .remove = scmi_remove, };
-module_platform_driver(scmi_driver); +static int __init scmi_driver_init(void) +{ + scmi_bus_init(); + + return platform_driver_register(&scmi_driver); +} +module_init(scmi_driver_init); + +static void __exit scmi_driver_exit(void) +{ + scmi_bus_exit(); + + platform_driver_unregister(&scmi_driver); +} +module_exit(scmi_driver_exit);
MODULE_ALIAS("platform: arm-scmi"); MODULE_AUTHOR("Sudeep Holla sudeep.holla@arm.com");
From: Konrad Dybcio konradybcio@gmail.com
[ Upstream commit e884fb6cc89dce1debeae33704edd7735a3d6d9c ]
There is an issue with Kitakami eMMCs dying when a quirk isn't addressed. Until that quirk is addressed, disable the eMMC.
Signed-off-by: Konrad Dybcio konradybcio@gmail.com Link: https://lore.kernel.org/r/20200814154749.257837-1-konradybcio@gmail.com Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi b/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi index 4032b7478f044..791f254ac3f87 100644 --- a/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi +++ b/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi @@ -221,7 +221,12 @@ };
&sdhc1 { - status = "okay"; + /* There is an issue with the eMMC causing permanent + * damage to the card if a quirk isn't addressed. + * Until it's fixed, disable the MMC so as not to brick + * devices. + */ + status = "disabled";
/* Downstream pushes 2.95V to the sdhci device, * but upstream driver REALLY wants to make vmmc 1.8v
From: Xiubo Li xiubli@redhat.com
[ Upstream commit 87aac3a80af5cbad93e63250e8a1e19095ba0d30 ]
There is a race case with ceph's rbd-nbd tool. When doing a mapping, it may fail with EBUSY from ioctl(nbd, NBD_DO_IT), even though the nbd device has actually already been unmapped.
This happens when, just after the wake_up(), recv_work() is scheduled out and defers calling nbd_config_put(): although the mapping process has exited, "nbd->recv_task" is not cleared.
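The reordered tail of recv_work() then looks like this (the change itself is in the diff below; the comments here only spell out the reasoning):

  /* Drop this worker's reference on the nbd config *before* waking
   * the waiter, so that by the time the unmapping task observes
   * recv_threads hitting zero the reference is already gone and a
   * subsequent NBD_DO_IT does not see stale state and fail with EBUSY. */
  nbd_config_put(nbd);
  atomic_dec(&config->recv_threads);
  wake_up(&config->recv_wq);
  kfree(args);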
Signed-off-by: Xiubo Li xiubli@redhat.com Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/block/nbd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index edf8b632e3d27..f46e26c9d9b3c 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -801,9 +801,9 @@ static void recv_work(struct work_struct *work) if (likely(!blk_should_fake_timeout(rq->q))) blk_mq_complete_request(rq); } + nbd_config_put(nbd); atomic_dec(&config->recv_threads); wake_up(&config->recv_wq); - nbd_config_put(nbd); kfree(args); }
From: Douglas Gilbert dgilbert@interlog.com
[ Upstream commit b2a182a40278bc5849730e66bca01a762188ed86 ]
sgl_alloc_order() can fail when 'length' is large on a memory-constrained system. When order > 0 it will potentially be making several multi-page allocations, with the later ones more likely to fail than the earlier ones. So it is important that sgl_alloc_order() frees up any pages it has obtained before returning NULL. In the case when order > 0 it calls the wrong free page function and leaks. In testing, the leak was sufficient to bring down my 8 GiB laptop with OOM.
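For context, a hedged usage sketch of the alloc/free pairing involved (an illustrative caller, not taken from the patch; the fix itself only switches the internal error path to the order-aware free):

  #include <linux/scatterlist.h>

  static void example_sgl_roundtrip(unsigned long long length,
                                    unsigned int order)
  {
          unsigned int nents;
          struct scatterlist *sgl;

          sgl = sgl_alloc_order(length, order, false, GFP_KERNEL, &nents);
          if (!sgl)
                  return;

          /* ... use the nents entries ... */

          /* Free with the matching order; sgl_free() assumes order 0
           * and would leak the higher-order pages - the same mistake
           * the fix removes from sgl_alloc_order()'s error path. */
          sgl_free_order(sgl, order);
  }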
Reviewed-by: Bart Van Assche bvanassche@acm.org Signed-off-by: Douglas Gilbert dgilbert@interlog.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- lib/scatterlist.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/scatterlist.c b/lib/scatterlist.c index 5d63a8857f361..c448642e0f786 100644 --- a/lib/scatterlist.c +++ b/lib/scatterlist.c @@ -514,7 +514,7 @@ struct scatterlist *sgl_alloc_order(unsigned long long length, elem_len = min_t(u64, length, PAGE_SIZE << order); page = alloc_pages(gfp, order); if (!page) { - sgl_free(sgl); + sgl_free_order(sgl, order); return NULL; }
From: Chao Leng lengchao@huawei.com
[ Upstream commit 43efdb8e870ee0f58633fd579aa5b5185bf5d39e ]
A crash can happen when a connect is rejected. The host establishes the connection after receiving the ConnectReply, and then continues to send the fabrics Connect command. If the controller does not receive the ReadyToUse capsule, the host may receive a ConnectReject reply.
nvme_rdma_destroy_queue_ib is called after the host receives the RDMA_CM_EVENT_REJECTED event. Then, when the fabrics Connect command times out, nvme_rdma_timeout calls nvme_rdma_complete_rq to fail the request. A crash happens due to a use-after-free in nvme_rdma_complete_rq.
nvme_rdma_destroy_queue_ib is redundant when handling the RDMA_CM_EVENT_REJECTED event, as it is already called in the connection failure handler.
Signed-off-by: Chao Leng lengchao@huawei.com Reviewed-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Christoph Hellwig hch@lst.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/nvme/host/rdma.c | 1 - 1 file changed, 1 deletion(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index 9e378d0a0c01c..116902b1b2c34 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -1926,7 +1926,6 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id, complete(&queue->cm_done); return 0; case RDMA_CM_EVENT_REJECTED: - nvme_rdma_destroy_queue_ib(queue); cm_error = nvme_rdma_conn_rejected(queue, ev); break; case RDMA_CM_EVENT_ROUTE_ERROR:
From: Nick Desaulniers ndesaulniers@google.com
commit eff8728fe69880d3f7983bec3fb6cea4c306261f upstream.
Basically, consider .text.{hot|unlikely|unknown}.* part of .text, too.
When compiling with profiling information (collected via PGO instrumentations or AutoFDO sampling), Clang will separate code into .text.hot, .text.unlikely, or .text.unknown sections based on profiling information. After D79600 (clang-11), these sections will have a trailing `.` suffix, ie. .text.hot., .text.unlikely., .text.unknown..
When using -ffunction-sections together with profiling information, either explicitly (FGKASLR) or implicitly (LTO), code may be placed in sections following the convention: .text.hot.<foo>, .text.unlikely.<bar>, .text.unknown.<baz> where <foo>, <bar>, and <baz> are functions. (This produces one section per function; we generally try to merge these all back via linker script so that we don't have 50k sections).
For the above cases, we need to teach our linker scripts that such sections might exist and that we'd explicitly like them grouped together, otherwise we can wind up with code outside of the _stext/_etext boundaries that might not be mapped properly for some architectures, resulting in boot failures.
If the linker script is not told about possible input sections, then where the section is placed as output is a heuristic-laden mess that's non-portable between linkers (ie. BFD and LLD), and has resulted in many hard to debug bugs. Kees Cook is working on cleaning this up by adding the --orphan-handling=warn linker flag, already used in ARCH=powerpc, to additional architectures. In the case of linker scripts, borrowing from the Zen of Python: explicit is better than implicit.
Also, ld.bfd's internal linker script considers .text.hot AND .text.hot.* to be part of .text, as well as .text.unlikely and .text.unlikely.*. I didn't see support for .text.unknown.*, and didn't see Clang producing such code in our kernel builds, but I see code in LLVM that can produce such section names if profiling information is missing. That may point to a larger issue with generating or collecting profiles, but I would much rather be safe and explicit than have to debug yet another issue related to orphan section placement.
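As a rough illustration of where such input sections come from (assumption: exact section names vary by compiler and version, and the trailing '.' variants come from the Clang change referenced above):

  /* Built with -ffunction-sections, these typically land in
   * .text.unlikely.<function> and .text.hot.<function> respectively;
   * profile-guided builds can produce the same kinds of names for
   * un-annotated functions based on the collected counts. */
  __attribute__((cold)) void rarely_taken_error_path(void)
  {
  }

  __attribute__((hot)) void fast_path(void)
  {
  }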
Reported-by: Jian Cai jiancai@google.com Suggested-by: Fāng-ruì Sòng maskray@google.com Signed-off-by: Nick Desaulniers ndesaulniers@google.com Signed-off-by: Kees Cook keescook@chromium.org Signed-off-by: Ingo Molnar mingo@kernel.org Tested-by: Luis Lozano llozano@google.com Tested-by: Manoj Gupta manojgupta@google.com Acked-by: Kees Cook keescook@chromium.org Cc: linux-arch@vger.kernel.org Cc: stable@vger.kernel.org Link: https://sourceware.org/git/?p=binutils-gdb.git%3Ba=commitdiff%3Bh=add44f8d5c... Link: https://sourceware.org/git/?p=binutils-gdb.git%3Ba=commitdiff%3Bh=1de778ed23... Link: https://reviews.llvm.org/D79600 Link: https://bugs.chromium.org/p/chromium/issues/detail?id=1084760 Link: https://lore.kernel.org/r/20200821194310.3089815-7-keescook@chromium.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Debugged-by: Luis Lozano llozano@google.com
--- include/asm-generic/vmlinux.lds.h | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
--- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -581,7 +581,10 @@ */ #define TEXT_TEXT \ ALIGN_FUNCTION(); \ - *(.text.hot TEXT_MAIN .text.fixup .text.unlikely) \ + *(.text.hot .text.hot.*) \ + *(TEXT_MAIN .text.fixup) \ + *(.text.unlikely .text.unlikely.*) \ + *(.text.unknown .text.unknown.*) \ NOINSTR_TEXT \ *(.text..refcount) \ *(.ref.text) \
From: Huacai Chen chenhc@lemote.com
commit 1d1e5630de78f7253ac24b92cee6427c3ff04d56 upstream.
In htvec_reset() only the first group of initial interrupts is cleared. This sometimes causes spurious interrupts, so let's clear all groups.
While at it, fix the nearby comment to match the reality of what the driver does.
Fixes: 818e915fbac518e8c78e1877 ("irqchip: Add Loongson HyperTransport Vector support") Signed-off-by: Huacai Chen chenhc@lemote.com Signed-off-by: Marc Zyngier maz@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/1599819978-13999-2-git-send-email-chenhc@lemote.co... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/irqchip/irq-loongson-htvec.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/irqchip/irq-loongson-htvec.c +++ b/drivers/irqchip/irq-loongson-htvec.c @@ -151,7 +151,7 @@ static void htvec_reset(struct htvec *pr /* Clear IRQ cause registers, mask all interrupts */ for (idx = 0; idx < priv->num_parents; idx++) { writel_relaxed(0x0, priv->base + HTVEC_EN_OFF + 4 * idx); - writel_relaxed(0xFFFFFFFF, priv->base); + writel_relaxed(0xFFFFFFFF, priv->base + 4 * idx); } }
@@ -172,7 +172,7 @@ static int htvec_of_init(struct device_n goto free_priv; }
- /* Interrupt may come from any of the 4 interrupt line */ + /* Interrupt may come from any of the 8 interrupt lines */ for (i = 0; i < HTVEC_MAX_PARENT_IRQ; i++) { parent_irq[i] = irq_of_parse_and_map(node, i); if (parent_irq[i] <= 0)
From: Guoqing Jiang guoqing.jiang@cloud.ionos.com
commit cf0b9b4821a2955f8a23813ef8f422208ced9bd7 upstream.
It should check md_rdev_misc_wq instead of md_misc_wq.
Fixes: cc1ffe61c026 ("md: add new workqueue for delete rdev") Cc: stable@vger.kernel.org # v5.8+ Signed-off-by: Guoqing Jiang guoqing.jiang@cloud.ionos.com Signed-off-by: Song Liu songliubraving@fb.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/md.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -9545,7 +9545,7 @@ static int __init md_init(void) goto err_misc_wq;
md_rdev_misc_wq = alloc_workqueue("md_rdev_misc", 0, 0); - if (!md_misc_wq) + if (!md_rdev_misc_wq) goto err_rdev_misc_wq;
if ((ret = register_blkdev(MD_MAJOR, "md")) < 0)
From: Song Liu songliubraving@fb.com
commit b44c018cdf748b96b676ba09fdbc5b34fc443ada upstream.
KoWei reported crash during raid5 reshape:
[ 1032.252932] Oops: 0002 [#1] SMP PTI [...] [ 1032.252943] RIP: 0010:memcpy_erms+0x6/0x10 [...] [ 1032.252947] RSP: 0018:ffffba1ac0c03b78 EFLAGS: 00010286 [ 1032.252949] RAX: 0000784ac0000000 RBX: ffff91bec3d09740 RCX: 0000000000001000 [ 1032.252951] RDX: 0000000000001000 RSI: ffff91be6781c000 RDI: 0000784ac0000000 [ 1032.252953] RBP: ffffba1ac0c03bd8 R08: 0000000000001000 R09: ffffba1ac0c03bf8 [ 1032.252954] R10: 0000000000000000 R11: 0000000000000000 R12: ffffba1ac0c03bf8 [ 1032.252955] R13: 0000000000001000 R14: 0000000000000000 R15: 0000000000000000 [ 1032.252958] FS: 0000000000000000(0000) GS:ffff91becf500000(0000) knlGS:0000000000000000 [ 1032.252959] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 1032.252961] CR2: 0000784ac0000000 CR3: 000000031780a002 CR4: 00000000001606e0 [ 1032.252962] Call Trace: [ 1032.252969] ? async_memcpy+0x179/0x1000 [async_memcpy] [ 1032.252977] ? raid5_release_stripe+0x8e/0x110 [raid456] [ 1032.252982] handle_stripe_expansion+0x15a/0x1f0 [raid456] [ 1032.252988] handle_stripe+0x592/0x1270 [raid456] [ 1032.252993] handle_active_stripes.isra.0+0x3cb/0x5a0 [raid456] [ 1032.252999] raid5d+0x35c/0x550 [raid456] [ 1032.253002] ? schedule+0x42/0xb0 [ 1032.253006] ? schedule_timeout+0x10e/0x160 [ 1032.253011] md_thread+0x97/0x160 [ 1032.253015] ? wait_woken+0x80/0x80 [ 1032.253019] kthread+0x104/0x140 [ 1032.253022] ? md_start_sync+0x60/0x60 [ 1032.253024] ? kthread_park+0x90/0x90 [ 1032.253027] ret_from_fork+0x35/0x40
This is because cache_size_mutex was unlocked too early in resize_stripes, which races with grow_one_stripe(), so that grow_one_stripe() allocates a stripe with the wrong pool_size.
Fix this issue by unlocking cache_size_mutex after updating pool_size.
Cc: stable@vger.kernel.org # v4.4+ Reported-by: KoWei Sung winders@amazon.com Signed-off-by: Song Liu songliubraving@fb.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/raid5.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -2429,8 +2429,6 @@ static int resize_stripes(struct r5conf } else err = -ENOMEM;
- mutex_unlock(&conf->cache_size_mutex); - conf->slab_cache = sc; conf->active_name = 1-conf->active_name;
@@ -2453,6 +2451,8 @@ static int resize_stripes(struct r5conf
if (!err) conf->pool_size = newsize; + mutex_unlock(&conf->cache_size_mutex); + return err; }
From: Adrian Hunter adrian.hunter@intel.com
commit 46f4a69ec8ed6ab9f6a6172afe50df792c8bc1b6 upstream.
Some Intel BYT based host controllers support the setting of latency tolerance. Accordingly, implement the PM QoS ->set_latency_tolerance() callback. The raw register values are also exposed via debugfs.
Intel EHL controllers require this support.
Signed-off-by: Adrian Hunter adrian.hunter@intel.com Fixes: cb3a7d4a0aec4e ("mmc: sdhci-pci: Add support for Intel EHL") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200818104508.7149-1-adrian.hunter@intel.com Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mmc/host/sdhci-pci-core.c | 154 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 154 insertions(+)
--- a/drivers/mmc/host/sdhci-pci-core.c +++ b/drivers/mmc/host/sdhci-pci-core.c @@ -24,6 +24,8 @@ #include <linux/iopoll.h> #include <linux/gpio.h> #include <linux/pm_runtime.h> +#include <linux/pm_qos.h> +#include <linux/debugfs.h> #include <linux/mmc/slot-gpio.h> #include <linux/mmc/sdhci-pci-data.h> #include <linux/acpi.h> @@ -516,6 +518,8 @@ struct intel_host { bool rpm_retune_ok; u32 glk_rx_ctrl1; u32 glk_tun_val; + u32 active_ltr; + u32 idle_ltr; };
static const guid_t intel_dsm_guid = @@ -760,6 +764,108 @@ static int intel_execute_tuning(struct m return 0; }
+#define INTEL_ACTIVELTR 0x804 +#define INTEL_IDLELTR 0x808 + +#define INTEL_LTR_REQ BIT(15) +#define INTEL_LTR_SCALE_MASK GENMASK(11, 10) +#define INTEL_LTR_SCALE_1US (2 << 10) +#define INTEL_LTR_SCALE_32US (3 << 10) +#define INTEL_LTR_VALUE_MASK GENMASK(9, 0) + +static void intel_cache_ltr(struct sdhci_pci_slot *slot) +{ + struct intel_host *intel_host = sdhci_pci_priv(slot); + struct sdhci_host *host = slot->host; + + intel_host->active_ltr = readl(host->ioaddr + INTEL_ACTIVELTR); + intel_host->idle_ltr = readl(host->ioaddr + INTEL_IDLELTR); +} + +static void intel_ltr_set(struct device *dev, s32 val) +{ + struct sdhci_pci_chip *chip = dev_get_drvdata(dev); + struct sdhci_pci_slot *slot = chip->slots[0]; + struct intel_host *intel_host = sdhci_pci_priv(slot); + struct sdhci_host *host = slot->host; + u32 ltr; + + pm_runtime_get_sync(dev); + + /* + * Program latency tolerance (LTR) accordingly what has been asked + * by the PM QoS layer or disable it in case we were passed + * negative value or PM_QOS_LATENCY_ANY. + */ + ltr = readl(host->ioaddr + INTEL_ACTIVELTR); + + if (val == PM_QOS_LATENCY_ANY || val < 0) { + ltr &= ~INTEL_LTR_REQ; + } else { + ltr |= INTEL_LTR_REQ; + ltr &= ~INTEL_LTR_SCALE_MASK; + ltr &= ~INTEL_LTR_VALUE_MASK; + + if (val > INTEL_LTR_VALUE_MASK) { + val >>= 5; + if (val > INTEL_LTR_VALUE_MASK) + val = INTEL_LTR_VALUE_MASK; + ltr |= INTEL_LTR_SCALE_32US | val; + } else { + ltr |= INTEL_LTR_SCALE_1US | val; + } + } + + if (ltr == intel_host->active_ltr) + goto out; + + writel(ltr, host->ioaddr + INTEL_ACTIVELTR); + writel(ltr, host->ioaddr + INTEL_IDLELTR); + + /* Cache the values into lpss structure */ + intel_cache_ltr(slot); +out: + pm_runtime_put_autosuspend(dev); +} + +static bool intel_use_ltr(struct sdhci_pci_chip *chip) +{ + switch (chip->pdev->device) { + case PCI_DEVICE_ID_INTEL_BYT_EMMC: + case PCI_DEVICE_ID_INTEL_BYT_EMMC2: + case PCI_DEVICE_ID_INTEL_BYT_SDIO: + case PCI_DEVICE_ID_INTEL_BYT_SD: + case PCI_DEVICE_ID_INTEL_BSW_EMMC: + case PCI_DEVICE_ID_INTEL_BSW_SDIO: + case PCI_DEVICE_ID_INTEL_BSW_SD: + return false; + default: + return true; + } +} + +static void intel_ltr_expose(struct sdhci_pci_chip *chip) +{ + struct device *dev = &chip->pdev->dev; + + if (!intel_use_ltr(chip)) + return; + + dev->power.set_latency_tolerance = intel_ltr_set; + dev_pm_qos_expose_latency_tolerance(dev); +} + +static void intel_ltr_hide(struct sdhci_pci_chip *chip) +{ + struct device *dev = &chip->pdev->dev; + + if (!intel_use_ltr(chip)) + return; + + dev_pm_qos_hide_latency_tolerance(dev); + dev->power.set_latency_tolerance = NULL; +} + static void byt_probe_slot(struct sdhci_pci_slot *slot) { struct mmc_host_ops *ops = &slot->host->mmc_host_ops; @@ -774,6 +880,43 @@ static void byt_probe_slot(struct sdhci_ ops->start_signal_voltage_switch = intel_start_signal_voltage_switch;
device_property_read_u32(dev, "max-frequency", &mmc->f_max); + + if (!mmc->slotno) { + slot->chip->slots[mmc->slotno] = slot; + intel_ltr_expose(slot->chip); + } +} + +static void byt_add_debugfs(struct sdhci_pci_slot *slot) +{ + struct intel_host *intel_host = sdhci_pci_priv(slot); + struct mmc_host *mmc = slot->host->mmc; + struct dentry *dir = mmc->debugfs_root; + + if (!intel_use_ltr(slot->chip)) + return; + + debugfs_create_x32("active_ltr", 0444, dir, &intel_host->active_ltr); + debugfs_create_x32("idle_ltr", 0444, dir, &intel_host->idle_ltr); + + intel_cache_ltr(slot); +} + +static int byt_add_host(struct sdhci_pci_slot *slot) +{ + int ret = sdhci_add_host(slot->host); + + if (!ret) + byt_add_debugfs(slot); + return ret; +} + +static void byt_remove_slot(struct sdhci_pci_slot *slot, int dead) +{ + struct mmc_host *mmc = slot->host->mmc; + + if (!mmc->slotno) + intel_ltr_hide(slot->chip); }
static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot) @@ -855,6 +998,8 @@ static int glk_emmc_add_host(struct sdhc if (ret) goto cleanup;
+ byt_add_debugfs(slot); + return 0;
cleanup: @@ -1032,6 +1177,8 @@ static const struct sdhci_pci_fixes sdhc #endif .allow_runtime_pm = true, .probe_slot = byt_emmc_probe_slot, + .add_host = byt_add_host, + .remove_slot = byt_remove_slot, .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC | SDHCI_QUIRK_NO_LED, .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | @@ -1045,6 +1192,7 @@ static const struct sdhci_pci_fixes sdhc .allow_runtime_pm = true, .probe_slot = glk_emmc_probe_slot, .add_host = glk_emmc_add_host, + .remove_slot = byt_remove_slot, #ifdef CONFIG_PM_SLEEP .suspend = sdhci_cqhci_suspend, .resume = sdhci_cqhci_resume, @@ -1075,6 +1223,8 @@ static const struct sdhci_pci_fixes sdhc SDHCI_QUIRK2_PRESET_VALUE_BROKEN, .allow_runtime_pm = true, .probe_slot = ni_byt_sdio_probe_slot, + .add_host = byt_add_host, + .remove_slot = byt_remove_slot, .ops = &sdhci_intel_byt_ops, .priv_size = sizeof(struct intel_host), }; @@ -1092,6 +1242,8 @@ static const struct sdhci_pci_fixes sdhc SDHCI_QUIRK2_PRESET_VALUE_BROKEN, .allow_runtime_pm = true, .probe_slot = byt_sdio_probe_slot, + .add_host = byt_add_host, + .remove_slot = byt_remove_slot, .ops = &sdhci_intel_byt_ops, .priv_size = sizeof(struct intel_host), }; @@ -1111,6 +1263,8 @@ static const struct sdhci_pci_fixes sdhc .allow_runtime_pm = true, .own_cd_for_runtime_pm = true, .probe_slot = byt_sd_probe_slot, + .add_host = byt_add_host, + .remove_slot = byt_remove_slot, .ops = &sdhci_intel_byt_ops, .priv_size = sizeof(struct intel_host), };
From: Raul E Rangel rrangel@chromium.org
commit f23cc3ba491af77395cea3f9d51204398729f26b upstream.
This change fixes HS400 tuning for devices with invalid presets.
SDHCI presets are not currently used for eMMC HS/HS200/HS400, but are used for DDR52. The HS400 retuning sequence is:
HS400->DDR52->HS->HS200->Perform Tuning->HS->HS400
This means that when HS400 tuning happens, we transition through DDR52 for a very brief period. This causes presets to be enabled unintentionally and stay enabled when transitioning back to HS200 or HS400. Some firmware has invalid presets, so we end up with driver strengths that can cause I/O problems.
Fixes: 34597a3f60b1 ("mmc: sdhci-acpi: Add support for ACPI HID of AMD Controller with HS400") Signed-off-by: Raul E Rangel rrangel@chromium.org Acked-by: Adrian Hunter adrian.hunter@intel.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200928154718.1.Icc21d4b2f354e83e26e57e270dc952f5... Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mmc/host/sdhci-acpi.c | 37 +++++++++++++++++++++++++++++++++++++ 1 file changed, 37 insertions(+)
--- a/drivers/mmc/host/sdhci-acpi.c +++ b/drivers/mmc/host/sdhci-acpi.c @@ -662,6 +662,43 @@ static int sdhci_acpi_emmc_amd_probe_slo (host->mmc->caps & MMC_CAP_1_8V_DDR)) host->mmc->caps2 = MMC_CAP2_HS400_1_8V;
+ /* + * There are two types of presets out in the wild: + * 1) Default/broken presets. + * These presets have two sets of problems: + * a) The clock divisor for SDR12, SDR25, and SDR50 is too small. + * This results in clock frequencies that are 2x higher than + * acceptable. i.e., SDR12 = 25 MHz, SDR25 = 50 MHz, SDR50 = + * 100 MHz.x + * b) The HS200 and HS400 driver strengths don't match. + * By default, the SDR104 preset register has a driver strength of + * A, but the (internal) HS400 preset register has a driver + * strength of B. As part of initializing HS400, HS200 tuning + * needs to be performed. Having different driver strengths + * between tuning and operation is wrong. It results in different + * rise/fall times that lead to incorrect sampling. + * 2) Firmware with properly initialized presets. + * These presets have proper clock divisors. i.e., SDR12 => 12MHz, + * SDR25 => 25 MHz, SDR50 => 50 MHz. Additionally the HS200 and + * HS400 preset driver strengths match. + * + * Enabling presets for HS400 doesn't work for the following reasons: + * 1) sdhci_set_ios has a hard coded list of timings that are used + * to determine if presets should be enabled. + * 2) sdhci_get_preset_value is using a non-standard register to + * read out HS400 presets. The AMD controller doesn't support this + * non-standard register. In fact, it doesn't expose the HS400 + * preset register anywhere in the SDHCI memory map. This results + * in reading a garbage value and using the wrong presets. + * + * Since HS400 and HS200 presets must be identical, we could + * instead use the the SDR104 preset register. + * + * If the above issues are resolved we could remove this quirk for + * firmware that that has valid presets (i.e., SDR12 <= 12 MHz). + */ + host->quirks2 |= SDHCI_QUIRK2_PRESET_VALUE_BROKEN; + host->mmc_host_ops.select_drive_strength = amd_select_drive_strength; host->mmc_host_ops.set_ios = amd_set_ios; host->mmc_host_ops.execute_tuning = amd_sdhci_execute_tuning;
From: Bharata B Rao bharata@linux.ibm.com
commit d1b2cf6cb84a9bd0de6f151512648dd1af82f80f upstream.
Object cgroup charging is done for all the objects during allocation, but during freeing, uncharging ends up happening for only one object in the case of bulk allocation/freeing.
Fix this by having a separate call to uncharge all the objects from kmem_cache_free_bulk() and by modifying memcg_slab_free_hook() to take care of bulk uncharging.
Fixes: 964d4bd370d5 ("mm: memcg/slab: save obj_cgroup for non-root slab objects" Signed-off-by: Bharata B Rao bharata@linux.ibm.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Acked-by: Roman Gushchin guro@fb.com Cc: Christoph Lameter cl@linux.com Cc: David Rientjes rientjes@google.com Cc: Joonsoo Kim iamjoonsoo.kim@lge.com Cc: Vlastimil Babka vbabka@suse.cz Cc: Shakeel Butt shakeelb@google.com Cc: Johannes Weiner hannes@cmpxchg.org Cc: Michal Hocko mhocko@kernel.org Cc: Tejun Heo tj@kernel.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20201009060423.390479-1-bharata@linux.ibm.com Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- mm/slab.c | 2 +- mm/slab.h | 50 +++++++++++++++++++++++++++++++------------------- mm/slub.c | 3 ++- 3 files changed, 34 insertions(+), 21 deletions(-)
--- a/mm/slab.c +++ b/mm/slab.c @@ -3440,7 +3440,7 @@ void ___cache_free(struct kmem_cache *ca memset(objp, 0, cachep->object_size); kmemleak_free_recursive(objp, cachep->flags); objp = cache_free_debugcheck(cachep, objp, caller); - memcg_slab_free_hook(cachep, virt_to_head_page(objp), objp); + memcg_slab_free_hook(cachep, &objp, 1);
/* * Skip calling cache_free_alien() when the platform is not numa. --- a/mm/slab.h +++ b/mm/slab.h @@ -345,30 +345,42 @@ static inline void memcg_slab_post_alloc obj_cgroup_put(objcg); }
-static inline void memcg_slab_free_hook(struct kmem_cache *s, struct page *page, - void *p) +static inline void memcg_slab_free_hook(struct kmem_cache *s_orig, + void **p, int objects) { + struct kmem_cache *s; struct obj_cgroup *objcg; + struct page *page; unsigned int off; + int i;
if (!memcg_kmem_enabled()) return;
- if (!page_has_obj_cgroups(page)) - return; - - off = obj_to_index(s, page, p); - objcg = page_obj_cgroups(page)[off]; - page_obj_cgroups(page)[off] = NULL; - - if (!objcg) - return; - - obj_cgroup_uncharge(objcg, obj_full_size(s)); - mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s), - -obj_full_size(s)); - - obj_cgroup_put(objcg); + for (i = 0; i < objects; i++) { + if (unlikely(!p[i])) + continue; + + page = virt_to_head_page(p[i]); + if (!page_has_obj_cgroups(page)) + continue; + + if (!s_orig) + s = page->slab_cache; + else + s = s_orig; + + off = obj_to_index(s, page, p[i]); + objcg = page_obj_cgroups(page)[off]; + if (!objcg) + continue; + + page_obj_cgroups(page)[off] = NULL; + obj_cgroup_uncharge(objcg, obj_full_size(s)); + mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s), + -obj_full_size(s)); + obj_cgroup_put(objcg); + } }
#else /* CONFIG_MEMCG_KMEM */ @@ -406,8 +418,8 @@ static inline void memcg_slab_post_alloc { }
-static inline void memcg_slab_free_hook(struct kmem_cache *s, struct page *page, - void *p) +static inline void memcg_slab_free_hook(struct kmem_cache *s, + void **p, int objects) { } #endif /* CONFIG_MEMCG_KMEM */ --- a/mm/slub.c +++ b/mm/slub.c @@ -3091,7 +3091,7 @@ static __always_inline void do_slab_free struct kmem_cache_cpu *c; unsigned long tid;
- memcg_slab_free_hook(s, page, head); + memcg_slab_free_hook(s, &head, 1); redo: /* * Determine the currently cpus per cpu slab. @@ -3253,6 +3253,7 @@ void kmem_cache_free_bulk(struct kmem_ca if (WARN_ON(!size)) return;
+ memcg_slab_free_hook(s, p, size); do { struct detached_freelist df;
From: Jann Horn jannh@google.com
commit dfe719fef03d752f1682fa8aeddf30ba501c8555 upstream.
Currently, init_listener() tries to prevent adding a filter with SECCOMP_FILTER_FLAG_NEW_LISTENER if one of the existing filters already has a listener. However, this check happens without holding any lock that would prevent another thread from concurrently installing a new filter (potentially with a listener) on top of the ones we already have.
Theoretically, this is also a data race: The plain load from current->seccomp.filter can race with concurrent writes to the same location.
Fix it by moving the check into the region that holds the siglock to guard against concurrent TSYNC.
(The "Fixes" tag points to the commit that introduced the theoretical data race; concurrent installation of another filter with TSYNC only became possible later, in commit 51891498f2da ("seccomp: allow TSYNC and USER_NOTIF together").)
Fixes: 6a21cc50f0c7 ("seccomp: add a return code to trap to userspace") Reviewed-by: Tycho Andersen tycho@tycho.pizza Signed-off-by: Jann Horn jannh@google.com Signed-off-by: Kees Cook keescook@chromium.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20201005014401.490175-1-jannh@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/seccomp.c | 38 +++++++++++++++++++++++++++++++------- 1 file changed, 31 insertions(+), 7 deletions(-)
--- a/kernel/seccomp.c +++ b/kernel/seccomp.c @@ -1472,13 +1472,7 @@ static const struct file_operations secc
static struct file *init_listener(struct seccomp_filter *filter) { - struct file *ret = ERR_PTR(-EBUSY); - struct seccomp_filter *cur; - - for (cur = current->seccomp.filter; cur; cur = cur->prev) { - if (cur->notif) - goto out; - } + struct file *ret;
ret = ERR_PTR(-ENOMEM); filter->notif = kzalloc(sizeof(*(filter->notif)), GFP_KERNEL); @@ -1504,6 +1498,31 @@ out: return ret; }
+/* + * Does @new_child have a listener while an ancestor also has a listener? + * If so, we'll want to reject this filter. + * This only has to be tested for the current process, even in the TSYNC case, + * because TSYNC installs @child with the same parent on all threads. + * Note that @new_child is not hooked up to its parent at this point yet, so + * we use current->seccomp.filter. + */ +static bool has_duplicate_listener(struct seccomp_filter *new_child) +{ + struct seccomp_filter *cur; + + /* must be protected against concurrent TSYNC */ + lockdep_assert_held(¤t->sighand->siglock); + + if (!new_child->notif) + return false; + for (cur = current->seccomp.filter; cur; cur = cur->prev) { + if (cur->notif) + return true; + } + + return false; +} + /** * seccomp_set_mode_filter: internal function for setting seccomp filter * @flags: flags to change filter behavior @@ -1575,6 +1594,11 @@ static long seccomp_set_mode_filter(unsi if (!seccomp_may_assign_mode(seccomp_mode)) goto out;
+ if (has_duplicate_listener(prepared)) { + ret = -EBUSY; + goto out; + } + ret = seccomp_attach_filter(flags, prepared); if (ret) goto out;
From: Andy Lutomirski luto@kernel.org
commit 1b9abd1755ad947d7c9913e92e7837b533124c90 upstream.
This tests commit:
8ab49526b53d ("x86/fsgsbase/64: Fix NULL deref in 86_fsgsbase_read_task")
Unpatched kernels will OOPS.
Signed-off-by: Andy Lutomirski luto@kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/c618ae86d1f757e01b1a8e79869f553cb88acf9a.159846115... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- tools/testing/selftests/x86/fsgsbase.c | 65 +++++++++++++++++++++++++++++++++ 1 file changed, 65 insertions(+)
--- a/tools/testing/selftests/x86/fsgsbase.c +++ b/tools/testing/selftests/x86/fsgsbase.c @@ -443,6 +443,68 @@ static void test_unexpected_base(void)
#define USER_REGS_OFFSET(r) offsetof(struct user_regs_struct, r)
+static void test_ptrace_write_gs_read_base(void) +{ + int status; + pid_t child = fork(); + + if (child < 0) + err(1, "fork"); + + if (child == 0) { + printf("[RUN]\tPTRACE_POKE GS, read GSBASE back\n"); + + printf("[RUN]\tARCH_SET_GS to 1\n"); + if (syscall(SYS_arch_prctl, ARCH_SET_GS, 1) != 0) + err(1, "ARCH_SET_GS"); + + if (ptrace(PTRACE_TRACEME, 0, NULL, NULL) != 0) + err(1, "PTRACE_TRACEME"); + + raise(SIGTRAP); + _exit(0); + } + + wait(&status); + + if (WSTOPSIG(status) == SIGTRAP) { + unsigned long base; + unsigned long gs_offset = USER_REGS_OFFSET(gs); + unsigned long base_offset = USER_REGS_OFFSET(gs_base); + + /* Read the initial base. It should be 1. */ + base = ptrace(PTRACE_PEEKUSER, child, base_offset, NULL); + if (base == 1) { + printf("[OK]\tGSBASE started at 1\n"); + } else { + nerrs++; + printf("[FAIL]\tGSBASE started at 0x%lx\n", base); + } + + printf("[RUN]\tSet GS = 0x7, read GSBASE\n"); + + /* Poke an LDT selector into GS. */ + if (ptrace(PTRACE_POKEUSER, child, gs_offset, 0x7) != 0) + err(1, "PTRACE_POKEUSER"); + + /* And read the base. */ + base = ptrace(PTRACE_PEEKUSER, child, base_offset, NULL); + + if (base == 0 || base == 1) { + printf("[OK]\tGSBASE reads as 0x%lx with invalid GS\n", base); + } else { + nerrs++; + printf("[FAIL]\tGSBASE=0x%lx (should be 0 or 1)\n", base); + } + } + + ptrace(PTRACE_CONT, child, NULL, NULL); + + wait(&status); + if (!WIFEXITED(status)) + printf("[WARN]\tChild didn't exit cleanly.\n"); +} + static void test_ptrace_write_gsbase(void) { int status; @@ -529,6 +591,9 @@ int main() shared_scratch = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_SHARED, -1, 0);
+ /* Do these tests before we have an LDT. */ + test_ptrace_write_gs_read_base(); + /* Probe FSGSBASE */ sethandler(SIGILL, sigill, 0); if (sigsetjmp(jmpbuf, 1) == 0) {
From: Kan Liang kan.liang@linux.intel.com
commit 010cb00265f150bf82b23c02ad1fb87ce5c781e1 upstream.
An error occurs when sampling the non-PEBS INST_RETIRED.PREC_DIST (0x01c0) event.
perf record -e cpu/event=0xc0,umask=0x01/ -- sleep 1 Error: The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (cpu/event=0xc0,umask=0x01/). /bin/dmesg | grep -i perf may provide additional information.
The idxmsk64 of the event is set to 0, so the event can never be successfully scheduled.
The event should be limited to fixed counter 0.
Fixes: 6017608936c1 ("perf/x86/intel: Add Icelake support") Reported-by: Yi, Ammy ammy.yi@intel.com Signed-off-by: Kan Liang kan.liang@linux.intel.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20200928134726.13090-1-kan.liang@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/events/intel/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -243,7 +243,7 @@ static struct extra_reg intel_skl_extra_
static struct event_constraint intel_icl_event_constraints[] = { FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */ - INTEL_UEVENT_CONSTRAINT(0x1c0, 0), /* INST_RETIRED.PREC_DIST */ + FIXED_EVENT_CONSTRAINT(0x01c0, 0), /* INST_RETIRED.PREC_DIST */ FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */ FIXED_EVENT_CONSTRAINT(0x0300, 2), /* CPU_CLK_UNHALTED.REF */ FIXED_EVENT_CONSTRAINT(0x0400, 3), /* SLOTS */
From: Kim Phillips kim.phillips@amd.com
commit 26e52558ead4b39c0e0fe7bf08f82f5a9777a412 upstream.
Commit 5738891229a2 ("perf/x86/amd: Add support for Large Increment per Cycle Events") mistakenly zeroes the upper 16 bits of the count in set_period(). That's fine for counting with perf stat, but not for sampling with perf record when only Large Increment events are being sampled. To enable sampling, we sign extend the upper 16 bits of the merged counter pair as described in the Family 17h PPRs:
"Software wanting to preload a value to a merged counter pair writes the high-order 16-bit value to the low-order 16 bits of the odd counter and then writes the low-order 48-bit value to the even counter. Reading the even counter of the merged counter pair returns the full 64-bit value."
Fixes: 5738891229a2 ("perf/x86/amd: Add support for Large Increment per Cycle Events") Signed-off-by: Kim Phillips kim.phillips@amd.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Cc: stable@vger.kernel.org Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537 Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/events/core.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -1286,11 +1286,11 @@ int x86_perf_event_set_period(struct per wrmsrl(hwc->event_base, (u64)(-left) & x86_pmu.cntval_mask);
/* - * Clear the Merge event counter's upper 16 bits since + * Sign extend the Merge event counter's upper 16 bits since * we currently declare a 48-bit counter width */ if (is_counter_pair(hwc)) - wrmsrl(x86_pmu_event_addr(idx + 1), 0); + wrmsrl(x86_pmu_event_addr(idx + 1), 0xffff);
/* * Due to erratum on certan cpu we need
From: Kim Phillips kim.phillips@amd.com
commit c8fe99d0701fec9fb849ec880a86bc5592530496 upstream.
Commit 2f217d58a8a0 ("perf/x86/amd/uncore: Set the thread mask for F17h L3 PMCs") inadvertently changed the uncore driver's behaviour wrt perf tool invocations with or without a CPU list, specified with -C / --cpu=.
Change the behaviour of the driver to assume the former all-cpu (-a) case, which is the more commonly desired default. This fixes '-a -A' invocations without explicit cpu lists (-C) so that L3 events are no longer counted only on behalf of the first thread of the first core in the L3 domain.
BEFORE:
Activity performed by the first thread of the last core (CPU#43) in CPU#40's L3 domain is not reported by CPU#40:
sudo perf stat -a -A -e l3_request_g1.caching_l3_cache_accesses taskset -c 43 perf bench mem memcpy -s 32mb -l 100 -f default ... CPU36 21,835 l3_request_g1.caching_l3_cache_accesses CPU40 87,066 l3_request_g1.caching_l3_cache_accesses CPU44 17,360 l3_request_g1.caching_l3_cache_accesses ...
AFTER:
The L3 domain activity is now reported by CPU#40:
sudo perf stat -a -A -e l3_request_g1.caching_l3_cache_accesses taskset -c 43 perf bench mem memcpy -s 32mb -l 100 -f default ... CPU36 354,891 l3_request_g1.caching_l3_cache_accesses CPU40 1,780,870 l3_request_g1.caching_l3_cache_accesses CPU44 315,062 l3_request_g1.caching_l3_cache_accesses ...
Fixes: 2f217d58a8a0 ("perf/x86/amd/uncore: Set the thread mask for F17h L3 PMCs") Signed-off-by: Kim Phillips kim.phillips@amd.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20200908214740.18097-2-kim.phillips@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/events/amd/uncore.c | 28 ++++++++-------------------- 1 file changed, 8 insertions(+), 20 deletions(-)
--- a/arch/x86/events/amd/uncore.c +++ b/arch/x86/events/amd/uncore.c @@ -181,28 +181,16 @@ static void amd_uncore_del(struct perf_e }
/* - * Convert logical CPU number to L3 PMC Config ThreadMask format + * Return a full thread and slice mask until per-CPU is + * properly supported. */ -static u64 l3_thread_slice_mask(int cpu) +static u64 l3_thread_slice_mask(void) { - u64 thread_mask, core = topology_core_id(cpu); - unsigned int shift, thread = 0; + if (boot_cpu_data.x86 <= 0x18) + return AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK;
- if (topology_smt_supported() && !topology_is_primary_thread(cpu)) - thread = 1; - - if (boot_cpu_data.x86 <= 0x18) { - shift = AMD64_L3_THREAD_SHIFT + 2 * (core % 4) + thread; - thread_mask = BIT_ULL(shift); - - return AMD64_L3_SLICE_MASK | thread_mask; - } - - core = (core << AMD64_L3_COREID_SHIFT) & AMD64_L3_COREID_MASK; - shift = AMD64_L3_THREAD_SHIFT + thread; - thread_mask = BIT_ULL(shift); - - return AMD64_L3_EN_ALL_SLICES | core | thread_mask; + return AMD64_L3_EN_ALL_SLICES | AMD64_L3_EN_ALL_CORES | + AMD64_L3_F19H_THREAD_MASK; }
static int amd_uncore_event_init(struct perf_event *event) @@ -232,7 +220,7 @@ static int amd_uncore_event_init(struct * For other events, the two fields do not affect the count. */ if (l3_mask && is_llc_event(event)) - hwc->config |= l3_thread_slice_mask(event->cpu); + hwc->config |= l3_thread_slice_mask();
uncore = event_to_amd_uncore(event); if (!uncore)
From: Kim Phillips kim.phillips@amd.com
commit 680d69635005ba0e58fe3f4c52fc162b8fc743b0 upstream.
get_ibs_op_count() adds the hardware's current count (IbsOpCurCnt) bits to its count regardless of the hardware's valid status.
According to the PPR for AMD Family 17h Model 31h B0 55803 Rev 0.54, if the counter rolls over, valid status is set, and the lower 7 bits of IbsOpCurCnt are randomized by hardware.
Don't include those bits in the driver's event count.
Fixes: 8b1e13638d46 ("perf/x86-ibs: Fix usage of IBS op current count") Signed-off-by: Kim Phillips kim.phillips@amd.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Cc: stable@vger.kernel.org Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537 Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/events/amd/ibs.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-)
--- a/arch/x86/events/amd/ibs.c +++ b/arch/x86/events/amd/ibs.c @@ -335,11 +335,15 @@ static u64 get_ibs_op_count(u64 config) { u64 count = 0;
+ /* + * If the internal 27-bit counter rolled over, the count is MaxCnt + * and the lower 7 bits of CurCnt are randomized. + * Otherwise CurCnt has the full 27-bit current counter value. + */ if (config & IBS_OP_VAL) - count += (config & IBS_OP_MAX_CNT) << 4; /* cnt rolled over */ - - if (ibs_caps & IBS_CAPS_RDWROPCNT) - count += (config & IBS_OP_CUR_CNT) >> 32; + count = (config & IBS_OP_MAX_CNT) << 4; + else if (ibs_caps & IBS_CAPS_RDWROPCNT) + count = (config & IBS_OP_CUR_CNT) >> 32;
return count; }
From: Kim Phillips kim.phillips@amd.com
commit 36e1be8ada994d509538b3b1d0af8b63c351e729 upstream.
Neither IbsBrTarget nor OPDATA4 are populated in IBS Fetch mode. Don't accumulate them into raw sample user data in that case.
Also, in Fetch mode, add saving the IBS Fetch Control Extended MSR.
Technically, there is an ABI change here with respect to the IBS raw sample data format, but I don't see any perf driver version information being included in perf.data file headers. Existing users can, however, detect whether the size of the sample record has been reduced by 8 bytes to determine whether the IBS driver has this fix.
Fixes: 904cb3677f3a ("perf/x86/amd/ibs: Update IBS MSRs and feature definitions") Reported-by: Stephane Eranian stephane.eranian@google.com Signed-off-by: Kim Phillips kim.phillips@amd.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20200908214740.18097-6-kim.phillips@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/events/amd/ibs.c | 26 ++++++++++++++++---------- arch/x86/include/asm/msr-index.h | 1 + 2 files changed, 17 insertions(+), 10 deletions(-)
--- a/arch/x86/events/amd/ibs.c +++ b/arch/x86/events/amd/ibs.c @@ -636,18 +636,24 @@ fail: perf_ibs->offset_max, offset + 1); } while (offset < offset_max); + /* + * Read IbsBrTarget, IbsOpData4, and IbsExtdCtl separately + * depending on their availability. + * Can't add to offset_max as they are staggered + */ if (event->attr.sample_type & PERF_SAMPLE_RAW) { - /* - * Read IbsBrTarget and IbsOpData4 separately - * depending on their availability. - * Can't add to offset_max as they are staggered - */ - if (ibs_caps & IBS_CAPS_BRNTRGT) { - rdmsrl(MSR_AMD64_IBSBRTARGET, *buf++); - size++; + if (perf_ibs == &perf_ibs_op) { + if (ibs_caps & IBS_CAPS_BRNTRGT) { + rdmsrl(MSR_AMD64_IBSBRTARGET, *buf++); + size++; + } + if (ibs_caps & IBS_CAPS_OPDATA4) { + rdmsrl(MSR_AMD64_IBSOPDATA4, *buf++); + size++; + } } - if (ibs_caps & IBS_CAPS_OPDATA4) { - rdmsrl(MSR_AMD64_IBSOPDATA4, *buf++); + if (perf_ibs == &perf_ibs_fetch && (ibs_caps & IBS_CAPS_FETCHCTLEXTD)) { + rdmsrl(MSR_AMD64_ICIBSEXTDCTL, *buf++); size++; } } --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -464,6 +464,7 @@ #define MSR_AMD64_IBSOP_REG_MASK ((1UL<<MSR_AMD64_IBSOP_REG_COUNT)-1) #define MSR_AMD64_IBSCTL 0xc001103a #define MSR_AMD64_IBSBRTARGET 0xc001103b +#define MSR_AMD64_ICIBSEXTDCTL 0xc001103c #define MSR_AMD64_IBSOPDATA4 0xc001103d #define MSR_AMD64_IBS_REG_COUNT_MAX 8 /* includes MSR_AMD64_IBSBRTARGET */ #define MSR_AMD64_SEV 0xc0010131
From: Chuanhong Guo gch981213@gmail.com
commit 4cafaddedb5fbef9531202ee547784409fd0de33 upstream.
The CLK_TO_US macro is used to calculate the potential transfer time for various timeout handling. However, it overflows on transfers bigger than 512 bytes because it first computes (len * 8 * 1000000). This controller typically operates at 45MHz. This patch does two things: 1. calculate clock / 1000000 first, and 2. add a 4M transfer size cap so that the final timeout calculation in DMA reading doesn't overflow.
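A standalone sketch of the arithmetic, assuming 32-bit intermediate math and the 45MHz clock mentioned above (the exact types used in the driver may differ):

/* Illustrative only; demonstrates why the evaluation order matters. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t spi_freq = 45000000;   /* 45 MHz */
        uint32_t clkcnt = 1024 * 8;     /* clock cycles for a 1 KiB transfer */

        /* Old order: clkcnt * 1000000 wraps a 32-bit value before the division. */
        uint32_t old_us = clkcnt * 1000000u / spi_freq;

        /* New order: divide the frequency down to MHz first, then round up. */
        uint32_t new_us = (clkcnt + spi_freq / 1000000u - 1) / (spi_freq / 1000000u);

        printf("old=%u us (wrapped), new=%u us\n", old_us, new_us);
        return 0;
}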
Fixes: 881d1ee9fe81f ("spi: add support for mediatek spi-nor controller") Cc: stable@vger.kernel.org Signed-off-by: Chuanhong Guo gch981213@gmail.com Link: https://lore.kernel.org/r/20200922114905.2942859-1-gch981213@gmail.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/spi/spi-mtk-nor.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/spi/spi-mtk-nor.c +++ b/drivers/spi/spi-mtk-nor.c @@ -89,7 +89,7 @@ // Buffered page program can do one 128-byte transfer #define MTK_NOR_PP_SIZE 128
-#define CLK_TO_US(sp, clkcnt) ((clkcnt) * 1000000 / sp->spi_freq) +#define CLK_TO_US(sp, clkcnt) DIV_ROUND_UP(clkcnt, sp->spi_freq / 1000000)
struct mtk_nor { struct spi_controller *ctlr; @@ -177,6 +177,10 @@ static int mtk_nor_adjust_op_size(struct if ((op->addr.nbytes == 3) || (op->addr.nbytes == 4)) { if ((op->data.dir == SPI_MEM_DATA_IN) && mtk_nor_match_read(op)) { + // limit size to prevent timeout calculation overflow + if (op->data.nbytes > 0x400000) + op->data.nbytes = 0x400000; + if ((op->addr.val & MTK_NOR_DMA_ALIGN_MASK) || (op->data.nbytes < MTK_NOR_DMA_ALIGN)) op->data.nbytes = 1;
From: Krzysztof Kozlowski krzk@kernel.org
commit 687a2e76186dcfa42f22c14b655c3fb159839e79 upstream.
If dma_request_chan() for TX channel fails with EPROBE_DEFER, the RX channel would not be released and on next re-probe it would be requested second time.
Fixes: 386119bc7be9 ("spi: sprd: spi: sprd: Add DMA mode support") Cc: stable@vger.kernel.org Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Acked-by: Chunyan Zhang zhang.lyra@gmail.com Link: https://lore.kernel.org/r/20200901152713.18629-1-krzk@kernel.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/spi/spi-sprd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/spi/spi-sprd.c +++ b/drivers/spi/spi-sprd.c @@ -563,11 +563,11 @@ static int sprd_spi_dma_request(struct s
ss->dma.dma_chan[SPRD_SPI_TX] = dma_request_chan(ss->dev, "tx_chn"); if (IS_ERR_OR_NULL(ss->dma.dma_chan[SPRD_SPI_TX])) { + dma_release_channel(ss->dma.dma_chan[SPRD_SPI_RX]); if (PTR_ERR(ss->dma.dma_chan[SPRD_SPI_TX]) == -EPROBE_DEFER) return PTR_ERR(ss->dma.dma_chan[SPRD_SPI_TX]);
dev_err(ss->dev, "request TX DMA channel failed!\n"); - dma_release_channel(ss->dma.dma_chan[SPRD_SPI_RX]); return PTR_ERR(ss->dma.dma_chan[SPRD_SPI_TX]); }
From: Krzysztof Kozlowski krzk@kernel.org
commit 6aaad58c872db062f7ea2761421ca748bd0931cc upstream.
The driver uses the atomic version of gpiod_set_value() without any real reason. It is called in a workqueue under a mutex, so it is allowed to sleep there. Changing it to the "can_sleep" flavor allows the driver to be used with all GPIO chips.
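As a brief illustrative sketch (names not taken from the driver): in process context such as a workqueue item, the "can_sleep" accessor is the one that works with every GPIO controller, including expanders behind sleeping buses:

/* Illustrative only; name and signature are hypothetical. */
static void example_set_vbus(struct gpio_desc *vbus_gpiod, bool enable)
{
        /*
         * Process context (workqueue, under a mutex): the "cansleep"
         * accessor may sleep, so it also works with GPIO controllers
         * sitting behind I2C or SPI.
         */
        gpiod_set_value_cansleep(vbus_gpiod, enable ? 1 : 0);
}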
Fixes: 4ed754de2d66 ("extcon: Add support for ptn5150 extcon driver") Cc: stable@vger.kernel.org Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Reviewed-by: Vijai Kumar K vijaikumar.kanagarajan@gmail.com Signed-off-by: Chanwoo Choi cw00.choi@samsung.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/extcon/extcon-ptn5150.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/drivers/extcon/extcon-ptn5150.c +++ b/drivers/extcon/extcon-ptn5150.c @@ -127,7 +127,7 @@ static void ptn5150_irq_work(struct work case PTN5150_DFP_ATTACHED: extcon_set_state_sync(info->edev, EXTCON_USB_HOST, false); - gpiod_set_value(info->vbus_gpiod, 0); + gpiod_set_value_cansleep(info->vbus_gpiod, 0); extcon_set_state_sync(info->edev, EXTCON_USB, true); break; @@ -138,9 +138,9 @@ static void ptn5150_irq_work(struct work PTN5150_REG_CC_VBUS_DETECTION_MASK) >> PTN5150_REG_CC_VBUS_DETECTION_SHIFT); if (vbus) - gpiod_set_value(info->vbus_gpiod, 0); + gpiod_set_value_cansleep(info->vbus_gpiod, 0); else - gpiod_set_value(info->vbus_gpiod, 1); + gpiod_set_value_cansleep(info->vbus_gpiod, 1);
extcon_set_state_sync(info->edev, EXTCON_USB_HOST, true); @@ -156,7 +156,7 @@ static void ptn5150_irq_work(struct work EXTCON_USB_HOST, false); extcon_set_state_sync(info->edev, EXTCON_USB, false); - gpiod_set_value(info->vbus_gpiod, 0); + gpiod_set_value_cansleep(info->vbus_gpiod, 0); } }
From: Marek Behún marek.behun@nic.cz
commit ff5c89d44453e7ad99502b04bf798a3fc32c758b upstream.
These two drivers do not provide a remove method and use devres for allocation of other resources, yet they use led_classdev_register instead of the devres variant, devm_led_classdev_register.
Fix this.
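A minimal sketch of the devres pattern being switched to; the names are illustrative and not taken from these drivers. With the devm_ variant the class device is unregistered automatically on driver detach, so no remove() method is needed:

/* Illustrative probe; names are hypothetical. */
static int example_led_probe(struct platform_device *pdev)
{
        struct led_classdev *cdev;

        cdev = devm_kzalloc(&pdev->dev, sizeof(*cdev), GFP_KERNEL);
        if (!cdev)
                return -ENOMEM;

        cdev->name = "example:green:status";
        /* brightness_set / blink_set callbacks filled in as the hardware requires */

        return devm_led_classdev_register(&pdev->dev, cdev);
}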
Signed-off-by: Marek Behún marek.behun@nic.cz Cc: Álvaro Fernández Rojas noltari@gmail.com Cc: Kevin Cernekee cernekee@gmail.com Cc: Jaedon Shin jaedon.shin@gmail.com Signed-off-by: Pavel Machek pavel@ucw.cz Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/leds/leds-bcm6328.c | 2 +- drivers/leds/leds-bcm6358.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/leds/leds-bcm6328.c +++ b/drivers/leds/leds-bcm6328.c @@ -383,7 +383,7 @@ static int bcm6328_led(struct device *de led->cdev.brightness_set = bcm6328_led_set; led->cdev.blink_set = bcm6328_blink_set;
- rc = led_classdev_register(dev, &led->cdev); + rc = devm_led_classdev_register(dev, &led->cdev); if (rc < 0) return rc;
--- a/drivers/leds/leds-bcm6358.c +++ b/drivers/leds/leds-bcm6358.c @@ -137,7 +137,7 @@ static int bcm6358_led(struct device *de
led->cdev.brightness_set = bcm6358_led_set;
- rc = led_classdev_register(dev, &led->cdev); + rc = devm_led_classdev_register(dev, &led->cdev); if (rc < 0) return rc;
From: Steve Foreman foremans@google.com
commit 2b52278150c49559a472f2d6dd66f6f3b405378e upstream.
The max34* family have the IOUT_OC_WARN_LIMIT and IOUT_OC_CRIT_LIMIT registers swapped.
Cc: stable@vger.kernel.org Signed-off-by: Steve Foreman foremans@google.com [groeck: Updated subject, use C comment style, tab after defines] [groeck: Added missing break; statements (by alexandru.ardelean@analog.com)] Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/hwmon/pmbus/max34440.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+)
--- a/drivers/hwmon/pmbus/max34440.c +++ b/drivers/hwmon/pmbus/max34440.c @@ -31,6 +31,13 @@ enum chips { max34440, max34441, max3444 #define MAX34440_STATUS_OT_FAULT BIT(5) #define MAX34440_STATUS_OT_WARN BIT(6)
+/* + * The whole max344* family have IOUT_OC_WARN_LIMIT and IOUT_OC_FAULT_LIMIT + * swapped from the standard pmbus spec addresses. + */ +#define MAX34440_IOUT_OC_WARN_LIMIT 0x46 +#define MAX34440_IOUT_OC_FAULT_LIMIT 0x4A + #define MAX34451_MFR_CHANNEL_CONFIG 0xe4 #define MAX34451_MFR_CHANNEL_CONFIG_SEL_MASK 0x3f
@@ -49,6 +56,14 @@ static int max34440_read_word_data(struc const struct max34440_data *data = to_max34440_data(info);
switch (reg) { + case PMBUS_IOUT_OC_FAULT_LIMIT: + ret = pmbus_read_word_data(client, page, phase, + MAX34440_IOUT_OC_FAULT_LIMIT); + break; + case PMBUS_IOUT_OC_WARN_LIMIT: + ret = pmbus_read_word_data(client, page, phase, + MAX34440_IOUT_OC_WARN_LIMIT); + break; case PMBUS_VIRT_READ_VOUT_MIN: ret = pmbus_read_word_data(client, page, phase, MAX34440_MFR_VOUT_MIN); @@ -115,6 +130,14 @@ static int max34440_write_word_data(stru int ret;
switch (reg) { + case PMBUS_IOUT_OC_FAULT_LIMIT: + ret = pmbus_write_word_data(client, page, MAX34440_IOUT_OC_FAULT_LIMIT, + word); + break; + case PMBUS_IOUT_OC_WARN_LIMIT: + ret = pmbus_write_word_data(client, page, MAX34440_IOUT_OC_WARN_LIMIT, + word); + break; case PMBUS_VIRT_RESET_POUT_HISTORY: ret = pmbus_write_word_data(client, page, MAX34446_MFR_POUT_PEAK, 0);
From: Hans de Goede hdegoede@redhat.com
commit 93df48d37c3f03886d84831992926333e7810640 upstream.
uvc_ctrl_add_info() calls uvc_ctrl_get_flags() which will override the fixed-up flags set by uvc_ctrl_fixup_xu_info().
uvc_ctrl_init_xu_ctrl() already calls uvc_ctrl_get_flags() before calling uvc_ctrl_add_info(), so the uvc_ctrl_get_flags() call in uvc_ctrl_add_info() is not necessary for xu ctrls.
This commit moves the uvc_ctrl_get_flags() call for normal controls from uvc_ctrl_add_info() to uvc_ctrl_init_ctrl(), so that we no longer call uvc_ctrl_get_flags() twice for xu controls and so that we no longer override the fixed-up flags set by uvc_ctrl_fixup_xu_info().
This fixes the xu motor controls not working properly on a Logitech 046d:08cc, and presumably also on the other Logitech models which have a quirk for this in the uvc_ctrl_fixup_xu_info() function.
Cc: stable@vger.kernel.org Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Laurent Pinchart laurent.pinchart@ideasonboard.com Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/usb/uvc/uvc_ctrl.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
--- a/drivers/media/usb/uvc/uvc_ctrl.c +++ b/drivers/media/usb/uvc/uvc_ctrl.c @@ -2033,13 +2033,6 @@ static int uvc_ctrl_add_info(struct uvc_ goto done; }
- /* - * Retrieve control flags from the device. Ignore errors and work with - * default flag values from the uvc_ctrl array when the device doesn't - * properly implement GET_INFO on standard controls. - */ - uvc_ctrl_get_flags(dev, ctrl, &ctrl->info); - ctrl->initialized = 1;
uvc_trace(UVC_TRACE_CONTROL, "Added control %pUl/%u to device %s " @@ -2262,6 +2255,13 @@ static void uvc_ctrl_init_ctrl(struct uv if (uvc_entity_match_guid(ctrl->entity, info->entity) && ctrl->index == info->index) { uvc_ctrl_add_info(dev, ctrl, info); + /* + * Retrieve control flags from the device. Ignore errors + * and work with default flag values from the uvc_ctrl + * array when the device doesn't properly implement + * GET_INFO on standard controls. + */ + uvc_ctrl_get_flags(dev, ctrl, &ctrl->info); break; } }
From: Jan Kara jack@suse.cz
commit 6dbf7bb555981fb5faf7b691e8f6169fc2b2e63b upstream.
If block_write_full_page() is called for a page that is beyond current inode size, it will truncate page buffers for the page and return 0. This logic has been added in 2.5.62 in commit 81eb69062588 ("fix ext3 BUG due to race with truncate") in history.git tree to fix a problem with ext3 in data=ordered mode. This particular problem doesn't exist anymore because ext3 is long gone and ext4 handles ordered data differently. Also normally buffers are invalidated by truncate code and there's no need to specially handle this in ->writepage() code.
This invalidation of page buffers in block_write_full_page() is causing issues for filesystems (e.g. ext4 or ocfs2) when the block device is shrunk under the filesystem's hands and metadata buffers get discarded while being tracked by the journalling layer. Although this is obviously "not supported", it can cause kernel crashes like:
[ 7986.689400] BUG: unable to handle kernel NULL pointer dereference at +0000000000000008 [ 7986.697197] PGD 0 P4D 0 [ 7986.699724] Oops: 0002 [#1] SMP PTI [ 7986.703200] CPU: 4 PID: 203778 Comm: jbd2/dm-3-8 Kdump: loaded Tainted: G +O --------- - - 4.18.0-147.5.0.5.h126.eulerosv2r9.x86_64 #1 [ 7986.716438] Hardware name: Huawei RH2288H V3/BC11HGSA0, BIOS 1.57 08/11/2015 [ 7986.723462] RIP: 0010:jbd2_journal_grab_journal_head+0x1b/0x40 [jbd2] ... [ 7986.810150] Call Trace: [ 7986.812595] __jbd2_journal_insert_checkpoint+0x23/0x70 [jbd2] [ 7986.818408] jbd2_journal_commit_transaction+0x155f/0x1b60 [jbd2] [ 7986.836467] kjournald2+0xbd/0x270 [jbd2]
which is not great. The crash happens because bh->b_private is suddenly NULL although the BH_JBD flag is still set (this is because block_invalidatepage() cleared the BH_Mapped flag and a subsequent bh lookup found a buffer without BH_Mapped set and called init_page_buffers(), which has rewritten bh->b_private). So just remove the invalidation in block_write_full_page().
Note that the buffer cache invalidation when block device changes size is already careful to avoid similar problems by using invalidate_mapping_pages() which skips busy buffers so it was only this odd block_write_full_page() behavior that could tear down bdev buffers under filesystem's hands.
Reported-by: Ye Bin yebin10@huawei.com Signed-off-by: Jan Kara jack@suse.cz Reviewed-by: Christoph Hellwig hch@lst.de CC: stable@vger.kernel.org Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/buffer.c | 16 ---------------- 1 file changed, 16 deletions(-)
--- a/fs/buffer.c +++ b/fs/buffer.c @@ -2771,16 +2771,6 @@ int nobh_writepage(struct page *page, ge /* Is the page fully outside i_size? (truncate in progress) */ offset = i_size & (PAGE_SIZE-1); if (page->index >= end_index+1 || !offset) { - /* - * The page may have dirty, unmapped buffers. For example, - * they may have been added in ext3_writepage(). Make them - * freeable here, so the page does not leak. - */ -#if 0 - /* Not really sure about this - do we need this ? */ - if (page->mapping->a_ops->invalidatepage) - page->mapping->a_ops->invalidatepage(page, offset); -#endif unlock_page(page); return 0; /* don't care */ } @@ -2975,12 +2965,6 @@ int block_write_full_page(struct page *p /* Is the page fully outside i_size? (truncate in progress) */ offset = i_size & (PAGE_SIZE-1); if (page->index >= end_index+1 || !offset) { - /* - * The page may have dirty, unmapped buffers. For example, - * they may have been added in ext3_writepage(). Make them - * freeable here, so the page does not leak. - */ - do_invalidatepage(page, 0, PAGE_SIZE); unlock_page(page); return 0; /* don't care */ }
From: Hanjun Guo guohanjun@huawei.com
commit 9a2e849fb6de471b82d19989a7944d3b7671793c upstream.
config_item_put() should be called in the drop_item callback to decrement the refcount of the config item.
Fixes: 772bf1e2878ec ("ACPI: configfs: Unload SSDT on configfs entry removal") Signed-off-by: Hanjun Guo guohanjun@huawei.com [ rjw: Subject edit ] Cc: 4.13+ stable@vger.kernel.org # 4.13+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/acpi/acpi_configfs.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/acpi/acpi_configfs.c +++ b/drivers/acpi/acpi_configfs.c @@ -228,6 +228,7 @@ static void acpi_table_drop_item(struct
ACPI_INFO(("Host-directed Dynamic ACPI Table Unload")); acpi_unload_table(table->index); + config_item_put(cfg); }
static struct configfs_group_operations acpi_table_group_ops = {
From: Ashish Sangwan ashishsangwan2@gmail.com
commit 247db73560bc3e5aef6db50c443c3c0db115bc93 upstream.
We are generating incorrect path in case of rename retry because we are restarting from wrong dentry. We should restart from the dentry which was received in the call to nfs_path.
CC: stable@vger.kernel.org Signed-off-by: Ashish Sangwan ashishsangwan2@gmail.com Signed-off-by: Anna Schumaker Anna.Schumaker@Netapp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfs/namespace.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-)
--- a/fs/nfs/namespace.c +++ b/fs/nfs/namespace.c @@ -32,9 +32,9 @@ int nfs_mountpoint_expiry_timeout = 500 /* * nfs_path - reconstruct the path given an arbitrary dentry * @base - used to return pointer to the end of devname part of path - * @dentry - pointer to dentry + * @dentry_in - pointer to dentry * @buffer - result buffer - * @buflen - length of buffer + * @buflen_in - length of buffer * @flags - options (see below) * * Helper function for constructing the server pathname @@ -49,15 +49,19 @@ int nfs_mountpoint_expiry_timeout = 500 * the original device (export) name * (if unset, the original name is returned verbatim) */ -char *nfs_path(char **p, struct dentry *dentry, char *buffer, ssize_t buflen, - unsigned flags) +char *nfs_path(char **p, struct dentry *dentry_in, char *buffer, + ssize_t buflen_in, unsigned flags) { char *end; int namelen; unsigned seq; const char *base; + struct dentry *dentry; + ssize_t buflen;
rename_retry: + buflen = buflen_in; + dentry = dentry_in; end = buffer+buflen; *--end = '\0'; buflen--;
From: dmitry.torokhov@gmail.com dmitry.torokhov@gmail.com
commit 21988a8e51479ceffe7b0568b170effabb708dfe upstream.
The original intent of 84d3f6b76447 was to delay evaluating the lid state until all drivers have been loaded, with the input device being opened from userspace serving as a signal for this condition. Let's ensure that state updates happen even if userspace has closed (or, in the future, inhibited) the input device.
Note that if we go through suspend/resume cycle we assume the system has been fully initialized even if LID input device has not been opened yet.
This has the side effect of fixing access to input->users outside of input->mutex protection, by eliminating said accesses and using a driver-private flag instead.
Fixes: 84d3f6b76447 ("ACPI / button: Delay acpi_lid_initialize_state() until first user space open") Signed-off-by: Dmitry Torokhov dmitry.torokhov@gmail.com Reviewed-by: Hans de Goede hdegoede@redhat.com Cc: 4.15+ stable@vger.kernel.org # 4.15+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/acpi/button.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
--- a/drivers/acpi/button.c +++ b/drivers/acpi/button.c @@ -153,6 +153,7 @@ struct acpi_button { int last_state; ktime_t last_time; bool suspended; + bool lid_state_initialized; };
static struct acpi_device *lid_device; @@ -383,6 +384,8 @@ static int acpi_lid_update_state(struct
static void acpi_lid_initialize_state(struct acpi_device *device) { + struct acpi_button *button = acpi_driver_data(device); + switch (lid_init_state) { case ACPI_BUTTON_LID_INIT_OPEN: (void)acpi_lid_notify_state(device, 1); @@ -394,13 +397,14 @@ static void acpi_lid_initialize_state(st default: break; } + + button->lid_state_initialized = true; }
static void acpi_button_notify(struct acpi_device *device, u32 event) { struct acpi_button *button = acpi_driver_data(device); struct input_dev *input; - int users;
switch (event) { case ACPI_FIXED_HARDWARE_EVENT: @@ -409,10 +413,7 @@ static void acpi_button_notify(struct ac case ACPI_BUTTON_NOTIFY_STATUS: input = button->input; if (button->type == ACPI_BUTTON_TYPE_LID) { - mutex_lock(&button->input->mutex); - users = button->input->users; - mutex_unlock(&button->input->mutex); - if (users) + if (button->lid_state_initialized) acpi_lid_update_state(device, true); } else { int keycode; @@ -457,7 +458,7 @@ static int acpi_button_resume(struct dev struct acpi_button *button = acpi_driver_data(device);
button->suspended = false; - if (button->type == ACPI_BUTTON_TYPE_LID && button->input->users) { + if (button->type == ACPI_BUTTON_TYPE_LID) { button->last_state = !!acpi_lid_evaluate_state(device); button->last_time = ktime_get(); acpi_lid_initialize_state(device);
From: Ben Hutchings ben@decadent.org.uk
commit 7cecb47f55e00282f972a1e0b09136c8cd938221 upstream.
extlog_init() uses rdmsrl() to read an MSR, which on older CPUs provokes an error message at boot:
unchecked MSR access error: RDMSR from 0x179 at rIP: 0xcd047307 (native_read_msr+0x7/0x40)
Use rdmsrl_safe() instead, and return -ENODEV if it fails.
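A short sketch of the probing pattern the fix uses: rdmsrl_safe() returns a non-zero value instead of provoking the unchecked-access warning when the MSR cannot be read:

/* Illustrative sketch only; the function name is hypothetical. */
static int __init example_probe_mcg_cap(u64 *cap)
{
        /* Fails gracefully on CPUs that do not implement MSR_IA32_MCG_CAP. */
        if (rdmsrl_safe(MSR_IA32_MCG_CAP, cap))
                return -ENODEV;

        return 0;
}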
Reported-by: jim@photojim.ca Cc: All applicable stable@vger.kernel.org Signed-off-by: Ben Hutchings ben@decadent.org.uk Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/acpi/acpi_extlog.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/acpi/acpi_extlog.c +++ b/drivers/acpi/acpi_extlog.c @@ -222,9 +222,9 @@ static int __init extlog_init(void) u64 cap; int rc;
- rdmsrl(MSR_IA32_MCG_CAP, cap); - - if (!(cap & MCG_ELOG_P) || !extlog_get_l1addr()) + if (rdmsrl_safe(MSR_IA32_MCG_CAP, &cap) || + !(cap & MCG_ELOG_P) || + !extlog_get_l1addr()) return -ENODEV;
rc = -EINVAL;
From: Alex Hung alex.hung@canonical.com
commit b226faab4e7890bbbccdf794e8b94276414f9058 upstream.
The default backlight interface is AMD's radeon_bl0, which does not work on this system, so use the ACPI backlight interface on it instead.
BugLink: https://bugs.launchpad.net/bugs/1894667 Cc: All applicable stable@vger.kernel.org Signed-off-by: Alex Hung alex.hung@canonical.com [ rjw: Changelog edits ] Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/acpi/video_detect.c | 9 +++++++++ 1 file changed, 9 insertions(+)
--- a/drivers/acpi/video_detect.c +++ b/drivers/acpi/video_detect.c @@ -282,6 +282,15 @@ static const struct dmi_system_id video_ DMI_MATCH(DMI_PRODUCT_NAME, "530U4E/540U4E"), }, }, + /* https://bugs.launchpad.net/bugs/1894667 */ + { + .callback = video_detect_force_video, + .ident = "HP 635 Notebook", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), + DMI_MATCH(DMI_PRODUCT_NAME, "HP 635 Notebook PC"), + }, + },
/* Non win8 machines which need native backlight nevertheless */ {
From: Jamie Iles jamie@nuviainc.com
commit 0fada277147ffc6d694aa32162f51198d4f10d94 upstream.
If ACPI is disabled then loading the acpi_dbg module will result in the following splat when lock debugging is enabled.
DEBUG_LOCKS_WARN_ON(lock->magic != lock) WARNING: CPU: 0 PID: 1 at kernel/locking/mutex.c:938 __mutex_lock+0xa10/0x1290 Kernel panic - not syncing: panic_on_warn set ... CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.9.0-rc8+ #103 Hardware name: linux,dummy-virt (DT) Call trace: dump_backtrace+0x0/0x4d8 show_stack+0x34/0x48 dump_stack+0x174/0x1f8 panic+0x360/0x7a0 __warn+0x244/0x2ec report_bug+0x240/0x398 bug_handler+0x50/0xc0 call_break_hook+0x160/0x1d8 brk_handler+0x30/0xc0 do_debug_exception+0x184/0x340 el1_dbg+0x48/0xb0 el1_sync_handler+0x170/0x1c8 el1_sync+0x80/0x100 __mutex_lock+0xa10/0x1290 mutex_lock_nested+0x6c/0xc0 acpi_register_debugger+0x40/0x88 acpi_aml_init+0xc4/0x114 do_one_initcall+0x24c/0xb10 kernel_init_freeable+0x690/0x728 kernel_init+0x20/0x1e8 ret_from_fork+0x10/0x18
This is because acpi_debugger.lock has not been initialized as acpi_debugger_init() is not called when ACPI is disabled. Fail module loading to avoid this and any subsequent problems that might arise by trying to debug AML when ACPI is disabled.
Fixes: 8cfb0cdf07e2 ("ACPI / debugger: Add IO interface to access debugger functionalities") Reviewed-by: Hanjun Guo guohanjun@huawei.com Signed-off-by: Jamie Iles jamie@nuviainc.com Cc: 4.10+ stable@vger.kernel.org # 4.10+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/acpi/acpi_dbg.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/acpi/acpi_dbg.c +++ b/drivers/acpi/acpi_dbg.c @@ -749,6 +749,9 @@ static int __init acpi_aml_init(void) { int ret;
+ if (acpi_disabled) + return -ENODEV; + /* Initialize AML IO interface */ mutex_init(&acpi_aml_io.lock); init_waitqueue_head(&acpi_aml_io.wait);
From: Lukas Wunner lukas@wunner.de
commit c6e331312ebfb52b7186e5d82d517d68b4d2f2d8 upstream.
Recent laptops with dual AMD GPUs fail to suspend the discrete GPU, thus causing lockups on system sleep and high power consumption at runtime. The discrete GPU would normally be suspended to D3cold by turning off ACPI _PR3 Power Resources of the Root Port above the GPU.
However on affected systems, the Root Port is hotplug-capable and pci_bridge_d3_possible() only allows hotplug ports to go to D3 if they belong to a Thunderbolt device or if the Root Port possesses a "HotPlugSupportInD3" ACPI property. Neither is the case on affected laptops. The reason for whitelisting only specific, known to work hotplug ports for D3 is that there have been reports of SkyLake Xeon-SP systems raising Hardware Error NMIs upon suspending their hotplug ports: https://lore.kernel.org/linux-pci/20170503180426.GA4058@otc-nc-03/
But if a hotplug port is power manageable by ACPI (as can be detected through presence of Power Resources and corresponding _PS0 and _PS3 methods) then it ought to be safe to suspend it to D3. To this end, amend acpi_pci_bridge_d3() to whitelist such ports for D3.
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1222 Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1252 Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1304 Reported-and-tested-by: Arthur Borsboom arthurborsboom@gmail.com Reported-and-tested-by: matoro matoro@airmail.cc Reported-by: Aaron Zakhrov aaron.zakhrov@gmail.com Reported-by: Michal Rostecki mrostecki@suse.com Reported-by: Shai Coleman git@shaicoleman.com Signed-off-by: Lukas Wunner lukas@wunner.de Acked-by: Alex Deucher alexander.deucher@amd.com Cc: 5.4+ stable@vger.kernel.org # 5.4+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/pci-acpi.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
--- a/drivers/pci/pci-acpi.c +++ b/drivers/pci/pci-acpi.c @@ -944,6 +944,16 @@ static bool acpi_pci_bridge_d3(struct pc if (!dev->is_hotplug_bridge) return false;
+ /* Assume D3 support if the bridge is power-manageable by ACPI. */ + adev = ACPI_COMPANION(&dev->dev); + if (!adev && !pci_dev_is_added(dev)) { + adev = acpi_pci_find_companion(&dev->dev); + ACPI_COMPANION_SET(&dev->dev, adev); + } + + if (adev && acpi_device_power_manageable(adev)) + return true; + /* * Look for a special _DSD property for the root port and if it * is set we know the hierarchy behind it supports D3 just fine.
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
commit 5e92442bb4121562231e6daf8a2d1306cb5f8805 upstream.
Commit 607b9df63057 ("ACPI: EC: PM: Avoid flushing EC work when EC GPE is inactive") has been reported to cause some power button wakeup events to be missed on some systems, so modify acpi_ec_dispatch_gpe() to call acpi_ec_flush_work() unconditionally to effectively reverse the changes made by that commit.
Also note that the problem which prompted commit 607b9df63057 is not reproducible any more on the affected machine.
Fixes: 607b9df63057 ("ACPI: EC: PM: Avoid flushing EC work when EC GPE is inactive") Reported-by: Raymond Tan raymond.tan@intel.com Cc: 5.4+ stable@vger.kernel.org # 5.4+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/acpi/ec.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
--- a/drivers/acpi/ec.c +++ b/drivers/acpi/ec.c @@ -2019,12 +2019,11 @@ bool acpi_ec_dispatch_gpe(void) * to allow the caller to process events properly after that. */ ret = acpi_dispatch_gpe(NULL, first_ec->gpe); - if (ret == ACPI_INTERRUPT_HANDLED) { + if (ret == ACPI_INTERRUPT_HANDLED) pm_pr_dbg("ACPI EC GPE dispatched\n");
- /* Flush the event and query workqueues. */ - acpi_ec_flush_work(); - } + /* Flush the event and query workqueues. */ + acpi_ec_flush_work();
return false; }
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
commit e0e9ce390d7bc6a705653d4a8aa4ea92c9a65e53 upstream.
It turns out that in some cases there are EC events to flush in acpi_ec_dispatch_gpe() even though the ec_no_wakeup kernel parameter is set and the EC GPE is disabled while sleeping, so drop the ec_no_wakeup check that prevents those events from being processed from acpi_ec_dispatch_gpe().
Reported-by: Todd Brandt todd.e.brandt@linux.intel.com Cc: 5.4+ stable@vger.kernel.org # 5.4+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/acpi/ec.c | 3 --- 1 file changed, 3 deletions(-)
--- a/drivers/acpi/ec.c +++ b/drivers/acpi/ec.c @@ -2011,9 +2011,6 @@ bool acpi_ec_dispatch_gpe(void) if (acpi_any_gpe_status_set(first_ec->gpe)) return true;
- if (ec_no_wakeup) - return false; - /* * Dispatch the EC GPE in-band, but do not report wakeup in any case * to allow the caller to process events properly after that.
From: Wei Huang wei.huang2@amd.com
commit 5368512abe08a28525d9b24abbfc2a72493e8dba upstream.
acpi-cpufreq has an old quirk that overrides the _PSD table supplied by the BIOS on AMD CPUs. However, the _PSD table of new AMD CPUs (Family 19h+) now accurately reports the P-state dependency of CPU cores. Hence this quirk needs to be fixed in order to support the new CPUs' frequency control.
Fixes: acd316248205 ("acpi-cpufreq: Add quirk to disable _PSD usage on all AMD CPUs") Signed-off-by: Wei Huang wei.huang2@amd.com [ rjw: Subject edit ] Cc: 3.10+ stable@vger.kernel.org # 3.10+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/cpufreq/acpi-cpufreq.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/cpufreq/acpi-cpufreq.c +++ b/drivers/cpufreq/acpi-cpufreq.c @@ -691,7 +691,8 @@ static int acpi_cpufreq_cpu_init(struct cpumask_copy(policy->cpus, topology_core_cpumask(cpu)); }
- if (check_amd_hwpstate_cpu(cpu) && !acpi_pstate_strict) { + if (check_amd_hwpstate_cpu(cpu) && boot_cpu_data.x86 < 0x19 && + !acpi_pstate_strict) { cpumask_clear(policy->cpus); cpumask_set_cpu(cpu, policy->cpus); cpumask_copy(data->freqdomain_cpus,
From: Jens Axboe axboe@kernel.dk
commit a8b595b22d31f83b715511f59012f152a269d83b upstream.
There was an assumption that kthread_create_on_node() would properly set NUMA affinities in terms of CPUs allowed, but it doesn't. Make sure we do this when creating an io-wq context on NUMA.
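A sketch of the create-and-bind pattern the one-line fix applies; worker_fn and the thread name are illustrative. The node argument of kthread_create_on_node() does not constrain which CPUs the worker may run on, so the mask is set explicitly:

/* Illustrative sketch only; names are hypothetical. */
static int example_create_numa_worker(int (*worker_fn)(void *), void *data,
                                      int node)
{
        struct task_struct *tsk;

        tsk = kthread_create_on_node(worker_fn, data, node,
                                     "example-worker/%d", node);
        if (IS_ERR(tsk))
                return PTR_ERR(tsk);

        /* The node argument above does not restrict the allowed CPUs. */
        kthread_bind_mask(tsk, cpumask_of_node(node));
        wake_up_process(tsk);

        return 0;
}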
Cc: stable@vger.kernel.org Stefan Metzmacher metze@samba.org Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/io-wq.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/io-wq.c +++ b/fs/io-wq.c @@ -654,6 +654,7 @@ static bool create_io_worker(struct io_w kfree(worker); return false; } + kthread_bind_mask(worker->task, cpumask_of_node(wqe->node));
raw_spin_lock_irq(&wqe->lock); hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
From: Martin Fuzzey martin.fuzzey@flowbird.group
commit c9723750a699c3bd465493ac2be8992b72ccb105 upstream.
On my platform (i.MX53) bus access sometimes fails with w1_search: max_slave_count 64 reached, will continue next search.
The reason is the use of jiffies to implement a 200us timeout in mxc_w1_ds2_touch_bit(). On some platforms the jiffies timer resolution is insufficient for this.
Fix by replacing jiffies by ktime_get().
For consistency apply the same change to the other use of jiffies in mxc_w1_ds2_reset_bus().
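A minimal sketch of the polling pattern after the conversion (the helper is hypothetical): ktime_get() provides microsecond-resolution deadlines regardless of the jiffies tick rate:

/* Illustrative helper, not taken from the driver. */
static bool example_poll_until(bool (*done)(void *data), void *data,
                               unsigned int timeout_us)
{
        ktime_t deadline = ktime_add_us(ktime_get(), timeout_us);

        do {
                if (done(data))
                        return true;
        } while (ktime_before(ktime_get(), deadline));

        return false;
}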
Fixes: f80b2581a706 ("w1: mxc_w1: Optimize mxc_w1_ds2_touch_bit()") Cc: stable stable@vger.kernel.org Signed-off-by: Martin Fuzzey martin.fuzzey@flowbird.group Link: https://lore.kernel.org/r/1601455030-6607-1-git-send-email-martin.fuzzey@flo... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/w1/masters/mxc_w1.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
--- a/drivers/w1/masters/mxc_w1.c +++ b/drivers/w1/masters/mxc_w1.c @@ -7,7 +7,7 @@ #include <linux/clk.h> #include <linux/delay.h> #include <linux/io.h> -#include <linux/jiffies.h> +#include <linux/ktime.h> #include <linux/module.h> #include <linux/mod_devicetable.h> #include <linux/platform_device.h> @@ -40,12 +40,12 @@ struct mxc_w1_device { static u8 mxc_w1_ds2_reset_bus(void *data) { struct mxc_w1_device *dev = data; - unsigned long timeout; + ktime_t timeout;
writeb(MXC_W1_CONTROL_RPP, dev->regs + MXC_W1_CONTROL);
/* Wait for reset sequence 511+512us, use 1500us for sure */ - timeout = jiffies + usecs_to_jiffies(1500); + timeout = ktime_add_us(ktime_get(), 1500);
udelay(511 + 512);
@@ -55,7 +55,7 @@ static u8 mxc_w1_ds2_reset_bus(void *dat /* PST bit is valid after the RPP bit is self-cleared */ if (!(ctrl & MXC_W1_CONTROL_RPP)) return !(ctrl & MXC_W1_CONTROL_PST); - } while (time_is_after_jiffies(timeout)); + } while (ktime_before(ktime_get(), timeout));
return 1; } @@ -68,12 +68,12 @@ static u8 mxc_w1_ds2_reset_bus(void *dat static u8 mxc_w1_ds2_touch_bit(void *data, u8 bit) { struct mxc_w1_device *dev = data; - unsigned long timeout; + ktime_t timeout;
writeb(MXC_W1_CONTROL_WR(bit), dev->regs + MXC_W1_CONTROL);
/* Wait for read/write bit (60us, Max 120us), use 200us for sure */ - timeout = jiffies + usecs_to_jiffies(200); + timeout = ktime_add_us(ktime_get(), 200);
udelay(60);
@@ -83,7 +83,7 @@ static u8 mxc_w1_ds2_touch_bit(void *dat /* RDST bit is valid after the WR1/RD bit is self-cleared */ if (!(ctrl & MXC_W1_CONTROL_WR(bit))) return !!(ctrl & MXC_W1_CONTROL_RDST); - } while (time_is_after_jiffies(timeout)); + } while (ktime_before(ktime_get(), timeout));
return 0; }
From: Kees Cook keescook@chromium.org
commit c307459b9d1fcb8bbf3ea5a4162979532322ef77 upstream.
FIRMWARE_PREALLOC_BUFFER is a "how", not a "what", and confuses the LSMs that are interested in filtering between types of things. The "how" should be an internal detail made uninteresting to the LSMs.
Fixes: a098ecd2fa7d ("firmware: support loading into a pre-allocated buffer") Fixes: fd90bc559bfb ("ima: based on policy verify firmware signatures (pre-allocated buffer)") Fixes: 4f0496d8ffa3 ("ima: based on policy warn about loading firmware (pre-allocated buffer)") Signed-off-by: Kees Cook keescook@chromium.org Reviewed-by: Mimi Zohar zohar@linux.ibm.com Reviewed-by: Luis Chamberlain mcgrof@kernel.org Acked-by: Scott Branden scott.branden@broadcom.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20201002173828.2099543-2-keescook@chromium.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/base/firmware_loader/main.c | 5 ++--- fs/exec.c | 7 ++++--- include/linux/fs.h | 1 - kernel/module.c | 2 +- security/integrity/digsig.c | 2 +- security/integrity/ima/ima_fs.c | 2 +- security/integrity/ima/ima_main.c | 6 ++---- 7 files changed, 11 insertions(+), 14 deletions(-)
--- a/drivers/base/firmware_loader/main.c +++ b/drivers/base/firmware_loader/main.c @@ -470,14 +470,12 @@ fw_get_filesystem_firmware(struct device int i, len; int rc = -ENOENT; char *path; - enum kernel_read_file_id id = READING_FIRMWARE; size_t msize = INT_MAX; void *buffer = NULL;
/* Already populated data member means we're loading into a buffer */ if (!decompress && fw_priv->data) { buffer = fw_priv->data; - id = READING_FIRMWARE_PREALLOC_BUFFER; msize = fw_priv->allocated_size; }
@@ -501,7 +499,8 @@ fw_get_filesystem_firmware(struct device
/* load firmware files from the mount namespace of init */ rc = kernel_read_file_from_path_initns(path, &buffer, - &size, msize, id); + &size, msize, + READING_FIRMWARE); if (rc) { if (rc != -ENOENT) dev_warn(device, "loading %s failed with error %d\n", --- a/fs/exec.c +++ b/fs/exec.c @@ -955,6 +955,7 @@ int kernel_read_file(struct file *file, { loff_t i_size, pos; ssize_t bytes = 0; + void *allocated = NULL; int ret;
if (!S_ISREG(file_inode(file)->i_mode) || max_size < 0) @@ -978,8 +979,8 @@ int kernel_read_file(struct file *file, goto out; }
- if (id != READING_FIRMWARE_PREALLOC_BUFFER) - *buf = vmalloc(i_size); + if (!*buf) + *buf = allocated = vmalloc(i_size); if (!*buf) { ret = -ENOMEM; goto out; @@ -1008,7 +1009,7 @@ int kernel_read_file(struct file *file,
out_free: if (ret < 0) { - if (id != READING_FIRMWARE_PREALLOC_BUFFER) { + if (allocated) { vfree(*buf); *buf = NULL; } --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -2861,7 +2861,6 @@ extern int do_pipe_flags(int *, int); #define __kernel_read_file_id(id) \ id(UNKNOWN, unknown) \ id(FIRMWARE, firmware) \ - id(FIRMWARE_PREALLOC_BUFFER, firmware) \ id(MODULE, kernel-module) \ id(KEXEC_IMAGE, kexec-image) \ id(KEXEC_INITRAMFS, kexec-initramfs) \ --- a/kernel/module.c +++ b/kernel/module.c @@ -4028,7 +4028,7 @@ SYSCALL_DEFINE3(finit_module, int, fd, c { struct load_info info = { }; loff_t size; - void *hdr; + void *hdr = NULL; int err;
err = may_init_module(); --- a/security/integrity/digsig.c +++ b/security/integrity/digsig.c @@ -169,7 +169,7 @@ int __init integrity_add_key(const unsig
int __init integrity_load_x509(const unsigned int id, const char *path) { - void *data; + void *data = NULL; loff_t size; int rc; key_perm_t perm; --- a/security/integrity/ima/ima_fs.c +++ b/security/integrity/ima/ima_fs.c @@ -272,7 +272,7 @@ static const struct file_operations ima_
static ssize_t ima_read_policy(char *path) { - void *data; + void *data = NULL; char *datap; loff_t size; int rc, pathlen = strlen(path); --- a/security/integrity/ima/ima_main.c +++ b/security/integrity/ima/ima_main.c @@ -621,19 +621,17 @@ void ima_post_path_mknod(struct dentry * int ima_read_file(struct file *file, enum kernel_read_file_id read_id) { /* - * READING_FIRMWARE_PREALLOC_BUFFER - * * Do devices using pre-allocated memory run the risk of the * firmware being accessible to the device prior to the completion * of IMA's signature verification any more than when using two - * buffers? + * buffers? It may be desirable to include the buffer address + * in this API and walk all the dma_map_single() mappings to check. */ return 0; }
const int read_idmap[READING_MAX_ID] = { [READING_FIRMWARE] = FIRMWARE_CHECK, - [READING_FIRMWARE_PREALLOC_BUFFER] = FIRMWARE_CHECK, [READING_MODULE] = MODULE_CHECK, [READING_KEXEC_IMAGE] = KEXEC_KERNEL_CHECK, [READING_KEXEC_INITRAMFS] = KEXEC_INITRAMFS_CHECK,
From: Helge Deller deller@gmx.de
commit 2f4843b172c2c0360ee7792ad98025fae7baefde upstream.
The mptscsih_remove() function triggers a kernel oops if the Scsi_Host pointer (ioc->sh) is NULL, as can be seen in this syslog:
ioc0: LSI53C1030 B2: Capabilities={Initiator,Target} Begin: Waiting for root file system ... scsi host2: error handler thread failed to spawn, error = -4 mptspi: ioc0: WARNING - Unable to register controller with SCSI subsystem Backtrace: [<000000001045b7cc>] mptspi_probe+0x248/0x3d0 [mptspi] [<0000000040946470>] pci_device_probe+0x1ac/0x2d8 [<0000000040add668>] really_probe+0x1bc/0x988 [<0000000040ade704>] driver_probe_device+0x160/0x218 [<0000000040adee24>] device_driver_attach+0x160/0x188 [<0000000040adef90>] __driver_attach+0x144/0x320 [<0000000040ad7c78>] bus_for_each_dev+0xd4/0x158 [<0000000040adc138>] driver_attach+0x4c/0x80 [<0000000040adb3ec>] bus_add_driver+0x3e0/0x498 [<0000000040ae0130>] driver_register+0xf4/0x298 [<00000000409450c4>] __pci_register_driver+0x78/0xa8 [<000000000007d248>] mptspi_init+0x18c/0x1c4 [mptspi]
This patch adds the necessary NULL-pointer checks. Successfully tested on a HP C8000 parisc workstation with buggy SCSI drives.
Link: https://lore.kernel.org/r/20201022090005.GA9000@ls3530.fritz.box Cc: stable@vger.kernel.org Signed-off-by: Helge Deller deller@gmx.de Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/message/fusion/mptscsih.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-)
--- a/drivers/message/fusion/mptscsih.c +++ b/drivers/message/fusion/mptscsih.c @@ -1176,8 +1176,10 @@ mptscsih_remove(struct pci_dev *pdev) MPT_SCSI_HOST *hd; int sz1;
- if((hd = shost_priv(host)) == NULL) - return; + if (host == NULL) + hd = NULL; + else + hd = shost_priv(host);
mptscsih_shutdown(pdev);
@@ -1193,14 +1195,15 @@ mptscsih_remove(struct pci_dev *pdev) "Free'd ScsiLookup (%d) memory\n", ioc->name, sz1));
- kfree(hd->info_kbuf); + if (hd) + kfree(hd->info_kbuf);
/* NULL the Scsi_Host pointer */ ioc->sh = NULL;
- scsi_host_put(host); - + if (host) + scsi_host_put(host); mpt_detach(pdev);
}
From: Arun Easi aeasi@marvell.com
commit 7a6cdbd5e87515ebf6231b762ad903c7cff87b9c upstream.
When printing the message:
"MPI Heartbeat stop. MPI reset is not needed.."
..the wrong register was checked, leading to always printing that an MPI reset is not needed, even when it is needed. Fix the MPI reset message.
Link: https://lore.kernel.org/r/20200929102152.32278-4-njavali@marvell.com Fixes: cbb01c2f2f63 ("scsi: qla2xxx: Fix MPI failure AEN (8200) handling") Cc: stable@vger.kernel.org Reviewed-by: Himanshu Madhani himanshu.madhani@oracle.com Signed-off-by: Arun Easi aeasi@marvell.com Signed-off-by: Nilesh Javali njavali@marvell.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/qla2xxx/qla_isr.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/scsi/qla2xxx/qla_isr.c +++ b/drivers/scsi/qla2xxx/qla_isr.c @@ -767,7 +767,7 @@ qla27xx_handle_8200_aen(scsi_qla_host_t ql_log(ql_log_warn, vha, 0x02f0, "MPI Heartbeat stop. MPI reset is%s needed. " "MB0[%xh] MB1[%xh] MB2[%xh] MB3[%xh]\n", - mb[0] & BIT_8 ? "" : " not", + mb[1] & BIT_8 ? "" : " not", mb[0], mb[1], mb[2], mb[3]);
if ((mb[1] & BIT_8) == 0)
From: Arun Easi aeasi@marvell.com
commit 3e6efab865ac943f4ec43913eb665695737112b0 upstream.
Normally, the MPI firmware is reset when an MPI dump is collected. If an unsaved MPI dump exists in the driver, though, an alternate mechanism is used. This mechanism, which was not fully correct, is not recommended; instead, an MPI dump template walk is suggested to perform the MPI reset.
To allow for the MPI dump template walk, extra space is reserved in the MPI dump buffer which gets used only when there is already an MPI dump in place.
Link: https://lore.kernel.org/r/20200929102152.32278-5-njavali@marvell.com Fixes: cbb01c2f2f63 ("scsi: qla2xxx: Fix MPI failure AEN (8200) handling") Cc: stable@vger.kernel.org Reviewed-by: Himanshu Madhani himanshu.madhani@oracle.com Signed-off-by: Arun Easi aeasi@marvell.com Signed-off-by: Nilesh Javali njavali@marvell.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/qla2xxx/qla_attr.c | 10 ++++++-- drivers/scsi/qla2xxx/qla_gbl.h | 1 drivers/scsi/qla2xxx/qla_init.c | 2 + drivers/scsi/qla2xxx/qla_tmpl.c | 49 ++++++++++------------------------------ 4 files changed, 23 insertions(+), 39 deletions(-)
--- a/drivers/scsi/qla2xxx/qla_attr.c +++ b/drivers/scsi/qla2xxx/qla_attr.c @@ -157,6 +157,14 @@ qla2x00_sysfs_write_fw_dump(struct file vha->host_no); } break; + case 10: + if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) { + ql_log(ql_log_info, vha, 0x70e9, + "Issuing MPI firmware dump on host#%ld.\n", + vha->host_no); + ha->isp_ops->mpi_fw_dump(vha, 0); + } + break; } return count; } @@ -744,8 +752,6 @@ qla2x00_sysfs_write_reset(struct file *f qla83xx_idc_audit(vha, IDC_AUDIT_TIMESTAMP); qla83xx_idc_unlock(vha, 0); break; - } else if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) { - qla27xx_reset_mpi(vha); } else { /* Make sure FC side is not in reset */ WARN_ON_ONCE(qla2x00_wait_for_hba_online(vha) != --- a/drivers/scsi/qla2xxx/qla_gbl.h +++ b/drivers/scsi/qla2xxx/qla_gbl.h @@ -938,6 +938,5 @@ extern void qla24xx_process_purex_list(s
/* nvme.c */ void qla_nvme_unregister_remote_port(struct fc_port *fcport); -void qla27xx_reset_mpi(scsi_qla_host_t *vha); void qla_handle_els_plogi_done(scsi_qla_host_t *vha, struct event_arg *ea); #endif /* _QLA_GBL_H */ --- a/drivers/scsi/qla2xxx/qla_init.c +++ b/drivers/scsi/qla2xxx/qla_init.c @@ -3298,6 +3298,8 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *v j, fwdt->dump_size); dump_size += fwdt->dump_size; } + /* Add space for spare MPI fw dump. */ + dump_size += ha->fwdt[1].dump_size; } else { req_q_size = req->length * sizeof(request_t); rsp_q_size = rsp->length * sizeof(response_t); --- a/drivers/scsi/qla2xxx/qla_tmpl.c +++ b/drivers/scsi/qla2xxx/qla_tmpl.c @@ -12,33 +12,6 @@ #define IOBASE(vha) IOBAR(ISPREG(vha)) #define INVALID_ENTRY ((struct qla27xx_fwdt_entry *)0xffffffffffffffffUL)
-/* hardware_lock assumed held. */ -static void -qla27xx_write_remote_reg(struct scsi_qla_host *vha, - u32 addr, u32 data) -{ - struct device_reg_24xx __iomem *reg = &vha->hw->iobase->isp24; - - ql_dbg(ql_dbg_misc, vha, 0xd300, - "%s: addr/data = %xh/%xh\n", __func__, addr, data); - - wrt_reg_dword(®->iobase_addr, 0x40); - wrt_reg_dword(®->iobase_c4, data); - wrt_reg_dword(®->iobase_window, addr); -} - -void -qla27xx_reset_mpi(scsi_qla_host_t *vha) -{ - ql_dbg(ql_dbg_misc + ql_dbg_verbose, vha, 0xd301, - "Entered %s.\n", __func__); - - qla27xx_write_remote_reg(vha, 0x104050, 0x40004); - qla27xx_write_remote_reg(vha, 0x10405c, 0x4); - - vha->hw->stat.num_mpi_reset++; -} - static inline void qla27xx_insert16(uint16_t value, void *buf, ulong *len) { @@ -1028,7 +1001,6 @@ void qla27xx_mpi_fwdump(scsi_qla_host_t *vha, int hardware_locked) { ulong flags = 0; - bool need_mpi_reset = true;
#ifndef __CHECKER__ if (!hardware_locked) @@ -1036,14 +1008,20 @@ qla27xx_mpi_fwdump(scsi_qla_host_t *vha, #endif if (!vha->hw->mpi_fw_dump) { ql_log(ql_log_warn, vha, 0x02f3, "-> mpi_fwdump no buffer\n"); - } else if (vha->hw->mpi_fw_dumped) { - ql_log(ql_log_warn, vha, 0x02f4, - "-> MPI firmware already dumped (%p) -- ignoring request\n", - vha->hw->mpi_fw_dump); } else { struct fwdt *fwdt = &vha->hw->fwdt[1]; ulong len; void *buf = vha->hw->mpi_fw_dump; + bool walk_template_only = false; + + if (vha->hw->mpi_fw_dumped) { + /* Use the spare area for any further dumps. */ + buf += fwdt->dump_size; + walk_template_only = true; + ql_log(ql_log_warn, vha, 0x02f4, + "-> MPI firmware already dumped -- dump saving to temporary buffer %p.\n", + buf); + }
ql_log(ql_log_warn, vha, 0x02f5, "-> fwdt1 running...\n"); if (!fwdt->template) { @@ -1058,9 +1036,10 @@ qla27xx_mpi_fwdump(scsi_qla_host_t *vha, ql_log(ql_log_warn, vha, 0x02f7, "-> fwdt1 fwdump residual=%+ld\n", fwdt->dump_size - len); - } else { - need_mpi_reset = false; } + vha->hw->stat.num_mpi_reset++; + if (walk_template_only) + goto bailout;
vha->hw->mpi_fw_dump_len = len; vha->hw->mpi_fw_dumped = 1; @@ -1072,8 +1051,6 @@ qla27xx_mpi_fwdump(scsi_qla_host_t *vha, }
bailout: - if (need_mpi_reset) - qla27xx_reset_mpi(vha); #ifndef __CHECKER__ if (!hardware_locked) spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
From: Quinn Tran qutran@marvell.com
commit 50457dab670f396557e60c07f086358460876353 upstream.
On unload, session cleanup prematurely signalled the driver unload path to advance.
Link: https://lore.kernel.org/r/20200929102152.32278-6-njavali@marvell.com Fixes: 726b85487067 ("qla2xxx: Add framework for async fabric discovery") Cc: stable@vger.kernel.org Reviewed-by: Himanshu Madhani himanshu.madhani@oracle.com Signed-off-by: Quinn Tran qutran@marvell.com Signed-off-by: Nilesh Javali njavali@marvell.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/qla2xxx/qla_target.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
--- a/drivers/scsi/qla2xxx/qla_target.c +++ b/drivers/scsi/qla2xxx/qla_target.c @@ -1229,14 +1229,15 @@ void qlt_schedule_sess_for_deletion(stru case DSC_DELETE_PEND: return; case DSC_DELETED: - if (tgt && tgt->tgt_stop && (tgt->sess_count == 0)) - wake_up_all(&tgt->waitQ); - if (sess->vha->fcport_count == 0) - wake_up_all(&sess->vha->fcport_waitQ); - if (!sess->plogi_link[QLT_PLOGI_LINK_SAME_WWN] && - !sess->plogi_link[QLT_PLOGI_LINK_CONFLICT]) + !sess->plogi_link[QLT_PLOGI_LINK_CONFLICT]) { + if (tgt && tgt->tgt_stop && tgt->sess_count == 0) + wake_up_all(&tgt->waitQ); + + if (sess->vha->fcport_count == 0) + wake_up_all(&sess->vha->fcport_waitQ); return; + } break; case DSC_UPD_FCPORT: /*
From: Xiang Chen chenxiang66@hisilicon.com
commit d12544fb2aa9944b180c35914031a8384ab082c1 upstream.
To support runtime PM for the hisi SAS driver (drivers/scsi/hisi_sas), we add a device link between scsi_device->sdev_gendev (the consumer device) and hisi_hba->dev (the supplier device) with the flags DL_FLAG_PM_RUNTIME | DL_FLAG_RPM_ACTIVE.
After the consumers and the supplier have been runtime suspended, unloading the driver causes a hang.
We found that device_release_driver_internal() was called to release the supplier device (hisi_hba->dev); because the device link was busy, it set the device link state to DL_STATE_SUPPLIER_UNBIND and then called device_release_driver_internal() to release the consumer device (scsi_device->sdev_gendev).
That in turn called pm_runtime_get_sync() to resume the consumer device. Because a consumer-supplier relation existed, the supplier should have been resumed first, but since the link state was already DL_STATE_SUPPLIER_UNBIND the supplier resume was skipped and only the consumer was resumed, which hung (it sends I/Os to resume the scsi_device while the SAS controller is still suspended).
A simplified flow is as follows:
device_release_driver_internal ->  (supplier device)
    if device_links_busy ->
        device_links_unbind_consumers ->
        ...
        WRITE_ONCE(link->status, DL_STATE_SUPPLIER_UNBIND)
        device_release_driver_internal (consumer device)
            pm_runtime_get_sync ->  (consumer device)
            ...
            __rpm_callback ->
                rpm_get_suppliers ->
                    if link->state == DL_STATE_SUPPLIER_UNBIND ->
                        skip the action of resuming the supplier
            ...
            pm_runtime_clean_up_links
            ...
Correct suspend/resume ordering between a supplier device and its consumer devices (resume the supplier device before resuming consumer devices, and suspend consumer devices before suspending the supplier device) should be guaranteed by runtime PM, but the state checks in rpm_get_suppliers() and rpm_put_suppliers() break this rule, so remove them.
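For reference, a minimal sketch of the kind of consumer/supplier link setup described above (the function name and error handling are illustrative, not the actual hisi_sas code):

  #include <linux/device.h>
  #include <linux/pm_runtime.h>

  /* Sketch: bind a consumer to its supplier for runtime PM purposes. */
  static int example_link_consumer(struct device *consumer,
                                   struct device *supplier)
  {
          struct device_link *link;

          link = device_link_add(consumer, supplier,
                                 DL_FLAG_PM_RUNTIME | DL_FLAG_RPM_ACTIVE);
          if (!link)
                  return -ENODEV;

          /*
           * Runtime-resuming the consumer is expected to runtime-resume
           * the supplier first; the removed status checks broke that.
           */
          return pm_runtime_get_sync(consumer);
  }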
Signed-off-by: Xiang Chen chenxiang66@hisilicon.com [ rjw: Subject and changelog edits ] Cc: All applicable stable@vger.kernel.org Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/base/power/runtime.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-)
--- a/drivers/base/power/runtime.c +++ b/drivers/base/power/runtime.c @@ -291,8 +291,7 @@ static int rpm_get_suppliers(struct devi device_links_read_lock_held()) { int retval;
- if (!(link->flags & DL_FLAG_PM_RUNTIME) || - READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND) + if (!(link->flags & DL_FLAG_PM_RUNTIME)) continue;
retval = pm_runtime_get_sync(link->supplier); @@ -312,8 +311,6 @@ static void rpm_put_suppliers(struct dev
list_for_each_entry_rcu(link, &dev->links.suppliers, c_node, device_links_read_lock_held()) { - if (READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND) - continue;
while (refcount_dec_not_one(&link->rpm_active)) pm_runtime_put(link->supplier);
From: Qu Wenruo wqu@suse.com
commit b4c5d8fdfff3e2b6c4fa4a5043e8946dff500f8c upstream.
For the delayed inode facility, qgroup metadata is reserved and later freed.
However we're freeing more bytes than we reserved. In btrfs_delayed_inode_reserve_metadata():
  num_bytes = btrfs_calc_metadata_size(fs_info, 1);
  ...
  ret = btrfs_qgroup_reserve_meta_prealloc(root,
                  fs_info->nodesize, true);
  ...
  if (!ret) {
          node->bytes_reserved = num_bytes;
But in btrfs_delayed_inode_release_metadata():
  if (qgroup_free)
          btrfs_qgroup_free_meta_prealloc(node->root,
                          node->bytes_reserved);
  else
          btrfs_qgroup_convert_reserved_meta(node->root,
                          node->bytes_reserved);
This means we're always releasing more qgroup metadata rsv than we have reserved.
This won't trigger a selftest warning, as the btrfs qgroup metadata rsv has extra protection against cases like quota being enabled half-way.
But we still need to fix this problem anyway.
This patch uses the same num_bytes for the qgroup metadata rsv so that it is handled correctly.
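Condensed, the invariant the fix restores looks like this (a sketch based on the snippets above, not a complete function):

  u64 num_bytes;
  int ret;

  num_bytes = btrfs_calc_metadata_size(fs_info, 1);

  ret = btrfs_qgroup_reserve_meta_prealloc(root, num_bytes, true);
  if (!ret)
          /* released/converted later with exactly this value */
          node->bytes_reserved = num_bytes;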
Fixes: f218ea6c4792 ("btrfs: delayed-inode: Remove wrong qgroup meta reservation calls") CC: stable@vger.kernel.org # 4.19+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Qu Wenruo wqu@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/delayed-inode.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/fs/btrfs/delayed-inode.c +++ b/fs/btrfs/delayed-inode.c @@ -627,8 +627,7 @@ static int btrfs_delayed_inode_reserve_m */ if (!src_rsv || (!trans->bytes_reserved && src_rsv->type != BTRFS_BLOCK_RSV_DELALLOC)) { - ret = btrfs_qgroup_reserve_meta_prealloc(root, - fs_info->nodesize, true); + ret = btrfs_qgroup_reserve_meta_prealloc(root, num_bytes, true); if (ret < 0) return ret; ret = btrfs_block_rsv_add(root, dst_rsv, num_bytes,
From: Anand Jain anand.jain@oracle.com
commit 79dae17d8d44b2d15779e332180080af45df5352 upstream.
Systems booting without an initramfs seem to scan an unusual kind of device path (/dev/root), and at a later time the device is updated to the correct path. We generally print the name and PID of the process scanning the device, but we don't capture the same information if the device path is rescanned with a different pathname.
The current message is too long, so drop the unnecessary UUID and add the process name and PID.
While at it, also update the duplicate device warning to include the process name and PID so the messages are consistent.
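With the change, the two messages take roughly the following shape (the device names, devid, generation and PID below are made-up examples, shown only to illustrate the new format):

  BTRFS info (device sda1): devid 1 device path /dev/root changed to /dev/sda1 scanned by systemd-udevd (408)
  BTRFS warning (device sda1): duplicate device /dev/sdb1 devid 2 generation 15 scanned by mount (1337)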
CC: stable@vger.kernel.org # 4.19+ Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=89721 Signed-off-by: Anand Jain anand.jain@oracle.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/volumes.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-)
--- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -942,16 +942,18 @@ static noinline struct btrfs_device *dev bdput(path_bdev); mutex_unlock(&fs_devices->device_list_mutex); btrfs_warn_in_rcu(device->fs_info, - "duplicate device fsid:devid for %pU:%llu old:%s new:%s", - disk_super->fsid, devid, - rcu_str_deref(device->name), path); + "duplicate device %s devid %llu generation %llu scanned by %s (%d)", + path, devid, found_transid, + current->comm, + task_pid_nr(current)); return ERR_PTR(-EEXIST); } bdput(path_bdev); btrfs_info_in_rcu(device->fs_info, - "device fsid %pU devid %llu moved old:%s new:%s", - disk_super->fsid, devid, - rcu_str_deref(device->name), path); + "devid %llu device path %s changed to %s scanned by %s (%d)", + devid, rcu_str_deref(device->name), + path, current->comm, + task_pid_nr(current)); }
name = rcu_string_strdup(path, GFP_NOFS);
From: Qu Wenruo wqu@suse.com
commit e85fde5162bf1b242cbd6daf7dba0f9b457d592b upstream.
[BUG] When quota is enabled for TEST_DEV, generic/013 sometimes fails like this:
generic/013 14s ... _check_dmesg: something found in dmesg (see xfstests-dev/results//generic/013.dmesg)
And with the following metadata leak:
BTRFS warning (device dm-3): qgroup 0/1370 has unreleased space, type 2 rsv 49152
------------[ cut here ]------------
WARNING: CPU: 2 PID: 47912 at fs/btrfs/disk-io.c:4078 close_ctree+0x1dc/0x323 [btrfs]
Call Trace:
 btrfs_put_super+0x15/0x17 [btrfs]
 generic_shutdown_super+0x72/0x110
 kill_anon_super+0x18/0x30
 btrfs_kill_super+0x17/0x30 [btrfs]
 deactivate_locked_super+0x3b/0xa0
 deactivate_super+0x40/0x50
 cleanup_mnt+0x135/0x190
 __cleanup_mnt+0x12/0x20
 task_work_run+0x64/0xb0
 __prepare_exit_to_usermode+0x1bc/0x1c0
 __syscall_return_slowpath+0x47/0x230
 do_syscall_64+0x64/0xb0
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
---[ end trace a6cfd45ba80e4e06 ]---
BTRFS error (device dm-3): qgroup reserved space leaked
BTRFS info (device dm-3): disk space caching is enabled
BTRFS info (device dm-3): has skinny extents
[CAUSE] The qgroup preallocated meta rsv operations of that offending root are:
  btrfs_delayed_inode_reserve_metadata: rsv_meta_prealloc root=1370 num_bytes=131072
  btrfs_delayed_inode_reserve_metadata: rsv_meta_prealloc root=1370 num_bytes=131072
  btrfs_subvolume_reserve_metadata: rsv_meta_prealloc root=1370 num_bytes=49152
  btrfs_delayed_inode_release_metadata: convert_meta_prealloc root=1370 num_bytes=-131072
  btrfs_delayed_inode_release_metadata: convert_meta_prealloc root=1370 num_bytes=-131072
It's pretty obvious that we reserve a qgroup meta rsv in btrfs_subvolume_reserve_metadata() but don't have corresponding release/convert calls in btrfs_subvolume_release_metadata().
This leads to the leakage.
[FIX] To fix this bug, we should follow what we're doing in btrfs_delalloc_reserve_metadata(), where we reserve qgroup space, and add it to block_rsv->qgroup_rsv_reserved.
And free the qgroup reserved metadata space when releasing the block_rsv.
To do this, we need to change the btrfs_subvolume_release_metadata() to accept btrfs_root, and record the qgroup_to_release number, and call btrfs_qgroup_convert_reserved_meta() for it.
Fixes: 733e03a0b26a ("btrfs: qgroup: Split meta rsv type into meta_prealloc and meta_pertrans") CC: stable@vger.kernel.org # 4.19+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Qu Wenruo wqu@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/ctree.h | 2 +- fs/btrfs/inode.c | 2 +- fs/btrfs/ioctl.c | 6 +++--- fs/btrfs/root-tree.c | 13 +++++++++++-- 4 files changed, 16 insertions(+), 7 deletions(-)
--- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -2619,7 +2619,7 @@ enum btrfs_flush_state { int btrfs_subvolume_reserve_metadata(struct btrfs_root *root, struct btrfs_block_rsv *rsv, int nitems, bool use_global_rsv); -void btrfs_subvolume_release_metadata(struct btrfs_fs_info *fs_info, +void btrfs_subvolume_release_metadata(struct btrfs_root *root, struct btrfs_block_rsv *rsv); void btrfs_delalloc_release_extents(struct btrfs_inode *inode, u64 num_bytes);
--- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -4051,7 +4051,7 @@ out_end_trans: err = ret; inode->i_flags |= S_DEAD; out_release: - btrfs_subvolume_release_metadata(fs_info, &block_rsv); + btrfs_subvolume_release_metadata(root, &block_rsv); out_up_write: up_write(&fs_info->subvol_sem); if (err) { --- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -618,7 +618,7 @@ static noinline int create_subvol(struct trans = btrfs_start_transaction(root, 0); if (IS_ERR(trans)) { ret = PTR_ERR(trans); - btrfs_subvolume_release_metadata(fs_info, &block_rsv); + btrfs_subvolume_release_metadata(root, &block_rsv); goto fail_free; } trans->block_rsv = &block_rsv; @@ -742,7 +742,7 @@ fail: kfree(root_item); trans->block_rsv = NULL; trans->bytes_reserved = 0; - btrfs_subvolume_release_metadata(fs_info, &block_rsv); + btrfs_subvolume_release_metadata(root, &block_rsv);
err = btrfs_commit_transaction(trans); if (err && !ret) @@ -856,7 +856,7 @@ fail: if (ret && pending_snapshot->snap) pending_snapshot->snap->anon_dev = 0; btrfs_put_root(pending_snapshot->snap); - btrfs_subvolume_release_metadata(fs_info, &pending_snapshot->block_rsv); + btrfs_subvolume_release_metadata(root, &pending_snapshot->block_rsv); free_pending: if (pending_snapshot->anon_dev) free_anon_bdev(pending_snapshot->anon_dev); --- a/fs/btrfs/root-tree.c +++ b/fs/btrfs/root-tree.c @@ -512,11 +512,20 @@ int btrfs_subvolume_reserve_metadata(str if (ret && qgroup_num_bytes) btrfs_qgroup_free_meta_prealloc(root, qgroup_num_bytes);
+ if (!ret) { + spin_lock(&rsv->lock); + rsv->qgroup_rsv_reserved += qgroup_num_bytes; + spin_unlock(&rsv->lock); + } return ret; }
-void btrfs_subvolume_release_metadata(struct btrfs_fs_info *fs_info, +void btrfs_subvolume_release_metadata(struct btrfs_root *root, struct btrfs_block_rsv *rsv) { - btrfs_block_rsv_release(fs_info, rsv, (u64)-1, NULL); + struct btrfs_fs_info *fs_info = root->fs_info; + u64 qgroup_to_release; + + btrfs_block_rsv_release(fs_info, rsv, (u64)-1, &qgroup_to_release); + btrfs_qgroup_convert_reserved_meta(root, qgroup_to_release); }
From: Josef Bacik josef@toxicpanda.com
commit ca10845a56856fff4de3804c85e6424d0f6d0cde upstream.
While running btrfs/061, btrfs/073, btrfs/078, or btrfs/178 we hit the following lockdep splat:
======================================================
WARNING: possible circular locking dependency detected
5.9.0-rc3+ #4 Not tainted
------------------------------------------------------
kswapd0/100 is trying to acquire lock:
ffff96ecc22ef4a0 (&delayed_node->mutex){+.+.}-{3:3}, at: __btrfs_release_delayed_node.part.0+0x3f/0x330

but task is already holding lock:
ffffffff8dd74700 (fs_reclaim){+.+.}-{0:0}, at: __fs_reclaim_acquire+0x5/0x30
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (fs_reclaim){+.+.}-{0:0}:
       fs_reclaim_acquire+0x65/0x80
       slab_pre_alloc_hook.constprop.0+0x20/0x200
       kmem_cache_alloc+0x37/0x270
       alloc_inode+0x82/0xb0
       iget_locked+0x10d/0x2c0
       kernfs_get_inode+0x1b/0x130
       kernfs_get_tree+0x136/0x240
       sysfs_get_tree+0x16/0x40
       vfs_get_tree+0x28/0xc0
       path_mount+0x434/0xc00
       __x64_sys_mount+0xe3/0x120
       do_syscall_64+0x33/0x40
       entry_SYSCALL_64_after_hwframe+0x44/0xa9

-> #2 (kernfs_mutex){+.+.}-{3:3}:
       __mutex_lock+0x7e/0x7e0
       kernfs_add_one+0x23/0x150
       kernfs_create_link+0x63/0xa0
       sysfs_do_create_link_sd+0x5e/0xd0
       btrfs_sysfs_add_devices_dir+0x81/0x130
       btrfs_init_new_device+0x67f/0x1250
       btrfs_ioctl+0x1ef/0x2e20
       __x64_sys_ioctl+0x83/0xb0
       do_syscall_64+0x33/0x40
       entry_SYSCALL_64_after_hwframe+0x44/0xa9

-> #1 (&fs_info->chunk_mutex){+.+.}-{3:3}:
       __mutex_lock+0x7e/0x7e0
       btrfs_chunk_alloc+0x125/0x3a0
       find_free_extent+0xdf6/0x1210
       btrfs_reserve_extent+0xb3/0x1b0
       btrfs_alloc_tree_block+0xb0/0x310
       alloc_tree_block_no_bg_flush+0x4a/0x60
       __btrfs_cow_block+0x11a/0x530
       btrfs_cow_block+0x104/0x220
       btrfs_search_slot+0x52e/0x9d0
       btrfs_insert_empty_items+0x64/0xb0
       btrfs_insert_delayed_items+0x90/0x4f0
       btrfs_commit_inode_delayed_items+0x93/0x140
       btrfs_log_inode+0x5de/0x2020
       btrfs_log_inode_parent+0x429/0xc90
       btrfs_log_new_name+0x95/0x9b
       btrfs_rename2+0xbb9/0x1800
       vfs_rename+0x64f/0x9f0
       do_renameat2+0x320/0x4e0
       __x64_sys_rename+0x1f/0x30
       do_syscall_64+0x33/0x40
       entry_SYSCALL_64_after_hwframe+0x44/0xa9

-> #0 (&delayed_node->mutex){+.+.}-{3:3}:
       __lock_acquire+0x119c/0x1fc0
       lock_acquire+0xa7/0x3d0
       __mutex_lock+0x7e/0x7e0
       __btrfs_release_delayed_node.part.0+0x3f/0x330
       btrfs_evict_inode+0x24c/0x500
       evict+0xcf/0x1f0
       dispose_list+0x48/0x70
       prune_icache_sb+0x44/0x50
       super_cache_scan+0x161/0x1e0
       do_shrink_slab+0x178/0x3c0
       shrink_slab+0x17c/0x290
       shrink_node+0x2b2/0x6d0
       balance_pgdat+0x30a/0x670
       kswapd+0x213/0x4c0
       kthread+0x138/0x160
       ret_from_fork+0x1f/0x30
other info that might help us debug this:
Chain exists of:
  &delayed_node->mutex --> kernfs_mutex --> fs_reclaim
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(kernfs_mutex);
                               lock(fs_reclaim);
  lock(&delayed_node->mutex);
*** DEADLOCK ***
3 locks held by kswapd0/100:
 #0: ffffffff8dd74700 (fs_reclaim){+.+.}-{0:0}, at: __fs_reclaim_acquire+0x5/0x30
 #1: ffffffff8dd65c50 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab+0x115/0x290
 #2: ffff96ed2ade30e0 (&type->s_umount_key#36){++++}-{3:3}, at: super_cache_scan+0x38/0x1e0

stack backtrace:
CPU: 0 PID: 100 Comm: kswapd0 Not tainted 5.9.0-rc3+ #4
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-2.fc32 04/01/2014
Call Trace:
 dump_stack+0x8b/0xb8
 check_noncircular+0x12d/0x150
 __lock_acquire+0x119c/0x1fc0
 lock_acquire+0xa7/0x3d0
 ? __btrfs_release_delayed_node.part.0+0x3f/0x330
 __mutex_lock+0x7e/0x7e0
 ? __btrfs_release_delayed_node.part.0+0x3f/0x330
 ? __btrfs_release_delayed_node.part.0+0x3f/0x330
 ? lock_acquire+0xa7/0x3d0
 ? find_held_lock+0x2b/0x80
 __btrfs_release_delayed_node.part.0+0x3f/0x330
 btrfs_evict_inode+0x24c/0x500
 evict+0xcf/0x1f0
 dispose_list+0x48/0x70
 prune_icache_sb+0x44/0x50
 super_cache_scan+0x161/0x1e0
 do_shrink_slab+0x178/0x3c0
 shrink_slab+0x17c/0x290
 shrink_node+0x2b2/0x6d0
 balance_pgdat+0x30a/0x670
 kswapd+0x213/0x4c0
 ? _raw_spin_unlock_irqrestore+0x41/0x50
 ? add_wait_queue_exclusive+0x70/0x70
 ? balance_pgdat+0x670/0x670
 kthread+0x138/0x160
 ? kthread_create_worker_on_cpu+0x40/0x40
 ret_from_fork+0x1f/0x30
This happens because we are holding the chunk_mutex at the time of adding a new device. However, we only need to hold the device_list_mutex, as we're going to iterate over the fs_devices devices. Move the sysfs init code outside of the chunk_mutex to get rid of this lockdep splat.
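The shape of the fix in btrfs_init_new_device(), as a simplified sketch (error handling and the seeding-device path are omitted; the real change is in the diff below):

  mutex_lock(&fs_devices->device_list_mutex);
  mutex_lock(&fs_info->chunk_mutex);

  /* ... update super block, num_devices, space infos ... */

  mutex_unlock(&fs_info->chunk_mutex);

  /*
   * Safe here: sysfs may allocate (fs_reclaim) and take kernfs_mutex,
   * and only device_list_mutex is needed to walk fs_devices.
   */
  btrfs_sysfs_add_devices_dir(fs_devices, device);

  mutex_unlock(&fs_devices->device_list_mutex);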
CC: stable@vger.kernel.org # 4.4.x: f3cd2c58110dad14e: btrfs: sysfs, rename device_link add/remove functions CC: stable@vger.kernel.org # 4.4.x Reported-by: David Sterba dsterba@suse.com Signed-off-by: Josef Bacik josef@toxicpanda.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/volumes.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
--- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -2613,9 +2613,6 @@ int btrfs_init_new_device(struct btrfs_f btrfs_set_super_num_devices(fs_info->super_copy, orig_super_num_devices + 1);
- /* add sysfs device entry */ - btrfs_sysfs_add_devices_dir(fs_devices, device); - /* * we've got more storage, clear any full flags on the space * infos @@ -2623,6 +2620,10 @@ int btrfs_init_new_device(struct btrfs_f btrfs_clear_space_info_full(fs_info);
mutex_unlock(&fs_info->chunk_mutex); + + /* Add sysfs device entry */ + btrfs_sysfs_add_devices_dir(fs_devices, device); + mutex_unlock(&fs_devices->device_list_mutex);
if (seeding_dev) {
From: Qu Wenruo wqu@suse.com
commit 437490fed3b0c9ae21af8f70e0f338d34560842b upstream.
The current trace event always outputs results like this:
 find_free_extent: root=2(EXTENT_TREE) len=16384 empty_size=0 flags=4(METADATA)
 find_free_extent: root=2(EXTENT_TREE) len=16384 empty_size=0 flags=4(METADATA)
 find_free_extent: root=2(EXTENT_TREE) len=8192 empty_size=0 flags=1(DATA)
 find_free_extent: root=2(EXTENT_TREE) len=8192 empty_size=0 flags=1(DATA)
 find_free_extent: root=2(EXTENT_TREE) len=4096 empty_size=0 flags=1(DATA)
 find_free_extent: root=2(EXTENT_TREE) len=4096 empty_size=0 flags=1(DATA)
It's saying we're allocating a data extent for the EXTENT tree, which is not even possible.
It's because we always use the EXTENT tree as the owner for trace_find_free_extent() without using the @root from btrfs_reserve_extent().
This patch will change the parameter to use proper @root for trace_find_free_extent():
Now it looks much better:
 find_free_extent: root=5(FS_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
 find_free_extent: root=5(FS_TREE) len=8192 empty_size=0 flags=1(DATA)
 find_free_extent: root=5(FS_TREE) len=16384 empty_size=0 flags=1(DATA)
 find_free_extent: root=5(FS_TREE) len=4096 empty_size=0 flags=1(DATA)
 find_free_extent: root=5(FS_TREE) len=8192 empty_size=0 flags=1(DATA)
 find_free_extent: root=5(FS_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
 find_free_extent: root=7(CSUM_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
 find_free_extent: root=2(EXTENT_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
 find_free_extent: root=1(ROOT_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
Reported-by: Hans van Kranenburg hans@knorrie.org CC: stable@vger.kernel.org # 5.4+ Signed-off-by: Qu Wenruo wqu@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/extent-tree.c | 7 ++++--- include/trace/events/btrfs.h | 10 ++++++---- 2 files changed, 10 insertions(+), 7 deletions(-)
--- a/fs/btrfs/extent-tree.c +++ b/fs/btrfs/extent-tree.c @@ -3918,11 +3918,12 @@ static int prepare_allocation(struct btr * |- Push harder to find free extents * |- If not found, re-iterate all block groups */ -static noinline int find_free_extent(struct btrfs_fs_info *fs_info, +static noinline int find_free_extent(struct btrfs_root *root, u64 ram_bytes, u64 num_bytes, u64 empty_size, u64 hint_byte_orig, struct btrfs_key *ins, u64 flags, int delalloc) { + struct btrfs_fs_info *fs_info = root->fs_info; int ret = 0; int cache_block_group_error = 0; struct btrfs_block_group *block_group = NULL; @@ -3954,7 +3955,7 @@ static noinline int find_free_extent(str ins->objectid = 0; ins->offset = 0;
- trace_find_free_extent(fs_info, num_bytes, empty_size, flags); + trace_find_free_extent(root, num_bytes, empty_size, flags);
space_info = btrfs_find_space_info(fs_info, flags); if (!space_info) { @@ -4203,7 +4204,7 @@ int btrfs_reserve_extent(struct btrfs_ro flags = get_alloc_profile_by_root(root, is_data); again: WARN_ON(num_bytes < fs_info->sectorsize); - ret = find_free_extent(fs_info, ram_bytes, num_bytes, empty_size, + ret = find_free_extent(root, ram_bytes, num_bytes, empty_size, hint_byte, ins, flags, delalloc); if (!ret && !is_data) { btrfs_dec_block_group_reservations(fs_info, ins->objectid); --- a/include/trace/events/btrfs.h +++ b/include/trace/events/btrfs.h @@ -1176,25 +1176,27 @@ DEFINE_EVENT(btrfs__reserved_extent, bt
TRACE_EVENT(find_free_extent,
- TP_PROTO(const struct btrfs_fs_info *fs_info, u64 num_bytes, + TP_PROTO(const struct btrfs_root *root, u64 num_bytes, u64 empty_size, u64 data),
- TP_ARGS(fs_info, num_bytes, empty_size, data), + TP_ARGS(root, num_bytes, empty_size, data),
TP_STRUCT__entry_btrfs( + __field( u64, root_objectid ) __field( u64, num_bytes ) __field( u64, empty_size ) __field( u64, data ) ),
- TP_fast_assign_btrfs(fs_info, + TP_fast_assign_btrfs(root->fs_info, + __entry->root_objectid = root->root_key.objectid; __entry->num_bytes = num_bytes; __entry->empty_size = empty_size; __entry->data = data; ),
TP_printk_btrfs("root=%llu(%s) len=%llu empty_size=%llu flags=%llu(%s)", - show_root_type(BTRFS_EXTENT_TREE_OBJECTID), + show_root_type(__entry->root_objectid), __entry->num_bytes, __entry->empty_size, __entry->data, __print_flags((unsigned long)__entry->data, "|", BTRFS_GROUP_FLAGS))
From: Filipe Manana fdmanana@suse.com
commit bb56f02f26fe23798edb1b2175707419b28c752a upstream.
Logging directories with many entries can take a significant amount of time, and in some cases monopolize a cpu/core for a long time if the logging task doesn't happen to block often enough.
Johannes and Lu Fengqi reported test case generic/041 triggering a soft lockup when the kernel has CONFIG_SOFTLOCKUP_DETECTOR=y. For this test case we log an inode with 3002 hard links, and because the test removed one hard link before fsyncing the file, the inode logging causes the parent directory to be logged as well, which has 6004 directory items to log (3002 BTRFS_DIR_ITEM_KEY items plus 3002 BTRFS_DIR_INDEX_KEY items), so it can take a significant amount of time and trigger the soft lockup.
So just make tree-log.c:log_dir_items() reschedule when necessary, releasing the current search path before doing so and then resuming from where it left off after the reschedule.
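The general pattern is cooperative rescheduling inside a long kernel-side loop; a minimal sketch (illustrative only: struct item and process_one_item() are made-up names, the btrfs-specific change is in the diff below):

  #include <linux/sched.h>

  /* Sketch: a long kernel-side loop yielding the CPU when needed. */
  static void process_many_items(struct item *items, unsigned long nr)
  {
          unsigned long i;

          for (i = 0; i < nr; i++) {
                  process_one_item(&items[i]);
                  cond_resched();  /* give other tasks a chance to run */
          }
  }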
The stack trace produced when the soft lockup happens is the following:
[10480.277653] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [xfs_io:28172] [10480.279418] Modules linked in: dm_thin_pool dm_persistent_data (...) [10480.284915] irq event stamp: 29646366 [10480.285987] hardirqs last enabled at (29646365): [<ffffffff85249b66>] __slab_alloc.constprop.0+0x56/0x60 [10480.288482] hardirqs last disabled at (29646366): [<ffffffff8579b00d>] irqentry_enter+0x1d/0x50 [10480.290856] softirqs last enabled at (4612): [<ffffffff85a00323>] __do_softirq+0x323/0x56c [10480.293615] softirqs last disabled at (4483): [<ffffffff85800dbf>] asm_call_on_stack+0xf/0x20 [10480.296428] CPU: 2 PID: 28172 Comm: xfs_io Not tainted 5.9.0-rc4-default+ #1248 [10480.298948] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba527-rebuilt.opensuse.org 04/01/2014 [10480.302455] RIP: 0010:__slab_alloc.constprop.0+0x19/0x60 [10480.304151] Code: 86 e8 31 75 21 00 66 66 2e 0f 1f 84 00 00 00 (...) [10480.309558] RSP: 0018:ffffadbe09397a58 EFLAGS: 00000282 [10480.311179] RAX: ffff8a495ab92840 RBX: 0000000000000282 RCX: 0000000000000006 [10480.313242] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff85249b66 [10480.315260] RBP: ffff8a497d04b740 R08: 0000000000000001 R09: 0000000000000001 [10480.317229] R10: ffff8a497d044800 R11: ffff8a495ab93c40 R12: 0000000000000000 [10480.319169] R13: 0000000000000000 R14: 0000000000000c40 R15: ffffffffc01daf70 [10480.321104] FS: 00007fa1dc5c0e40(0000) GS:ffff8a497da00000(0000) knlGS:0000000000000000 [10480.323559] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [10480.325235] CR2: 00007fa1dc5befb8 CR3: 0000000004f8a006 CR4: 0000000000170ea0 [10480.327259] Call Trace: [10480.328286] ? overwrite_item+0x1f0/0x5a0 [btrfs] [10480.329784] __kmalloc+0x831/0xa20 [10480.331009] ? btrfs_get_32+0xb0/0x1d0 [btrfs] [10480.332464] overwrite_item+0x1f0/0x5a0 [btrfs] [10480.333948] log_dir_items+0x2ee/0x570 [btrfs] [10480.335413] log_directory_changes+0x82/0xd0 [btrfs] [10480.336926] btrfs_log_inode+0xc9b/0xda0 [btrfs] [10480.338374] ? init_once+0x20/0x20 [btrfs] [10480.339711] btrfs_log_inode_parent+0x8d3/0xd10 [btrfs] [10480.341257] ? dget_parent+0x97/0x2e0 [10480.342480] btrfs_log_dentry_safe+0x3a/0x50 [btrfs] [10480.343977] btrfs_sync_file+0x24b/0x5e0 [btrfs] [10480.345381] do_fsync+0x38/0x70 [10480.346483] __x64_sys_fsync+0x10/0x20 [10480.347703] do_syscall_64+0x2d/0x70 [10480.348891] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [10480.350444] RIP: 0033:0x7fa1dc80970b [10480.351642] Code: 0f 05 48 3d 00 f0 ff ff 77 45 c3 0f 1f 40 00 48 (...) [10480.356952] RSP: 002b:00007fffb3d081d0 EFLAGS: 00000293 ORIG_RAX: 000000000000004a [10480.359458] RAX: ffffffffffffffda RBX: 0000562d93d45e40 RCX: 00007fa1dc80970b [10480.361426] RDX: 0000562d93d44ab0 RSI: 0000562d93d45e60 RDI: 0000000000000003 [10480.363367] RBP: 0000000000000001 R08: 0000000000000000 R09: 00007fa1dc7b2a40 [10480.365317] R10: 0000562d93d0e366 R11: 0000000000000293 R12: 0000000000000001 [10480.367299] R13: 0000562d93d45290 R14: 0000562d93d45e40 R15: 0000562d93d45e60
Link: https://lore.kernel.org/linux-btrfs/20180713090216.GC575@fnst.localdomain/ Reported-by: Johannes Thumshirn johannes.thumshirn@wdc.com CC: stable@vger.kernel.org # 4.4+ Tested-by: Johannes Thumshirn johannes.thumshirn@wdc.com Reviewed-by: Johannes Thumshirn johannes.thumshirn@wdc.com Signed-off-by: Filipe Manana fdmanana@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/tree-log.c | 8 ++++++++ 1 file changed, 8 insertions(+)
--- a/fs/btrfs/tree-log.c +++ b/fs/btrfs/tree-log.c @@ -3615,6 +3615,7 @@ static noinline int log_dir_items(struct * search and this search we'll not find the key again and can just * bail. */ +search: ret = btrfs_search_slot(NULL, root, &min_key, path, 0, 0); if (ret != 0) goto done; @@ -3634,6 +3635,13 @@ static noinline int log_dir_items(struct
if (min_key.objectid != ino || min_key.type != key_type) goto done; + + if (need_resched()) { + btrfs_release_path(path); + cond_resched(); + goto search; + } + ret = overwrite_item(trans, log, dst_path, src, i, &min_key); if (ret) {
From: Filipe Manana fdmanana@suse.com
commit 98272bb77bf4cc20ed1ffca89832d713e70ebf09 upstream.
When doing an incremental send it is possible that when processing the new references for an inode we end up issuing rename or link operations that have an invalid path, which contains the orphanized name of a directory before we actually orphanized it, causing the receiver to fail.
The following reproducer triggers such scenario:
$ cat reproducer.sh
#!/bin/bash

mkfs.btrfs -f /dev/sdi >/dev/null
mount /dev/sdi /mnt/sdi

touch /mnt/sdi/a
touch /mnt/sdi/b
mkdir /mnt/sdi/testdir
# We want "a" to have a lower inode number than "testdir" (257 vs 259).
mv /mnt/sdi/a /mnt/sdi/testdir/a

# Filesystem looks like:
#
# .                                     (ino 256)
# |----- testdir/                       (ino 259)
# |          |----- a                   (ino 257)
# |
# |----- b                              (ino 258)

btrfs subvolume snapshot -r /mnt/sdi /mnt/sdi/snap1
btrfs send -f /tmp/snap1.send /mnt/sdi/snap1

# Now rename 259 to "testdir_2", then change the name of 257 to
# "testdir" and make it a direct descendant of the root inode (256).
# Also create a new link for inode 257 with the old name of inode 258.
# By swapping the names and location of several inodes we create a
# nasty dependency chain of rename and link operations.
mv /mnt/sdi/testdir/a /mnt/sdi/a2
touch /mnt/sdi/testdir/a
mv /mnt/sdi/b /mnt/sdi/b2
ln /mnt/sdi/a2 /mnt/sdi/b
mv /mnt/sdi/testdir /mnt/sdi/testdir_2
mv /mnt/sdi/a2 /mnt/sdi/testdir

# Filesystem now looks like:
#
# .                                     (ino 256)
# |----- testdir_2/                     (ino 259)
# |          |----- a                   (ino 260)
# |
# |----- testdir                        (ino 257)
# |----- b                              (ino 257)
# |----- b2                             (ino 258)

btrfs subvolume snapshot -r /mnt/sdi /mnt/sdi/snap2
btrfs send -f /tmp/snap2.send -p /mnt/sdi/snap1 /mnt/sdi/snap2

mkfs.btrfs -f /dev/sdj >/dev/null
mount /dev/sdj /mnt/sdj

btrfs receive -f /tmp/snap1.send /mnt/sdj
btrfs receive -f /tmp/snap2.send /mnt/sdj

umount /mnt/sdi
umount /mnt/sdj
When running the reproducer, the receive of the incremental send stream fails:
$ ./reproducer.sh
Create a readonly snapshot of '/mnt/sdi' in '/mnt/sdi/snap1'
At subvol /mnt/sdi/snap1
Create a readonly snapshot of '/mnt/sdi' in '/mnt/sdi/snap2'
At subvol /mnt/sdi/snap2
At subvol snap1
At snapshot snap2
ERROR: link b -> o259-6-0/a failed: No such file or directory
The problem happens because of the following:
1) Before we start iterating the list of new references for inode 257, we generate its current path and store it at @valid_path, done at the very beginning of process_recorded_refs(). The generated path is "o259-6-0/a", containing the orphanized name for inode 259;
2) Then we iterate over the list of new references, which has the references "b" and "testdir" in that specific order;
3) We process reference "b" first, because it is in the list before reference "testdir". We then issue a link operation to create the new reference "b" using a target path corresponding to the content at @valid_path, which corresponds to "o259-6-0/a". However we haven't yet orphanized inode 259, its name is still "testdir", and not "o259-6-0". The orphanization of 259 did not happen yet because we will process the reference named "testdir" for inode 257 only in the next iteration of the loop that goes over the list of new references.
Fix the issue by having a preliminary iteration over all the new references at process_recorded_refs(). This iteration is responsible only for doing the orphanization of other inodes that have an old reference that conflicts with one of the new references of the inode we are currently processing. The emission of rename and link operations now happens in a second iteration over the new references.
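In pseudo-code, the new shape of process_recorded_refs() is roughly the following sketch (details omitted; the helper names shown are the ones that appear in the diff):

  /* Pass 1: only orphanize other, not yet processed inodes whose old
   * name conflicts with one of our new references.
   */
  list_for_each_entry(cur, &sctx->new_refs, list) {
          /* will_overwrite_ref() / orphanize_inode() ... */
  }

  /* Pass 2: create missing parent directories and emit the link and
   * rename operations, now that every conflicting directory already
   * carries its orphanized name.
   */
  list_for_each_entry(cur, &sctx->new_refs, list) {
          /* did_create_dir() / send_create_inode() / emit link+rename */
  }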
A test case for fstests will follow soon.
CC: stable@vger.kernel.org # 4.4+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Filipe Manana fdmanana@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/send.c | 127 ++++++++++++++++++++++++++++++++++++++------------------ 1 file changed, 87 insertions(+), 40 deletions(-)
--- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -3880,52 +3880,56 @@ static int process_recorded_refs(struct goto out; }
+ /* + * Before doing any rename and link operations, do a first pass on the + * new references to orphanize any unprocessed inodes that may have a + * reference that conflicts with one of the new references of the current + * inode. This needs to happen first because a new reference may conflict + * with the old reference of a parent directory, so we must make sure + * that the path used for link and rename commands don't use an + * orphanized name when an ancestor was not yet orphanized. + * + * Example: + * + * Parent snapshot: + * + * . (ino 256) + * |----- testdir/ (ino 259) + * | |----- a (ino 257) + * | + * |----- b (ino 258) + * + * Send snapshot: + * + * . (ino 256) + * |----- testdir_2/ (ino 259) + * | |----- a (ino 260) + * | + * |----- testdir (ino 257) + * |----- b (ino 257) + * |----- b2 (ino 258) + * + * Processing the new reference for inode 257 with name "b" may happen + * before processing the new reference with name "testdir". If so, we + * must make sure that by the time we send a link command to create the + * hard link "b", inode 259 was already orphanized, since the generated + * path in "valid_path" already contains the orphanized name for 259. + * We are processing inode 257, so only later when processing 259 we do + * the rename operation to change its temporary (orphanized) name to + * "testdir_2". + */ list_for_each_entry(cur, &sctx->new_refs, list) { - /* - * We may have refs where the parent directory does not exist - * yet. This happens if the parent directories inum is higher - * than the current inum. To handle this case, we create the - * parent directory out of order. But we need to check if this - * did already happen before due to other refs in the same dir. - */ ret = get_cur_inode_state(sctx, cur->dir, cur->dir_gen); if (ret < 0) goto out; - if (ret == inode_state_will_create) { - ret = 0; - /* - * First check if any of the current inodes refs did - * already create the dir. - */ - list_for_each_entry(cur2, &sctx->new_refs, list) { - if (cur == cur2) - break; - if (cur2->dir == cur->dir) { - ret = 1; - break; - } - } - - /* - * If that did not happen, check if a previous inode - * did already create the dir. - */ - if (!ret) - ret = did_create_dir(sctx, cur->dir); - if (ret < 0) - goto out; - if (!ret) { - ret = send_create_inode(sctx, cur->dir); - if (ret < 0) - goto out; - } - } + if (ret == inode_state_will_create) + continue;
/* - * Check if this new ref would overwrite the first ref of - * another unprocessed inode. If yes, orphanize the - * overwritten inode. If we find an overwritten ref that is - * not the first ref, simply unlink it. + * Check if this new ref would overwrite the first ref of another + * unprocessed inode. If yes, orphanize the overwritten inode. + * If we find an overwritten ref that is not the first ref, + * simply unlink it. */ ret = will_overwrite_ref(sctx, cur->dir, cur->dir_gen, cur->name, cur->name_len, @@ -4002,6 +4006,49 @@ static int process_recorded_refs(struct if (ret < 0) goto out; } + } + + } + + list_for_each_entry(cur, &sctx->new_refs, list) { + /* + * We may have refs where the parent directory does not exist + * yet. This happens if the parent directories inum is higher + * than the current inum. To handle this case, we create the + * parent directory out of order. But we need to check if this + * did already happen before due to other refs in the same dir. + */ + ret = get_cur_inode_state(sctx, cur->dir, cur->dir_gen); + if (ret < 0) + goto out; + if (ret == inode_state_will_create) { + ret = 0; + /* + * First check if any of the current inodes refs did + * already create the dir. + */ + list_for_each_entry(cur2, &sctx->new_refs, list) { + if (cur == cur2) + break; + if (cur2->dir == cur->dir) { + ret = 1; + break; + } + } + + /* + * If that did not happen, check if a previous inode + * did already create the dir. + */ + if (!ret) + ret = did_create_dir(sctx, cur->dir); + if (ret < 0) + goto out; + if (!ret) { + ret = send_create_inode(sctx, cur->dir); + if (ret < 0) + goto out; + } }
if (S_ISDIR(sctx->cur_inode_mode) && sctx->parent_root) {
From: Filipe Manana fdmanana@suse.com
commit 9c2b4e0347067396ceb3ae929d6888c81d610259 upstream.
During an incremental send, when an inode has multiple new references we might end up emitting rename operations for orphanizations that have a source path that is no longer valid due to a previous orphanization of some directory inode. This causes the receiver to fail since it tries to rename a path that does not exist.
Example reproducer:
$ cat reproducer.sh
#!/bin/bash

mkfs.btrfs -f /dev/sdi >/dev/null
mount /dev/sdi /mnt/sdi

touch /mnt/sdi/f1
touch /mnt/sdi/f2
mkdir /mnt/sdi/d1
mkdir /mnt/sdi/d1/d2

# Filesystem looks like:
#
# .                                     (ino 256)
# |----- f1                             (ino 257)
# |----- f2                             (ino 258)
# |----- d1/                            (ino 259)
#        |----- d2/                     (ino 260)

btrfs subvolume snapshot -r /mnt/sdi /mnt/sdi/snap1
btrfs send -f /tmp/snap1.send /mnt/sdi/snap1

# Now do a series of changes such that:
#
# *) inode 258 has one new hardlink and the previous name changed
#
# *) both names conflict with the old names of two other inodes:
#
#    1) the new name "d1" conflicts with the old name of inode 259,
#       under directory inode 256 (root)
#
#    2) the new name "d2" conflicts with the old name of inode 260
#       under directory inode 259
#
# *) inodes 259 and 260 now have the old names of inode 258
#
# *) inode 257 is now located under inode 260 - an inode with a number
#    smaller than the inode (258) for which we created a second hard
#    link and swapped its names with inodes 259 and 260
#
ln /mnt/sdi/f2 /mnt/sdi/d1/f2_link
mv /mnt/sdi/f1 /mnt/sdi/d1/d2/f1

# Swap d1 and f2.
mv /mnt/sdi/d1 /mnt/sdi/tmp
mv /mnt/sdi/f2 /mnt/sdi/d1
mv /mnt/sdi/tmp /mnt/sdi/f2

# Swap d2 and f2_link
mv /mnt/sdi/f2/d2 /mnt/sdi/tmp
mv /mnt/sdi/f2/f2_link /mnt/sdi/f2/d2
mv /mnt/sdi/tmp /mnt/sdi/f2/f2_link

# Filesystem now looks like:
#
# .                                     (ino 256)
# |----- d1                             (ino 258)
# |----- f2/                            (ino 259)
#        |----- f2_link/                (ino 260)
#        |       |----- f1              (ino 257)
#        |
#        |----- d2                      (ino 258)

btrfs subvolume snapshot -r /mnt/sdi /mnt/sdi/snap2
btrfs send -f /tmp/snap2.send -p /mnt/sdi/snap1 /mnt/sdi/snap2

mkfs.btrfs -f /dev/sdj >/dev/null
mount /dev/sdj /mnt/sdj

btrfs receive -f /tmp/snap1.send /mnt/sdj
btrfs receive -f /tmp/snap2.send /mnt/sdj

umount /mnt/sdi
umount /mnt/sdj
When executed, the receive of the incremental stream fails:
$ ./reproducer.sh
Create a readonly snapshot of '/mnt/sdi' in '/mnt/sdi/snap1'
At subvol /mnt/sdi/snap1
Create a readonly snapshot of '/mnt/sdi' in '/mnt/sdi/snap2'
At subvol /mnt/sdi/snap2
At subvol snap1
At snapshot snap2
ERROR: rename d1/d2 -> o260-6-0 failed: No such file or directory
This happens because:
1) When processing inode 257 we end up computing the name for inode 259 because it is an ancestor in the send snapshot, and at that point it still has its old name, "d1", from the parent snapshot because inode 259 was not yet processed. We then cache that name, which is valid until we start processing inode 259 (or set the progress to 260 after processing its references);
2) Later we start processing inode 258 and collecting all its new references into the list sctx->new_refs. The first reference in the list happens to be the reference for name "d1" while the reference for name "d2" is next (the last element of the list). We compute the full path "d1/d2" for this second reference and store it in the reference (its ->full_path member). The path used for the new parent directory was "d1" and not "f2" because inode 259, the new parent, was not yet processed;
3) When we start processing the new references at process_recorded_refs() we start with the first reference in the list, for the new name "d1". Because there is a conflicting inode that was not yet processed, which is directory inode 259, we orphanize it, renaming it from "d1" to "o259-6-0";
4) Then we start processing the new reference for name "d2", and we realize it conflicts with the reference of inode 260 in the parent snapshot. So we issue an orphanization operation for inode 260 by emitting a rename operation with a destination path of "o260-6-0" and a source path of "d1/d2" - this source path is the value we stored in the reference earlier at step 2), corresponding to the ->full_path member of the reference, however that path is no longer valid due to the orphanization of the directory inode 259 in step 3). This makes the receiver fail since the path does not exist; it should have been "o259-6-0/d2".
Fix this by recomputing the full path of a reference before emitting an orphanization if we previously orphanized any directory, since that directory could be a parent in the new path. This is a rare scenario, so we keep it simple and do not check whether the previously orphanized directory is in fact an ancestor of the inode we are trying to orphanize.
A test case for fstests follows soon.
CC: stable@vger.kernel.org # 4.4+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Filipe Manana fdmanana@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/send.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 72 insertions(+)
--- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -3813,6 +3813,72 @@ static int update_ref_path(struct send_c }
/* + * When processing the new references for an inode we may orphanize an existing + * directory inode because its old name conflicts with one of the new references + * of the current inode. Later, when processing another new reference of our + * inode, we might need to orphanize another inode, but the path we have in the + * reference reflects the pre-orphanization name of the directory we previously + * orphanized. For example: + * + * parent snapshot looks like: + * + * . (ino 256) + * |----- f1 (ino 257) + * |----- f2 (ino 258) + * |----- d1/ (ino 259) + * |----- d2/ (ino 260) + * + * send snapshot looks like: + * + * . (ino 256) + * |----- d1 (ino 258) + * |----- f2/ (ino 259) + * |----- f2_link/ (ino 260) + * | |----- f1 (ino 257) + * | + * |----- d2 (ino 258) + * + * When processing inode 257 we compute the name for inode 259 as "d1", and we + * cache it in the name cache. Later when we start processing inode 258, when + * collecting all its new references we set a full path of "d1/d2" for its new + * reference with name "d2". When we start processing the new references we + * start by processing the new reference with name "d1", and this results in + * orphanizing inode 259, since its old reference causes a conflict. Then we + * move on the next new reference, with name "d2", and we find out we must + * orphanize inode 260, as its old reference conflicts with ours - but for the + * orphanization we use a source path corresponding to the path we stored in the + * new reference, which is "d1/d2" and not "o259-6-0/d2" - this makes the + * receiver fail since the path component "d1/" no longer exists, it was renamed + * to "o259-6-0/" when processing the previous new reference. So in this case we + * must recompute the path in the new reference and use it for the new + * orphanization operation. + */ +static int refresh_ref_path(struct send_ctx *sctx, struct recorded_ref *ref) +{ + char *name; + int ret; + + name = kmemdup(ref->name, ref->name_len, GFP_KERNEL); + if (!name) + return -ENOMEM; + + fs_path_reset(ref->full_path); + ret = get_cur_path(sctx, ref->dir, ref->dir_gen, ref->full_path); + if (ret < 0) + goto out; + + ret = fs_path_add(ref->full_path, name, ref->name_len); + if (ret < 0) + goto out; + + /* Update the reference's base name pointer. */ + set_ref_path(ref, ref->full_path); +out: + kfree(name); + return ret; +} + +/* * This does all the move/link/unlink/rmdir magic. */ static int process_recorded_refs(struct send_ctx *sctx, int *pending_move) @@ -3946,6 +4012,12 @@ static int process_recorded_refs(struct struct name_cache_entry *nce; struct waiting_dir_move *wdm;
+ if (orphanized_dir) { + ret = refresh_ref_path(sctx, cur); + if (ret < 0) + goto out; + } + ret = orphanize_inode(sctx, ow_inode, ow_gen, cur->full_path); if (ret < 0)
From: Denis Efremov efremov@linux.com
commit 8eb2fd00153a3a96a19c62ac9c6d48c2efebe5e8 upstream.
btrfs_ioctl_send() used an open-coded kvzalloc implementation earlier. The code was accidentally replaced with a kzalloc() call [1]. Restore the original behavior by using kvzalloc() to allocate sctx->clone_roots.
[1] https://patchwork.kernel.org/patch/9757891/#20529627
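As a general illustration of the pattern (a sketch with a made-up helper, not the actual send.c code): allocations that can grow to many pages should use kvzalloc(), which falls back to vmalloc, and must be freed with kvfree().

  #include <linux/mm.h>
  #include <linux/overflow.h>

  /* Sketch: potentially large, zeroed allocation with a vmalloc fallback. */
  static void *alloc_big_array(size_t count, size_t elem_size)
  {
          void *p;

          p = kvzalloc(array_size(count, elem_size), GFP_KERNEL);
          /* must be freed with kvfree(), not kfree() */
          return p;
  }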
Fixes: 818e010bf9d0 ("btrfs: replace opencoded kvzalloc with the helper") CC: stable@vger.kernel.org # 4.14+ Signed-off-by: Denis Efremov efremov@linux.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/send.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -7300,7 +7300,7 @@ long btrfs_ioctl_send(struct file *mnt_f
alloc_size = sizeof(struct clone_root) * (arg->clone_sources_count + 1);
- sctx->clone_roots = kzalloc(alloc_size, GFP_KERNEL); + sctx->clone_roots = kvzalloc(alloc_size, GFP_KERNEL); if (!sctx->clone_roots) { ret = -ENOMEM; goto out;
From: Qu Wenruo wqu@suse.com
commit 1465af12e254a68706e110846f59cf0f09683184 upstream.
Commit 259ee7754b67 ("btrfs: tree-checker: Add ROOT_ITEM check") introduced a btrfs root item size check. However, the btrfs root item has two versions; the legacy one, which ends just before the generation_v2 member, is smaller than the current btrfs root item size.
This caused the kernel to reject valid but old tree root leaves.
Fix this problem by also allowing the legacy root item, since the kernel can already handle it pretty well and upgrades to the newer root item format when needed.
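The idea of the check, as a simplified sketch (the real validation in the tree-checker diff below also reports the sizes it expected):

  #include <linux/stddef.h>
  #include <linux/btrfs_tree.h>

  /* Sketch: accept either the current or the legacy root item size. */
  static bool root_item_size_ok(u32 item_size)
  {
          const u32 legacy_size = offsetof(struct btrfs_root_item,
                                           generation_v2);

          return item_size == sizeof(struct btrfs_root_item) ||
                 item_size == legacy_size;
  }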
Reported-by: Martin Steigerwald martin@lichtvoll.de Fixes: 259ee7754b67 ("btrfs: tree-checker: Add ROOT_ITEM check") CC: stable@vger.kernel.org # 5.4+ Tested-By: Martin Steigerwald martin@lichtvoll.de Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Qu Wenruo wqu@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/tree-checker.c | 17 ++++++++++++----- include/uapi/linux/btrfs_tree.h | 14 ++++++++++++++ 2 files changed, 26 insertions(+), 5 deletions(-)
--- a/fs/btrfs/tree-checker.c +++ b/fs/btrfs/tree-checker.c @@ -1035,7 +1035,7 @@ static int check_root_item(struct extent int slot) { struct btrfs_fs_info *fs_info = leaf->fs_info; - struct btrfs_root_item ri; + struct btrfs_root_item ri = { 0 }; const u64 valid_root_flags = BTRFS_ROOT_SUBVOL_RDONLY | BTRFS_ROOT_SUBVOL_DEAD; int ret; @@ -1044,14 +1044,21 @@ static int check_root_item(struct extent if (ret < 0) return ret;
- if (btrfs_item_size_nr(leaf, slot) != sizeof(ri)) { + if (btrfs_item_size_nr(leaf, slot) != sizeof(ri) && + btrfs_item_size_nr(leaf, slot) != btrfs_legacy_root_item_size()) { generic_err(leaf, slot, - "invalid root item size, have %u expect %zu", - btrfs_item_size_nr(leaf, slot), sizeof(ri)); + "invalid root item size, have %u expect %zu or %u", + btrfs_item_size_nr(leaf, slot), sizeof(ri), + btrfs_legacy_root_item_size()); }
+ /* + * For legacy root item, the members starting at generation_v2 will be + * all filled with 0. + * And since we allow geneartion_v2 as 0, it will still pass the check. + */ read_extent_buffer(leaf, &ri, btrfs_item_ptr_offset(leaf, slot), - sizeof(ri)); + btrfs_item_size_nr(leaf, slot));
/* Generation related */ if (btrfs_root_generation(&ri) > --- a/include/uapi/linux/btrfs_tree.h +++ b/include/uapi/linux/btrfs_tree.h @@ -4,6 +4,11 @@
#include <linux/btrfs.h> #include <linux/types.h> +#ifdef __KERNEL__ +#include <linux/stddef.h> +#else +#include <stddef.h> +#endif
/* * This header contains the structure definitions and constants used @@ -645,6 +650,15 @@ struct btrfs_root_item { } __attribute__ ((__packed__));
/* + * Btrfs root item used to be smaller than current size. The old format ends + * at where member generation_v2 is. + */ +static inline __u32 btrfs_legacy_root_item_size(void) +{ + return offsetof(struct btrfs_root_item, generation_v2); +} + +/* * this is used for both forward and backward root refs */ struct btrfs_root_ref {
From: Johannes Thumshirn johannes.thumshirn@wdc.com
commit 6b613cc97f0ace77f92f7bc112b8f6ad3f52baf8 upstream.
We have several occurrences of a soft lockup from fstest's generic/175 testcase, which look more or less like this one:
watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [xfs_io:10030] Kernel panic - not syncing: softlockup: hung tasks CPU: 0 PID: 10030 Comm: xfs_io Tainted: G L 5.9.0-rc5+ #768 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-0-gf21b5a4-rebuilt.opensuse.org 04/01/2014 Call Trace: <IRQ> dump_stack+0x77/0xa0 panic+0xfa/0x2cb watchdog_timer_fn.cold+0x85/0xa5 ? lockup_detector_update_enable+0x50/0x50 __hrtimer_run_queues+0x99/0x4c0 ? recalibrate_cpu_khz+0x10/0x10 hrtimer_run_queues+0x9f/0xb0 update_process_times+0x28/0x80 tick_handle_periodic+0x1b/0x60 __sysvec_apic_timer_interrupt+0x76/0x210 asm_call_on_stack+0x12/0x20 </IRQ> sysvec_apic_timer_interrupt+0x7f/0x90 asm_sysvec_apic_timer_interrupt+0x12/0x20 RIP: 0010:btrfs_tree_unlock+0x91/0x1a0 [btrfs] RSP: 0018:ffffc90007123a58 EFLAGS: 00000282 RAX: ffff8881cea2fbe0 RBX: ffff8881cea2fbe0 RCX: 0000000000000000 RDX: ffff8881d23fd200 RSI: ffffffff82045220 RDI: ffff8881cea2fba0 RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000032 R10: 0000160000000000 R11: 0000000000001000 R12: 0000000000001000 R13: ffff8882357fd5b0 R14: ffff88816fa76e70 R15: ffff8881cea2fad0 ? btrfs_tree_unlock+0x15b/0x1a0 [btrfs] btrfs_release_path+0x67/0x80 [btrfs] btrfs_insert_replace_extent+0x177/0x2c0 [btrfs] btrfs_replace_file_extents+0x472/0x7c0 [btrfs] btrfs_clone+0x9ba/0xbd0 [btrfs] btrfs_clone_files.isra.0+0xeb/0x140 [btrfs] ? file_update_time+0xcd/0x120 btrfs_remap_file_range+0x322/0x3b0 [btrfs] do_clone_file_range+0xb7/0x1e0 vfs_clone_file_range+0x30/0xa0 ioctl_file_clone+0x8a/0xc0 do_vfs_ioctl+0x5b2/0x6f0 __x64_sys_ioctl+0x37/0xa0 do_syscall_64+0x33/0x40 entry_SYSCALL_64_after_hwframe+0x44/0xa9 RIP: 0033:0x7f87977fc247 RSP: 002b:00007ffd51a2f6d8 EFLAGS: 00000206 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f87977fc247 RDX: 00007ffd51a2f710 RSI: 000000004020940d RDI: 0000000000000003 RBP: 0000000000000004 R08: 00007ffd51a79080 R09: 0000000000000000 R10: 00005621f11352f2 R11: 0000000000000206 R12: 0000000000000000 R13: 0000000000000000 R14: 00005621f128b958 R15: 0000000080000000 Kernel Offset: disabled ---[ end Kernel panic - not syncing: softlockup: hung tasks ]---
All of these lockup reports have the call chain btrfs_clone_files() -> btrfs_clone() in common. btrfs_clone_files() calls btrfs_clone() with both source and destination extents locked and loops over the source extent to create the clones.
Conditionally reschedule in the btrfs_clone() loop, to give some time back to other processes.
CC: stable@vger.kernel.org # 4.4+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Johannes Thumshirn johannes.thumshirn@wdc.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/reflink.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/fs/btrfs/reflink.c +++ b/fs/btrfs/reflink.c @@ -520,6 +520,8 @@ process_slot: ret = -EINTR; goto out; } + + cond_resched(); } ret = 0;
From: Josef Bacik josef@toxicpanda.com
commit 572c83acdcdafeb04e70aa46be1fa539310be20c upstream.
In fstest btrfs/064 a transaction abort in __btrfs_cow_block could lead to a system lockup. The system gets stuck trying to write back inodes, with the writeback thread trying to lock an extent buffer:
$ cat /proc/2143497/stack
[<0>] __btrfs_tree_lock+0x108/0x250
[<0>] lock_extent_buffer_for_io+0x35e/0x3a0
[<0>] btree_write_cache_pages+0x15a/0x3b0
[<0>] do_writepages+0x28/0xb0
[<0>] __writeback_single_inode+0x54/0x5c0
[<0>] writeback_sb_inodes+0x1e8/0x510
[<0>] wb_writeback+0xcc/0x440
[<0>] wb_workfn+0xd7/0x650
[<0>] process_one_work+0x236/0x560
[<0>] worker_thread+0x55/0x3c0
[<0>] kthread+0x13a/0x150
[<0>] ret_from_fork+0x1f/0x30
This is because we got an error while COWing a block, specifically here
  if (test_bit(BTRFS_ROOT_SHAREABLE, &root->state)) {
          ret = btrfs_reloc_cow_block(trans, root, buf, cow);
          if (ret) {
                  btrfs_abort_transaction(trans, ret);
                  return ret;
          }
  }
[16402.241552] BTRFS: Transaction aborted (error -2) [16402.242362] WARNING: CPU: 1 PID: 2563188 at fs/btrfs/ctree.c:1074 __btrfs_cow_block+0x376/0x540 [16402.249469] CPU: 1 PID: 2563188 Comm: fsstress Not tainted 5.9.0-rc6+ #8 [16402.249936] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-2.fc32 04/01/2014 [16402.250525] RIP: 0010:__btrfs_cow_block+0x376/0x540 [16402.252417] RSP: 0018:ffff9cca40e578b0 EFLAGS: 00010282 [16402.252787] RAX: 0000000000000025 RBX: 0000000000000002 RCX: ffff9132bbd19388 [16402.253278] RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffff9132bbd19380 [16402.254063] RBP: ffff9132b41a49c0 R08: 0000000000000000 R09: 0000000000000000 [16402.254887] R10: 0000000000000000 R11: ffff91324758b080 R12: ffff91326ef17ce0 [16402.255694] R13: ffff91325fc0f000 R14: ffff91326ef176b0 R15: ffff9132815e2000 [16402.256321] FS: 00007f542c6d7b80(0000) GS:ffff9132bbd00000(0000) knlGS:0000000000000000 [16402.256973] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [16402.257374] CR2: 00007f127b83f250 CR3: 0000000133480002 CR4: 0000000000370ee0 [16402.257867] Call Trace: [16402.258072] btrfs_cow_block+0x109/0x230 [16402.258356] btrfs_search_slot+0x530/0x9d0 [16402.258655] btrfs_lookup_file_extent+0x37/0x40 [16402.259155] __btrfs_drop_extents+0x13c/0xd60 [16402.259628] ? btrfs_block_rsv_migrate+0x4f/0xb0 [16402.259949] btrfs_replace_file_extents+0x190/0x820 [16402.260873] btrfs_clone+0x9ae/0xc00 [16402.261139] btrfs_extent_same_range+0x66/0x90 [16402.261771] btrfs_remap_file_range+0x353/0x3b1 [16402.262333] vfs_dedupe_file_range_one.part.0+0xd5/0x140 [16402.262821] vfs_dedupe_file_range+0x189/0x220 [16402.263150] do_vfs_ioctl+0x552/0x700 [16402.263662] __x64_sys_ioctl+0x62/0xb0 [16402.264023] do_syscall_64+0x33/0x40 [16402.264364] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [16402.264862] RIP: 0033:0x7f542c7d15cb [16402.266901] RSP: 002b:00007ffd35944ea8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [16402.267627] RAX: ffffffffffffffda RBX: 00000000009d1968 RCX: 00007f542c7d15cb [16402.268298] RDX: 00000000009d2490 RSI: 00000000c0189436 RDI: 0000000000000003 [16402.268958] RBP: 00000000009d2520 R08: 0000000000000036 R09: 00000000009d2e64 [16402.269726] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000002 [16402.270659] R13: 000000000001f000 R14: 00000000009d1970 R15: 00000000009d2e80 [16402.271498] irq event stamp: 0 [16402.271846] hardirqs last enabled at (0): [<0000000000000000>] 0x0 [16402.272497] hardirqs last disabled at (0): [<ffffffff910dbf59>] copy_process+0x6b9/0x1ba0 [16402.273343] softirqs last enabled at (0): [<ffffffff910dbf59>] copy_process+0x6b9/0x1ba0 [16402.273905] softirqs last disabled at (0): [<0000000000000000>] 0x0 [16402.274338] ---[ end trace 737874a5a41a8236 ]--- [16402.274669] BTRFS: error (device dm-9) in __btrfs_cow_block:1074: errno=-2 No such entry [16402.276179] BTRFS info (device dm-9): forced readonly [16402.277046] BTRFS: error (device dm-9) in btrfs_replace_file_extents:2723: errno=-2 No such entry [16402.278744] BTRFS: error (device dm-9) in __btrfs_cow_block:1074: errno=-2 No such entry [16402.279968] BTRFS: error (device dm-9) in __btrfs_cow_block:1074: errno=-2 No such entry [16402.280582] BTRFS info (device dm-9): balance: ended with status: -30
The problem here is that as soon as we allocate the new block it is locked and marked dirty in the btree inode. This means that we could attempt to write back this block and would need to lock the extent buffer. However, we're not unlocking it here, and thus we deadlock.
Fix this by unlocking the cow block if we have any errors inside of __btrfs_cow_block, and also freeing it so we do not leak it.
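For illustration, here is a minimal userspace sketch of the error-path pattern the fix applies (toy_cow_block(), cow_step_that_can_fail() and the pthread mutex are stand-ins invented for this sketch, not btrfs code): once the new buffer has been locked, every error return must unlock and free it, otherwise another thread doing writeback would block on a lock nobody will ever release.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for an extent buffer; illustrative only, not btrfs code. */
struct toy_buffer {
	pthread_mutex_t lock;
	char data[64];
};

static int cow_step_that_can_fail(void)
{
	return -2;	/* simulate -ENOENT from a later COW step */
}

/* Mirrors the shape of the fix: any failure after the new buffer was
 * locked unlocks and frees it before returning. */
static int toy_cow_block(void)
{
	struct toy_buffer *cow = malloc(sizeof(*cow));
	int ret;

	if (!cow)
		return -12;	/* -ENOMEM */
	pthread_mutex_init(&cow->lock, NULL);
	pthread_mutex_lock(&cow->lock);

	ret = cow_step_that_can_fail();
	if (ret) {
		pthread_mutex_unlock(&cow->lock);
		free(cow);
		return ret;
	}

	/* Success path of the toy: just clean up. */
	pthread_mutex_unlock(&cow->lock);
	free(cow);
	return 0;
}

int main(void)
{
	printf("toy_cow_block() = %d\n", toy_cow_block());
	return 0;
}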
CC: stable@vger.kernel.org # 4.4+ Reviewed-by: Filipe Manana fdmanana@suse.com Signed-off-by: Josef Bacik josef@toxicpanda.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/ctree.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/fs/btrfs/ctree.c +++ b/fs/btrfs/ctree.c @@ -1061,6 +1061,8 @@ static noinline int __btrfs_cow_block(st
ret = update_ref_for_cow(trans, root, buf, cow, &last_ref); if (ret) { + btrfs_tree_unlock(cow); + free_extent_buffer(cow); btrfs_abort_transaction(trans, ret); return ret; } @@ -1068,6 +1070,8 @@ static noinline int __btrfs_cow_block(st if (test_bit(BTRFS_ROOT_SHAREABLE, &root->state)) { ret = btrfs_reloc_cow_block(trans, root, buf, cow); if (ret) { + btrfs_tree_unlock(cow); + free_extent_buffer(cow); btrfs_abort_transaction(trans, ret); return ret; } @@ -1100,6 +1104,8 @@ static noinline int __btrfs_cow_block(st if (last_ref) { ret = tree_mod_log_free_eb(buf); if (ret) { + btrfs_tree_unlock(cow); + free_extent_buffer(cow); btrfs_abort_transaction(trans, ret); return ret; }
From: Anand Jain anand.jain@oracle.com
commit 96c2e067ed3e3e004580a643c76f58729206b829 upstream.
Many things can happen after the device is scanned and before the device is mounted. One such thing is losing the BTRFS_MAGIC on the device. If that happens, we still won't free that device from memory, which causes confusion in userland.
For example: as BTRFS_IOC_DEV_INFO still carries the device path which no longer has the BTRFS_MAGIC, 'btrfs fi show' still lists a device which does not belong to the filesystem anymore:
$ mkfs.btrfs -fq -draid1 -mraid1 /dev/sda /dev/sdb
$ wipefs -a /dev/sdb		# /dev/sdb does not contain magic signature
$ mount -o degraded /dev/sda /btrfs
$ btrfs fi show -m
Label: none  uuid: 470ec6fb-646b-4464-b3cb-df1b26c527bd
	Total devices 2 FS bytes used 128.00KiB
	devid	 1 size 3.00GiB used 571.19MiB path /dev/sda
	devid	 2 size 3.00GiB used 571.19MiB path /dev/sdb
We need to distinguish between a missing signature and an invalid superblock, so add a specific error code, ENODATA, for that. This also fixes the failure of fstest btrfs/198.
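For illustration, a small userspace sketch of the idea (struct dev, probe_ret and open_devices() are names made up for this sketch, not the btrfs code): while walking the device list, a probe result of -ENODATA means the signature is gone and the device is unlinked and forgotten, while any other error keeps the previous "just skip it" behaviour.

#include <errno.h>
#include <stdio.h>

/* Illustrative device-list walk: -ENODATA drops the device, other
 * errors leave it in place. */
struct dev {
	const char *path;
	int probe_ret;		/* 0 = ok, -ENODATA = magic gone, other = I/O error */
	struct dev *next;
};

static struct dev *open_devices(struct dev *head)
{
	struct dev **link = &head;

	while (*link) {
		struct dev *d = *link;

		if (d->probe_ret == -ENODATA) {
			printf("dropping %s: superblock magic missing\n", d->path);
			*link = d->next;	/* unlink and forget the device */
			continue;
		}
		link = &d->next;
	}
	return head;
}

int main(void)
{
	struct dev b = { "/dev/sdb", -ENODATA, NULL };
	struct dev a = { "/dev/sda", 0, &b };

	for (struct dev *d = open_devices(&a); d; d = d->next)
		printf("kept %s\n", d->path);
	return 0;
}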
CC: stable@vger.kernel.org # 4.19+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Anand Jain anand.jain@oracle.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/disk-io.c | 8 ++++++-- fs/btrfs/volumes.c | 18 ++++++++++++------ 2 files changed, 18 insertions(+), 8 deletions(-)
--- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -3482,8 +3482,12 @@ struct btrfs_super_block *btrfs_read_dev return ERR_CAST(page);
super = page_address(page); - if (btrfs_super_bytenr(super) != bytenr || - btrfs_super_magic(super) != BTRFS_MAGIC) { + if (btrfs_super_magic(super) != BTRFS_MAGIC) { + btrfs_release_disk_super(super); + return ERR_PTR(-ENODATA); + } + + if (btrfs_super_bytenr(super) != bytenr) { btrfs_release_disk_super(super); return ERR_PTR(-EINVAL); } --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -1200,17 +1200,23 @@ static int open_fs_devices(struct btrfs_ { struct btrfs_device *device; struct btrfs_device *latest_dev = NULL; + struct btrfs_device *tmp_device;
flags |= FMODE_EXCL;
- list_for_each_entry(device, &fs_devices->devices, dev_list) { - /* Just open everything we can; ignore failures here */ - if (btrfs_open_one_device(fs_devices, device, flags, holder)) - continue; + list_for_each_entry_safe(device, tmp_device, &fs_devices->devices, + dev_list) { + int ret;
- if (!latest_dev || - device->generation > latest_dev->generation) + ret = btrfs_open_one_device(fs_devices, device, flags, holder); + if (ret == 0 && + (!latest_dev || device->generation > latest_dev->generation)) { latest_dev = device; + } else if (ret == -ENODATA) { + fs_devices->num_devices--; + list_del(&device->dev_list); + btrfs_free_device(device); + } } if (fs_devices->open_devices == 0) return -EINVAL;
From: Daniel Xu dxu@dxuuu.xyz
commit 85d07fbe09efd1c529ff3e025e2f0d2c6c96a1b7 upstream.
If there's no parity and num_stripes < ncopies, a crafted image can trigger a division by zero in calc_stripe_length().
The image was generated through fuzzing.
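For illustration, a simplified userspace approximation of the arithmetic (the helper below only mimics the general shape of calc_stripe_length(); the real kernel code takes ncopies and nparity from btrfs_raid_array): both crafted cases rejected by the new tree-checker tests would otherwise turn the divisor into zero.

#include <stdio.h>

/* Simplified: parity profiles subtract the parity stripes, mirrored
 * profiles divide by the number of copies. */
static unsigned long long calc_stripe_length(unsigned long long chunk_len,
					     int num_stripes, int ncopies,
					     int nparity)
{
	int data_stripes = nparity ? num_stripes - nparity
				   : num_stripes / ncopies;

	if (data_stripes == 0) {
		/* This is what the new tree-checker tests prevent. */
		fprintf(stderr, "rejected: would divide by zero\n");
		return 0;
	}
	return chunk_len / data_stripes;
}

int main(void)
{
	/* Sane RAID1-like chunk: 2 stripes, 2 copies, no parity. */
	printf("%llu\n", calc_stripe_length(8ULL << 20, 2, 2, 0));
	/* Crafted: num_stripes < ncopies and no parity. */
	printf("%llu\n", calc_stripe_length(8ULL << 20, 1, 2, 0));
	/* Crafted: num_stripes == nparity. */
	printf("%llu\n", calc_stripe_length(8ULL << 20, 2, 1, 2));
	return 0;
}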
CC: stable@vger.kernel.org # 5.4+ Reviewed-by: Qu Wenruo wqu@suse.com Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=209587 Signed-off-by: Daniel Xu dxu@dxuuu.xyz Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/tree-checker.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+)
--- a/fs/btrfs/tree-checker.c +++ b/fs/btrfs/tree-checker.c @@ -760,18 +760,36 @@ int btrfs_check_chunk_valid(struct exten u64 type; u64 features; bool mixed = false; + int raid_index; + int nparity; + int ncopies;
length = btrfs_chunk_length(leaf, chunk); stripe_len = btrfs_chunk_stripe_len(leaf, chunk); num_stripes = btrfs_chunk_num_stripes(leaf, chunk); sub_stripes = btrfs_chunk_sub_stripes(leaf, chunk); type = btrfs_chunk_type(leaf, chunk); + raid_index = btrfs_bg_flags_to_raid_index(type); + ncopies = btrfs_raid_array[raid_index].ncopies; + nparity = btrfs_raid_array[raid_index].nparity;
if (!num_stripes) { chunk_err(leaf, chunk, logical, "invalid chunk num_stripes, have %u", num_stripes); return -EUCLEAN; } + if (num_stripes < ncopies) { + chunk_err(leaf, chunk, logical, + "invalid chunk num_stripes < ncopies, have %u < %d", + num_stripes, ncopies); + return -EUCLEAN; + } + if (nparity && num_stripes == nparity) { + chunk_err(leaf, chunk, logical, + "invalid chunk num_stripes == nparity, have %u == %d", + num_stripes, nparity); + return -EUCLEAN; + } if (!IS_ALIGNED(logical, fs_info->sectorsize)) { chunk_err(leaf, chunk, logical, "invalid chunk logical, have %llu should aligned to %u",
From: Filipe Manana fdmanana@suse.com
commit 83bc1560e02e25c6439341352024ebe8488f4fbd upstream.
If we fail to find suitable zones for a new readahead extent, we end up leaving a stale pointer in the global readahead extents radix tree (fs_info->reada_tree), which can trigger the following trace later on:
[13367.696354] BUG: kernel NULL pointer dereference, address: 00000000000000b0 [13367.696802] #PF: supervisor read access in kernel mode [13367.697249] #PF: error_code(0x0000) - not-present page [13367.697721] PGD 0 P4D 0 [13367.698171] Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC PTI [13367.698632] CPU: 6 PID: 851214 Comm: btrfs Tainted: G W 5.9.0-rc6-btrfs-next-69 #1 [13367.699100] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 [13367.700069] RIP: 0010:__lock_acquire+0x20a/0x3970 [13367.700562] Code: ff 1f 0f b7 c0 48 0f (...) [13367.701609] RSP: 0018:ffffb14448f57790 EFLAGS: 00010046 [13367.702140] RAX: 0000000000000000 RBX: 29b935140c15e8cf RCX: 0000000000000000 [13367.702698] RDX: 0000000000000002 RSI: ffffffffb3d66bd0 RDI: 0000000000000046 [13367.703240] RBP: ffff8a52ba8ac040 R08: 00000c2866ad9288 R09: 0000000000000001 [13367.703783] R10: 0000000000000001 R11: 00000000b66d9b53 R12: ffff8a52ba8ac9b0 [13367.704330] R13: 0000000000000000 R14: ffff8a532b6333e8 R15: 0000000000000000 [13367.704880] FS: 00007fe1df6b5700(0000) GS:ffff8a5376600000(0000) knlGS:0000000000000000 [13367.705438] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13367.705995] CR2: 00000000000000b0 CR3: 000000022cca8004 CR4: 00000000003706e0 [13367.706565] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [13367.707127] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [13367.707686] Call Trace: [13367.708246] ? ___slab_alloc+0x395/0x740 [13367.708820] ? reada_add_block+0xae/0xee0 [btrfs] [13367.709383] lock_acquire+0xb1/0x480 [13367.709955] ? reada_add_block+0xe0/0xee0 [btrfs] [13367.710537] ? reada_add_block+0xae/0xee0 [btrfs] [13367.711097] ? rcu_read_lock_sched_held+0x5d/0x90 [13367.711659] ? kmem_cache_alloc_trace+0x8d2/0x990 [13367.712221] ? lock_acquired+0x33b/0x470 [13367.712784] _raw_spin_lock+0x34/0x80 [13367.713356] ? reada_add_block+0xe0/0xee0 [btrfs] [13367.713966] reada_add_block+0xe0/0xee0 [btrfs] [13367.714529] ? btrfs_root_node+0x15/0x1f0 [btrfs] [13367.715077] btrfs_reada_add+0x117/0x170 [btrfs] [13367.715620] scrub_stripe+0x21e/0x10d0 [btrfs] [13367.716141] ? kvm_sched_clock_read+0x5/0x10 [13367.716657] ? __lock_acquire+0x41e/0x3970 [13367.717184] ? scrub_chunk+0x60/0x140 [btrfs] [13367.717697] ? find_held_lock+0x32/0x90 [13367.718254] ? scrub_chunk+0x60/0x140 [btrfs] [13367.718773] ? lock_acquired+0x33b/0x470 [13367.719278] ? scrub_chunk+0xcd/0x140 [btrfs] [13367.719786] scrub_chunk+0xcd/0x140 [btrfs] [13367.720291] scrub_enumerate_chunks+0x270/0x5c0 [btrfs] [13367.720787] ? finish_wait+0x90/0x90 [13367.721281] btrfs_scrub_dev+0x1ee/0x620 [btrfs] [13367.721762] ? rcu_read_lock_any_held+0x8e/0xb0 [13367.722235] ? preempt_count_add+0x49/0xa0 [13367.722710] ? __sb_start_write+0x19b/0x290 [13367.723192] btrfs_ioctl+0x7f5/0x36f0 [btrfs] [13367.723660] ? __fget_files+0x101/0x1d0 [13367.724118] ? find_held_lock+0x32/0x90 [13367.724559] ? __fget_files+0x101/0x1d0 [13367.724982] ? __x64_sys_ioctl+0x83/0xb0 [13367.725399] __x64_sys_ioctl+0x83/0xb0 [13367.725802] do_syscall_64+0x33/0x80 [13367.726188] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [13367.726574] RIP: 0033:0x7fe1df7add87 [13367.726948] Code: 00 00 00 48 8b 05 09 91 (...) 
[13367.727763] RSP: 002b:00007fe1df6b4d48 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [13367.728179] RAX: ffffffffffffffda RBX: 000055ce1fb596a0 RCX: 00007fe1df7add87 [13367.728604] RDX: 000055ce1fb596a0 RSI: 00000000c400941b RDI: 0000000000000003 [13367.729021] RBP: 0000000000000000 R08: 00007fe1df6b5700 R09: 0000000000000000 [13367.729431] R10: 00007fe1df6b5700 R11: 0000000000000246 R12: 00007ffd922b07de [13367.729842] R13: 00007ffd922b07df R14: 00007fe1df6b4e40 R15: 0000000000802000 [13367.730275] Modules linked in: btrfs blake2b_generic xor (...) [13367.732638] CR2: 00000000000000b0 [13367.733166] ---[ end trace d298b6805556acd9 ]---
What happens is the following:
1) At reada_find_extent() we don't find any existing readahead extent for the metadata extent starting at logical address X;
2) So we proceed to create a new one. We then call btrfs_map_block() to get information about which stripes contain extent X;
3) After that we iterate over the stripes and create only one zone for the readahead extent - only one because reada_find_zone() returned NULL for all iterations except for one, either because a memory allocation failed or it couldn't find the block group of the extent (it may have just been deleted);
4) We then add the new readahead extent to the readahead extents radix tree at fs_info->reada_tree;
5) Then we iterate over each zone of the new readahead extent, and find that the device used for that zone no longer exists, because it was removed or it was the source device of a device replace operation. Since this left 'have_zone' set to 0, after finishing the loop we jump to the 'error' label, call kfree() on the new readahead extent and return without removing it from the radix tree at fs_info->reada_tree;
6) Any future call to reada_find_extent() for the logical address X will find the stale pointer in the readahead extents radix tree and increment its reference counter, which can either trigger the use-after-free right away or return the stale pointer to the caller reada_add_block(), resulting in the use-after-free shown in the example trace above.
So fix this by making sure we delete the readahead extent from the radix tree if we fail to set up zones for it (when 'have_zone = 0').
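For illustration, a minimal userspace sketch of the publish-then-roll-back pattern (a single pointer slot and a pthread mutex stand in for the radix tree and fs_info->reada_lock; all names are invented for this sketch): if no zone could be set up, the entry is deleted again under the same lock, so later lookups can never see a stale pointer.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static void *tree_slot;		/* keyed lookup structure, reduced to one slot */

static int setup_zones(void)
{
	return 0;		/* pretend every zone setup failed */
}

static void *create_extent(void)
{
	void *re = malloc(16);
	int have_zone;

	if (!re)
		return NULL;

	pthread_mutex_lock(&tree_lock);
	tree_slot = re;			/* publish the new extent */
	have_zone = setup_zones();
	if (!have_zone)
		tree_slot = NULL;	/* roll back so no stale pointer remains */
	pthread_mutex_unlock(&tree_lock);

	if (!have_zone) {
		free(re);
		return NULL;
	}
	return re;
}

int main(void)
{
	void *re = create_extent();

	printf("create_extent() = %p, published slot = %p\n", re, tree_slot);
	return 0;
}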
Fixes: 319450211842ba ("btrfs: reada: bypass adding extent when all zone failed") CC: stable@vger.kernel.org # 4.9+ Reviewed-by: Johannes Thumshirn johannes.thumshirn@wdc.com Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Filipe Manana fdmanana@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/reada.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/fs/btrfs/reada.c +++ b/fs/btrfs/reada.c @@ -445,6 +445,8 @@ static struct reada_extent *reada_find_e } have_zone = 1; } + if (!have_zone) + radix_tree_delete(&fs_info->reada_tree, index); spin_unlock(&fs_info->reada_lock); up_read(&fs_info->dev_replace.rwsem);
From: Filipe Manana fdmanana@suse.com
commit 66d204a16c94f24ad08290a7663ab67e7fc04e82 upstream.
Very sporadically I had test case btrfs/069 from fstests hanging (for years, it is not a recent regression), with the following traces in dmesg/syslog:
[162301.160628] BTRFS info (device sdc): dev_replace from /dev/sdd (devid 2) to /dev/sdg started [162301.181196] BTRFS info (device sdc): scrub: finished on devid 4 with status: 0 [162301.287162] BTRFS info (device sdc): dev_replace from /dev/sdd (devid 2) to /dev/sdg finished [162513.513792] INFO: task btrfs-transacti:1356167 blocked for more than 120 seconds. [162513.514318] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.514522] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.514747] task:btrfs-transacti state:D stack: 0 pid:1356167 ppid: 2 flags:0x00004000 [162513.514751] Call Trace: [162513.514761] __schedule+0x5ce/0xd00 [162513.514765] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.514771] schedule+0x46/0xf0 [162513.514844] wait_current_trans+0xde/0x140 [btrfs] [162513.514850] ? finish_wait+0x90/0x90 [162513.514864] start_transaction+0x37c/0x5f0 [btrfs] [162513.514879] transaction_kthread+0xa4/0x170 [btrfs] [162513.514891] ? btrfs_cleanup_transaction+0x660/0x660 [btrfs] [162513.514894] kthread+0x153/0x170 [162513.514897] ? kthread_stop+0x2c0/0x2c0 [162513.514902] ret_from_fork+0x22/0x30 [162513.514916] INFO: task fsstress:1356184 blocked for more than 120 seconds. [162513.515192] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.515431] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.515680] task:fsstress state:D stack: 0 pid:1356184 ppid:1356177 flags:0x00004000 [162513.515682] Call Trace: [162513.515688] __schedule+0x5ce/0xd00 [162513.515691] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.515697] schedule+0x46/0xf0 [162513.515712] wait_current_trans+0xde/0x140 [btrfs] [162513.515716] ? finish_wait+0x90/0x90 [162513.515729] start_transaction+0x37c/0x5f0 [btrfs] [162513.515743] btrfs_attach_transaction_barrier+0x1f/0x50 [btrfs] [162513.515753] btrfs_sync_fs+0x61/0x1c0 [btrfs] [162513.515758] ? __ia32_sys_fdatasync+0x20/0x20 [162513.515761] iterate_supers+0x87/0xf0 [162513.515765] ksys_sync+0x60/0xb0 [162513.515768] __do_sys_sync+0xa/0x10 [162513.515771] do_syscall_64+0x33/0x80 [162513.515774] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [162513.515781] RIP: 0033:0x7f5238f50bd7 [162513.515782] Code: Bad RIP value. [162513.515784] RSP: 002b:00007fff67b978e8 EFLAGS: 00000206 ORIG_RAX: 00000000000000a2 [162513.515786] RAX: ffffffffffffffda RBX: 000055b1fad2c560 RCX: 00007f5238f50bd7 [162513.515788] RDX: 00000000ffffffff RSI: 000000000daf0e74 RDI: 000000000000003a [162513.515789] RBP: 0000000000000032 R08: 000000000000000a R09: 00007f5239019be0 [162513.515791] R10: fffffffffffff24f R11: 0000000000000206 R12: 000000000000003a [162513.515792] R13: 00007fff67b97950 R14: 00007fff67b97906 R15: 000055b1fad1a340 [162513.515804] INFO: task fsstress:1356185 blocked for more than 120 seconds. [162513.516064] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.516329] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.516617] task:fsstress state:D stack: 0 pid:1356185 ppid:1356177 flags:0x00000000 [162513.516620] Call Trace: [162513.516625] __schedule+0x5ce/0xd00 [162513.516628] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.516634] schedule+0x46/0xf0 [162513.516647] wait_current_trans+0xde/0x140 [btrfs] [162513.516650] ? 
finish_wait+0x90/0x90 [162513.516662] start_transaction+0x4d7/0x5f0 [btrfs] [162513.516679] btrfs_setxattr_trans+0x3c/0x100 [btrfs] [162513.516686] __vfs_setxattr+0x66/0x80 [162513.516691] __vfs_setxattr_noperm+0x70/0x200 [162513.516697] vfs_setxattr+0x6b/0x120 [162513.516703] setxattr+0x125/0x240 [162513.516709] ? lock_acquire+0xb1/0x480 [162513.516712] ? mnt_want_write+0x20/0x50 [162513.516721] ? rcu_read_lock_any_held+0x8e/0xb0 [162513.516723] ? preempt_count_add+0x49/0xa0 [162513.516725] ? __sb_start_write+0x19b/0x290 [162513.516727] ? preempt_count_add+0x49/0xa0 [162513.516732] path_setxattr+0xba/0xd0 [162513.516739] __x64_sys_setxattr+0x27/0x30 [162513.516741] do_syscall_64+0x33/0x80 [162513.516743] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [162513.516745] RIP: 0033:0x7f5238f56d5a [162513.516746] Code: Bad RIP value. [162513.516748] RSP: 002b:00007fff67b97868 EFLAGS: 00000202 ORIG_RAX: 00000000000000bc [162513.516750] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f5238f56d5a [162513.516751] RDX: 000055b1fbb0d5a0 RSI: 00007fff67b978a0 RDI: 000055b1fbb0d470 [162513.516753] RBP: 000055b1fbb0d5a0 R08: 0000000000000001 R09: 00007fff67b97700 [162513.516754] R10: 0000000000000004 R11: 0000000000000202 R12: 0000000000000004 [162513.516756] R13: 0000000000000024 R14: 0000000000000001 R15: 00007fff67b978a0 [162513.516767] INFO: task fsstress:1356196 blocked for more than 120 seconds. [162513.517064] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.517365] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.517763] task:fsstress state:D stack: 0 pid:1356196 ppid:1356177 flags:0x00004000 [162513.517780] Call Trace: [162513.517786] __schedule+0x5ce/0xd00 [162513.517789] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.517796] schedule+0x46/0xf0 [162513.517810] wait_current_trans+0xde/0x140 [btrfs] [162513.517814] ? finish_wait+0x90/0x90 [162513.517829] start_transaction+0x37c/0x5f0 [btrfs] [162513.517845] btrfs_attach_transaction_barrier+0x1f/0x50 [btrfs] [162513.517857] btrfs_sync_fs+0x61/0x1c0 [btrfs] [162513.517862] ? __ia32_sys_fdatasync+0x20/0x20 [162513.517865] iterate_supers+0x87/0xf0 [162513.517869] ksys_sync+0x60/0xb0 [162513.517872] __do_sys_sync+0xa/0x10 [162513.517875] do_syscall_64+0x33/0x80 [162513.517878] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [162513.517881] RIP: 0033:0x7f5238f50bd7 [162513.517883] Code: Bad RIP value. [162513.517885] RSP: 002b:00007fff67b978e8 EFLAGS: 00000206 ORIG_RAX: 00000000000000a2 [162513.517887] RAX: ffffffffffffffda RBX: 000055b1fad2c560 RCX: 00007f5238f50bd7 [162513.517889] RDX: 0000000000000000 RSI: 000000007660add2 RDI: 0000000000000053 [162513.517891] RBP: 0000000000000032 R08: 0000000000000067 R09: 00007f5239019be0 [162513.517893] R10: fffffffffffff24f R11: 0000000000000206 R12: 0000000000000053 [162513.517895] R13: 00007fff67b97950 R14: 00007fff67b97906 R15: 000055b1fad1a340 [162513.517908] INFO: task fsstress:1356197 blocked for more than 120 seconds. [162513.518298] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.518672] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.519157] task:fsstress state:D stack: 0 pid:1356197 ppid:1356177 flags:0x00000000 [162513.519160] Call Trace: [162513.519165] __schedule+0x5ce/0xd00 [162513.519168] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.519174] schedule+0x46/0xf0 [162513.519190] wait_current_trans+0xde/0x140 [btrfs] [162513.519193] ? 
finish_wait+0x90/0x90 [162513.519206] start_transaction+0x4d7/0x5f0 [btrfs] [162513.519222] btrfs_create+0x57/0x200 [btrfs] [162513.519230] lookup_open+0x522/0x650 [162513.519246] path_openat+0x2b8/0xa50 [162513.519270] do_filp_open+0x91/0x100 [162513.519275] ? find_held_lock+0x32/0x90 [162513.519280] ? lock_acquired+0x33b/0x470 [162513.519285] ? do_raw_spin_unlock+0x4b/0xc0 [162513.519287] ? _raw_spin_unlock+0x29/0x40 [162513.519295] do_sys_openat2+0x20d/0x2d0 [162513.519300] do_sys_open+0x44/0x80 [162513.519304] do_syscall_64+0x33/0x80 [162513.519307] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [162513.519309] RIP: 0033:0x7f5238f4a903 [162513.519310] Code: Bad RIP value. [162513.519312] RSP: 002b:00007fff67b97758 EFLAGS: 00000246 ORIG_RAX: 0000000000000055 [162513.519314] RAX: ffffffffffffffda RBX: 00000000ffffffff RCX: 00007f5238f4a903 [162513.519316] RDX: 0000000000000000 RSI: 00000000000001b6 RDI: 000055b1fbb0d470 [162513.519317] RBP: 00007fff67b978c0 R08: 0000000000000001 R09: 0000000000000002 [162513.519319] R10: 00007fff67b974f7 R11: 0000000000000246 R12: 0000000000000013 [162513.519320] R13: 00000000000001b6 R14: 00007fff67b97906 R15: 000055b1fad1c620 [162513.519332] INFO: task btrfs:1356211 blocked for more than 120 seconds. [162513.519727] Not tainted 5.9.0-rc6-btrfs-next-69 #1 [162513.520115] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [162513.520508] task:btrfs state:D stack: 0 pid:1356211 ppid:1356178 flags:0x00004002 [162513.520511] Call Trace: [162513.520516] __schedule+0x5ce/0xd00 [162513.520519] ? _raw_spin_unlock_irqrestore+0x3c/0x60 [162513.520525] schedule+0x46/0xf0 [162513.520544] btrfs_scrub_pause+0x11f/0x180 [btrfs] [162513.520548] ? finish_wait+0x90/0x90 [162513.520562] btrfs_commit_transaction+0x45a/0xc30 [btrfs] [162513.520574] ? start_transaction+0xe0/0x5f0 [btrfs] [162513.520596] btrfs_dev_replace_finishing+0x6d8/0x711 [btrfs] [162513.520619] btrfs_dev_replace_by_ioctl.cold+0x1cc/0x1fd [btrfs] [162513.520639] btrfs_ioctl+0x2a25/0x36f0 [btrfs] [162513.520643] ? do_sigaction+0xf3/0x240 [162513.520645] ? find_held_lock+0x32/0x90 [162513.520648] ? do_sigaction+0xf3/0x240 [162513.520651] ? lock_acquired+0x33b/0x470 [162513.520655] ? _raw_spin_unlock_irq+0x24/0x50 [162513.520657] ? lockdep_hardirqs_on+0x7d/0x100 [162513.520660] ? _raw_spin_unlock_irq+0x35/0x50 [162513.520662] ? do_sigaction+0xf3/0x240 [162513.520671] ? __x64_sys_ioctl+0x83/0xb0 [162513.520672] __x64_sys_ioctl+0x83/0xb0 [162513.520677] do_syscall_64+0x33/0x80 [162513.520679] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [162513.520681] RIP: 0033:0x7fc3cd307d87 [162513.520682] Code: Bad RIP value. 
[162513.520684] RSP: 002b:00007ffe30a56bb8 EFLAGS: 00000202 ORIG_RAX: 0000000000000010 [162513.520686] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007fc3cd307d87 [162513.520687] RDX: 00007ffe30a57a30 RSI: 00000000ca289435 RDI: 0000000000000003 [162513.520689] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000 [162513.520690] R10: 0000000000000008 R11: 0000000000000202 R12: 0000000000000003 [162513.520692] R13: 0000557323a212e0 R14: 00007ffe30a5a520 R15: 0000000000000001 [162513.520703] Showing all locks held in the system: [162513.520712] 1 lock held by khungtaskd/54: [162513.520713] #0: ffffffffb40a91a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x15/0x197 [162513.520728] 1 lock held by in:imklog/596: [162513.520729] #0: ffff8f3f0d781400 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0x4d/0x60 [162513.520782] 1 lock held by btrfs-transacti/1356167: [162513.520784] #0: ffff8f3d810cc848 (&fs_info->transaction_kthread_mutex){+.+.}-{3:3}, at: transaction_kthread+0x4a/0x170 [btrfs] [162513.520798] 1 lock held by btrfs/1356190: [162513.520800] #0: ffff8f3d57644470 (sb_writers#15){.+.+}-{0:0}, at: mnt_want_write_file+0x22/0x60 [162513.520805] 1 lock held by fsstress/1356184: [162513.520806] #0: ffff8f3d576440e8 (&type->s_umount_key#62){++++}-{3:3}, at: iterate_supers+0x6f/0xf0 [162513.520811] 3 locks held by fsstress/1356185: [162513.520812] #0: ffff8f3d57644470 (sb_writers#15){.+.+}-{0:0}, at: mnt_want_write+0x20/0x50 [162513.520815] #1: ffff8f3d80a650b8 (&type->i_mutex_dir_key#10){++++}-{3:3}, at: vfs_setxattr+0x50/0x120 [162513.520820] #2: ffff8f3d57644690 (sb_internal#2){.+.+}-{0:0}, at: start_transaction+0x40e/0x5f0 [btrfs] [162513.520833] 1 lock held by fsstress/1356196: [162513.520834] #0: ffff8f3d576440e8 (&type->s_umount_key#62){++++}-{3:3}, at: iterate_supers+0x6f/0xf0 [162513.520838] 3 locks held by fsstress/1356197: [162513.520839] #0: ffff8f3d57644470 (sb_writers#15){.+.+}-{0:0}, at: mnt_want_write+0x20/0x50 [162513.520843] #1: ffff8f3d506465e8 (&type->i_mutex_dir_key#10){++++}-{3:3}, at: path_openat+0x2a7/0xa50 [162513.520846] #2: ffff8f3d57644690 (sb_internal#2){.+.+}-{0:0}, at: start_transaction+0x40e/0x5f0 [btrfs] [162513.520858] 2 locks held by btrfs/1356211: [162513.520859] #0: ffff8f3d810cde30 (&fs_info->dev_replace.lock_finishing_cancel_unmount){+.+.}-{3:3}, at: btrfs_dev_replace_finishing+0x52/0x711 [btrfs] [162513.520877] #1: ffff8f3d57644690 (sb_internal#2){.+.+}-{0:0}, at: start_transaction+0x40e/0x5f0 [btrfs]
This was weird because the stack traces show that a transaction commit, triggered by a device replace operation, is blocked trying to pause any running scrubs, but there are no stack traces of blocked tasks doing a scrub.
After poking around with drgn, I noticed there was a scrub task that was constantly running and blocking for short periods of time:
t = find_task(prog, 1356190)
prog.stack_trace(t)
#0  __schedule+0x5ce/0xcfc
#1  schedule+0x46/0xe4
#2  schedule_timeout+0x1df/0x475
#3  btrfs_reada_wait+0xda/0x132
#4  scrub_stripe+0x2a8/0x112f
#5  scrub_chunk+0xcd/0x134
#6  scrub_enumerate_chunks+0x29e/0x5ee
#7  btrfs_scrub_dev+0x2d5/0x91b
#8  btrfs_ioctl+0x7f5/0x36e7
#9  __x64_sys_ioctl+0x83/0xb0
#10 do_syscall_64+0x33/0x77
#11 entry_SYSCALL_64+0x7c/0x156
Which corresponds to:
int btrfs_reada_wait(void *handle)
{
	struct reada_control *rc = handle;
	struct btrfs_fs_info *fs_info = rc->fs_info;

	while (atomic_read(&rc->elems)) {
		if (!atomic_read(&fs_info->reada_works_cnt))
			reada_start_machine(fs_info);
		wait_event_timeout(rc->wait, atomic_read(&rc->elems) == 0,
				   (HZ + 9) / 10);
	}
	(...)
So the counter "rc->elems" was set to 1 and never decreased to 0, causing the scrub task to loop forever in that function. Then I used the following script for drgn to check the readahead requests:
$ cat dump_reada.py
import sys
import drgn
from drgn import NULL, Object, cast, container_of, execscript, \
    reinterpret, sizeof
from drgn.helpers.linux import *
mnt_path = b"/home/fdmanana/btrfs-tests/scratch_1"
mnt = None
for mnt in for_each_mount(prog, dst = mnt_path):
    pass
if mnt is None:
    sys.stderr.write(f'Error: mount point {mnt_path} not found\n')
    sys.exit(1)
fs_info = cast('struct btrfs_fs_info *', mnt.mnt.mnt_sb.s_fs_info)
def dump_re(re):
    nzones = re.nzones.value_()
    print(f're at {hex(re.value_())}')
    print(f'\t logical {re.logical.value_()}')
    print(f'\t refcnt {re.refcnt.value_()}')
    print(f'\t nzones {nzones}')
    for i in range(nzones):
        dev = re.zones[i].device
        name = dev.name.str.string_()
        print(f'\t\t dev id {dev.devid.value_()} name {name}')
    print()
for _, e in radix_tree_for_each(fs_info.reada_tree):
    re = cast('struct reada_extent *', e)
    dump_re(re)
$ drgn dump_reada.py
re at 0xffff8f3da9d25ad8
	 logical 38928384
	 refcnt 1
	 nzones 1
		 dev id 0 name b'/dev/sdd'
$
So there was one readahead extent with a single zone corresponding to the source device of that last device replace operation logged in dmesg/syslog. Also, the ID of that zone's device was 0, which is a special value set in the source device of a device replace operation when the operation finishes (constant BTRFS_DEV_REPLACE_DEVID, set at btrfs_dev_replace_finishing()), confirming again that device /dev/sdd was the source of a device replace operation.
Normally there should be as many zones in the readahead extent as there are devices, and I wasn't expecting the extent to be in a block group with a 'single' profile, so I went and confirmed with the following drgn script that there weren't any single profile block groups:
$ cat dump_block_groups.py
import sys
import drgn
from drgn import NULL, Object, cast, container_of, execscript, \
    reinterpret, sizeof
from drgn.helpers.linux import *
mnt_path = b"/home/fdmanana/btrfs-tests/scratch_1"
mnt = None
for mnt in for_each_mount(prog, dst = mnt_path):
    pass
if mnt is None:
    sys.stderr.write(f'Error: mount point {mnt_path} not found\n')
    sys.exit(1)
fs_info = cast('struct btrfs_fs_info *', mnt.mnt.mnt_sb.s_fs_info)
BTRFS_BLOCK_GROUP_DATA = (1 << 0)
BTRFS_BLOCK_GROUP_SYSTEM = (1 << 1)
BTRFS_BLOCK_GROUP_METADATA = (1 << 2)
BTRFS_BLOCK_GROUP_RAID0 = (1 << 3)
BTRFS_BLOCK_GROUP_RAID1 = (1 << 4)
BTRFS_BLOCK_GROUP_DUP = (1 << 5)
BTRFS_BLOCK_GROUP_RAID10 = (1 << 6)
BTRFS_BLOCK_GROUP_RAID5 = (1 << 7)
BTRFS_BLOCK_GROUP_RAID6 = (1 << 8)
BTRFS_BLOCK_GROUP_RAID1C3 = (1 << 9)
BTRFS_BLOCK_GROUP_RAID1C4 = (1 << 10)
def bg_flags_string(bg):
    flags = bg.flags.value_()
    ret = ''
    if flags & BTRFS_BLOCK_GROUP_DATA:
        ret = 'data'
    if flags & BTRFS_BLOCK_GROUP_METADATA:
        if len(ret) > 0:
            ret += '|'
        ret += 'meta'
    if flags & BTRFS_BLOCK_GROUP_SYSTEM:
        if len(ret) > 0:
            ret += '|'
        ret += 'system'
    if flags & BTRFS_BLOCK_GROUP_RAID0:
        ret += ' raid0'
    elif flags & BTRFS_BLOCK_GROUP_RAID1:
        ret += ' raid1'
    elif flags & BTRFS_BLOCK_GROUP_DUP:
        ret += ' dup'
    elif flags & BTRFS_BLOCK_GROUP_RAID10:
        ret += ' raid10'
    elif flags & BTRFS_BLOCK_GROUP_RAID5:
        ret += ' raid5'
    elif flags & BTRFS_BLOCK_GROUP_RAID6:
        ret += ' raid6'
    elif flags & BTRFS_BLOCK_GROUP_RAID1C3:
        ret += ' raid1c3'
    elif flags & BTRFS_BLOCK_GROUP_RAID1C4:
        ret += ' raid1c4'
    else:
        ret += ' single'

    return ret
def dump_bg(bg):
    print()
    print(f'block group at {hex(bg.value_())}')
    print(f'\t start {bg.start.value_()} length {bg.length.value_()}')
    print(f'\t flags {bg.flags.value_()} - {bg_flags_string(bg)}')
bg_root = fs_info.block_group_cache_tree.address_of_()
for bg in rbtree_inorder_for_each_entry('struct btrfs_block_group',
                                        bg_root, 'cache_node'):
    dump_bg(bg)
$ drgn dump_block_groups.py
block group at 0xffff8f3d673b0400
	 start 22020096 length 16777216
	 flags 258 - system raid6

block group at 0xffff8f3d53ddb400
	 start 38797312 length 536870912
	 flags 260 - meta raid6

block group at 0xffff8f3d5f4d9c00
	 start 575668224 length 2147483648
	 flags 257 - data raid6

block group at 0xffff8f3d08189000
	 start 2723151872 length 67108864
	 flags 258 - system raid6

block group at 0xffff8f3db70ff000
	 start 2790260736 length 1073741824
	 flags 260 - meta raid6

block group at 0xffff8f3d5f4dd800
	 start 3864002560 length 67108864
	 flags 258 - system raid6

block group at 0xffff8f3d67037000
	 start 3931111424 length 2147483648
	 flags 257 - data raid6
$
So there were only 2 reasons left for having a readahead extent with a single zone: reada_find_zone(), called when creating a readahead extent, returned NULL either because we failed to find the corresponding block group or because a memory allocation failed. With some additional and custom tracing I figured out that on every further occurrence of the problem the block group had just been deleted when we were looping to create the zones for the readahead extent (at reada_find_extent()), so we ended up with only one zone in the readahead extent, corresponding to a device that ends up getting replaced.
So after figuring that out it became obvious why the hang happens:
1) Task A starts a scrub on any device of the filesystem, except for device /dev/sdd;
2) Task B starts a device replace with /dev/sdd as the source device;
3) Task A calls btrfs_reada_add() from scrub_stripe() and it is currently starting to scrub a stripe from block group X. This call to btrfs_reada_add() is the one for the extent tree. When btrfs_reada_add() calls reada_add_block(), it passes the logical address of the extent tree's root node as its 'logical' argument - a value of 38928384;
4) Task A then enters reada_find_extent(), called from reada_add_block(). It finds there isn't any existing readahead extent for the logical address 38928384, so it proceeds to the path of creating a new one.
It calls btrfs_map_block() to find out which stripes exist for the block group X. On the first iteration of the for loop that iterates over the stripes, it finds the stripe for device /dev/sdd, so it creates one zone for that device and adds it to the readahead extent. Before getting into the second iteration of the loop, the cleanup kthread deletes block group X because it was empty. So in the iterations for the remaining stripes it does not add more zones to the readahead extent, because the calls to reada_find_zone() returned NULL because they couldn't find block group X anymore.
As a result the new readahead extent has a single zone, corresponding to the device /dev/sdd;
5) Before task A returns to btrfs_reada_add() and queues the readahead job for the readahead work queue, task B finishes the device replace and at btrfs_dev_replace_finishing() swaps the device /dev/sdd with the new device /dev/sdg;
6) Task A returns to reada_add_block(), which increments the counter "->elems" of the reada_control structure allocated at btrfs_reada_add().
Then it returns back to btrfs_reada_add() and calls reada_start_machine(). This queues a job in the readahead work queue to run the function reada_start_machine_worker(), which calls __reada_start_machine().
At __reada_start_machine() we take the device list mutex and for each device found in the current device list, we call reada_start_machine_dev() to start the readahead work. However at this point the device /dev/sdd was already freed and is not in the device list anymore.
This means the corresponding readahead for the extent at 38928384 is never started, and therefore the "->elems" counter of the reada_control structure allocated at btrfs_reada_add() never goes down to 0, causing the call to btrfs_reada_wait(), done by the scrub task, to wait forever.
Note that the readahead request can be made either after the device replace started or before it started, however in practice it is very unlikely that a device replace is able to start after a readahead request is made and is able to complete before the readahead request completes - maybe only on a very small and nearly empty filesystem.
This hang however is not the only problem we can have with readahead and device removals. When the readahead extent has zones other than the one corresponding to the device that is being removed (either by a device replace or a device remove operation), we risk having a use-after-free on the device when dropping the last reference of the readahead extent.
For example if we create a readahead extent with two zones, one for the device /dev/sdd and one for the device /dev/sde:
1) Before the readahead worker starts, the device /dev/sdd is removed, and the corresponding btrfs_device structure is freed. However the readahead extent still has the zone pointing to the device structure;
2) When the readahead worker starts, it only finds device /dev/sde in the current device list of the filesystem;
3) It starts the readahead work, at reada_start_machine_dev(), using the device /dev/sde;
4) Then when it finishes reading the extent from device /dev/sde, it calls __readahead_hook() which ends up dropping the last reference on the readahead extent through the last call to reada_extent_put();
5) At reada_extent_put() it iterates over each zone of the readahead extent and attempts to delete an element from the device's 'reada_extents' radix tree, resulting in a use-after-free, as the device pointer of the zone for /dev/sdd is now stale. We can also access the device after dropping the last reference of a zone, through reada_zone_release(), also called by reada_extent_put().
And a device remove suffers from the same problem; however, since it shrinks the device size down to zero before removing the device, it is very unlikely to still have readahead requests not completed by the time we free the device. The only possibility is if the device had very little space allocated.
While the hang problem is exclusive to scrub, since it is currently the only user of btrfs_reada_add() and btrfs_reada_wait(), the use-after-free problem affects any path that triggers readahead, which includes btree_readahead_hook() and __readahead_hook() (a readahead worker can trigger readahead for the children of a node) for example - any path that ends up calling reada_add_block() can trigger the use-after-free after a device is removed.
So fix this by waiting for any readahead requests for a device to complete before removing the device, and by ensuring that no new ones can be queued while we wait for the existing ones to finish.
This problem has been around for a very long time - the readahead code was added in 2011, device remove has existed since 2008, and device replace was introduced in 2013 - so it is hard to pick a specific commit for a git Fixes tag.
CC: stable@vger.kernel.org # 4.4+ Reviewed-by: Josef Bacik josef@toxicpanda.com Signed-off-by: Filipe Manana fdmanana@suse.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/ctree.h | 2 ++ fs/btrfs/dev-replace.c | 5 +++++ fs/btrfs/reada.c | 45 +++++++++++++++++++++++++++++++++++++++++++++ fs/btrfs/volumes.c | 3 +++ fs/btrfs/volumes.h | 1 + 5 files changed, 56 insertions(+)
--- a/fs/btrfs/ctree.h +++ b/fs/btrfs/ctree.h @@ -3517,6 +3517,8 @@ struct reada_control *btrfs_reada_add(st int btrfs_reada_wait(void *handle); void btrfs_reada_detach(void *handle); int btree_readahead_hook(struct extent_buffer *eb, int err); +void btrfs_reada_remove_dev(struct btrfs_device *dev); +void btrfs_reada_undo_remove_dev(struct btrfs_device *dev);
static inline int is_fstree(u64 rootid) { --- a/fs/btrfs/dev-replace.c +++ b/fs/btrfs/dev-replace.c @@ -668,6 +668,9 @@ static int btrfs_dev_replace_finishing(s } btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);
+ if (!scrub_ret) + btrfs_reada_remove_dev(src_device); + /* * We have to use this loop approach because at this point src_device * has to be available for transaction commit to complete, yet new @@ -676,6 +679,7 @@ static int btrfs_dev_replace_finishing(s while (1) { trans = btrfs_start_transaction(root, 0); if (IS_ERR(trans)) { + btrfs_reada_undo_remove_dev(src_device); mutex_unlock(&dev_replace->lock_finishing_cancel_unmount); return PTR_ERR(trans); } @@ -726,6 +730,7 @@ error: up_write(&dev_replace->rwsem); mutex_unlock(&fs_info->chunk_mutex); mutex_unlock(&fs_info->fs_devices->device_list_mutex); + btrfs_reada_undo_remove_dev(src_device); btrfs_rm_dev_replace_blocked(fs_info); if (tgt_device) btrfs_destroy_dev_replace_tgtdev(tgt_device); --- a/fs/btrfs/reada.c +++ b/fs/btrfs/reada.c @@ -421,6 +421,9 @@ static struct reada_extent *reada_find_e if (!dev->bdev) continue;
+ if (test_bit(BTRFS_DEV_STATE_NO_READA, &dev->dev_state)) + continue; + if (dev_replace_is_ongoing && dev == fs_info->dev_replace.tgtdev) { /* @@ -1014,3 +1017,45 @@ void btrfs_reada_detach(void *handle)
kref_put(&rc->refcnt, reada_control_release); } + +/* + * Before removing a device (device replace or device remove ioctls), call this + * function to wait for all existing readahead requests on the device and to + * make sure no one queues more readahead requests for the device. + * + * Must be called without holding neither the device list mutex nor the device + * replace semaphore, otherwise it will deadlock. + */ +void btrfs_reada_remove_dev(struct btrfs_device *dev) +{ + struct btrfs_fs_info *fs_info = dev->fs_info; + + /* Serialize with readahead extent creation at reada_find_extent(). */ + spin_lock(&fs_info->reada_lock); + set_bit(BTRFS_DEV_STATE_NO_READA, &dev->dev_state); + spin_unlock(&fs_info->reada_lock); + + /* + * There might be readahead requests added to the radix trees which + * were not yet added to the readahead work queue. We need to start + * them and wait for their completion, otherwise we can end up with + * use-after-free problems when dropping the last reference on the + * readahead extents and their zones, as they need to access the + * device structure. + */ + reada_start_machine(fs_info); + btrfs_flush_workqueue(fs_info->readahead_workers); +} + +/* + * If when removing a device (device replace or device remove ioctls) an error + * happens after calling btrfs_reada_remove_dev(), call this to undo what that + * function did. This is safe to call even if btrfs_reada_remove_dev() was not + * called before. + */ +void btrfs_reada_undo_remove_dev(struct btrfs_device *dev) +{ + spin_lock(&dev->fs_info->reada_lock); + clear_bit(BTRFS_DEV_STATE_NO_READA, &dev->dev_state); + spin_unlock(&dev->fs_info->reada_lock); +} --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -2104,6 +2104,8 @@ int btrfs_rm_device(struct btrfs_fs_info
mutex_unlock(&uuid_mutex); ret = btrfs_shrink_device(device, 0); + if (!ret) + btrfs_reada_remove_dev(device); mutex_lock(&uuid_mutex); if (ret) goto error_undo; @@ -2191,6 +2193,7 @@ out: return ret;
error_undo: + btrfs_reada_undo_remove_dev(device); if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) { mutex_lock(&fs_info->chunk_mutex); list_add(&device->dev_alloc_list, --- a/fs/btrfs/volumes.h +++ b/fs/btrfs/volumes.h @@ -50,6 +50,7 @@ struct btrfs_io_geometry { #define BTRFS_DEV_STATE_MISSING (2) #define BTRFS_DEV_STATE_REPLACE_TGT (3) #define BTRFS_DEV_STATE_FLUSH_SENT (4) +#define BTRFS_DEV_STATE_NO_READA (5)
struct btrfs_device { struct list_head dev_list; /* device_list_mutex */
From: Josef Bacik josef@toxicpanda.com
commit 7837fa88704a66257404bb14144c9e4ab631a28a upstream.
Dave reported a problem with my rwsem conversion patch where we got the following lockdep splat:
====================================================== WARNING: possible circular locking dependency detected 5.9.0-default+ #1297 Not tainted ------------------------------------------------------ kswapd0/76 is trying to acquire lock: ffff9d5d25df2530 (&delayed_node->mutex){+.+.}-{3:3}, at: __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs]
but task is already holding lock: ffffffffa40cbba0 (fs_reclaim){+.+.}-{0:0}, at: __fs_reclaim_acquire+0x5/0x30
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #4 (fs_reclaim){+.+.}-{0:0}: __lock_acquire+0x582/0xac0 lock_acquire+0xca/0x430 fs_reclaim_acquire.part.0+0x25/0x30 kmem_cache_alloc+0x30/0x9c0 alloc_inode+0x81/0x90 iget_locked+0xcd/0x1a0 kernfs_get_inode+0x1b/0x130 kernfs_get_tree+0x136/0x210 sysfs_get_tree+0x1a/0x50 vfs_get_tree+0x1d/0xb0 path_mount+0x70f/0xa80 do_mount+0x75/0x90 __x64_sys_mount+0x8e/0xd0 do_syscall_64+0x2d/0x70 entry_SYSCALL_64_after_hwframe+0x44/0xa9
-> #3 (kernfs_mutex){+.+.}-{3:3}: __lock_acquire+0x582/0xac0 lock_acquire+0xca/0x430 __mutex_lock+0xa0/0xaf0 kernfs_add_one+0x23/0x150 kernfs_create_dir_ns+0x58/0x80 sysfs_create_dir_ns+0x70/0xd0 kobject_add_internal+0xbb/0x2d0 kobject_add+0x7a/0xd0 btrfs_sysfs_add_block_group_type+0x141/0x1d0 [btrfs] btrfs_read_block_groups+0x1f1/0x8c0 [btrfs] open_ctree+0x981/0x1108 [btrfs] btrfs_mount_root.cold+0xe/0xb0 [btrfs] legacy_get_tree+0x2d/0x60 vfs_get_tree+0x1d/0xb0 fc_mount+0xe/0x40 vfs_kern_mount.part.0+0x71/0x90 btrfs_mount+0x13b/0x3e0 [btrfs] legacy_get_tree+0x2d/0x60 vfs_get_tree+0x1d/0xb0 path_mount+0x70f/0xa80 do_mount+0x75/0x90 __x64_sys_mount+0x8e/0xd0 do_syscall_64+0x2d/0x70 entry_SYSCALL_64_after_hwframe+0x44/0xa9
-> #2 (btrfs-extent-00){++++}-{3:3}: __lock_acquire+0x582/0xac0 lock_acquire+0xca/0x430 down_read_nested+0x45/0x220 __btrfs_tree_read_lock+0x35/0x1c0 [btrfs] __btrfs_read_lock_root_node+0x3a/0x50 [btrfs] btrfs_search_slot+0x6d4/0xfd0 [btrfs] check_committed_ref+0x69/0x200 [btrfs] btrfs_cross_ref_exist+0x65/0xb0 [btrfs] run_delalloc_nocow+0x446/0x9b0 [btrfs] btrfs_run_delalloc_range+0x61/0x6a0 [btrfs] writepage_delalloc+0xae/0x160 [btrfs] __extent_writepage+0x262/0x420 [btrfs] extent_write_cache_pages+0x2b6/0x510 [btrfs] extent_writepages+0x43/0x90 [btrfs] do_writepages+0x40/0xe0 __writeback_single_inode+0x62/0x610 writeback_sb_inodes+0x20f/0x500 wb_writeback+0xef/0x4a0 wb_do_writeback+0x49/0x2e0 wb_workfn+0x81/0x340 process_one_work+0x233/0x5d0 worker_thread+0x50/0x3b0 kthread+0x137/0x150 ret_from_fork+0x1f/0x30
-> #1 (btrfs-fs-00){++++}-{3:3}: __lock_acquire+0x582/0xac0 lock_acquire+0xca/0x430 down_read_nested+0x45/0x220 __btrfs_tree_read_lock+0x35/0x1c0 [btrfs] __btrfs_read_lock_root_node+0x3a/0x50 [btrfs] btrfs_search_slot+0x6d4/0xfd0 [btrfs] btrfs_lookup_inode+0x3a/0xc0 [btrfs] __btrfs_update_delayed_inode+0x93/0x2c0 [btrfs] __btrfs_commit_inode_delayed_items+0x7de/0x850 [btrfs] __btrfs_run_delayed_items+0x8e/0x140 [btrfs] btrfs_commit_transaction+0x367/0xbc0 [btrfs] btrfs_mksubvol+0x2db/0x470 [btrfs] btrfs_mksnapshot+0x7b/0xb0 [btrfs] __btrfs_ioctl_snap_create+0x16f/0x1a0 [btrfs] btrfs_ioctl_snap_create_v2+0xb0/0xf0 [btrfs] btrfs_ioctl+0xd0b/0x2690 [btrfs] __x64_sys_ioctl+0x6f/0xa0 do_syscall_64+0x2d/0x70 entry_SYSCALL_64_after_hwframe+0x44/0xa9
-> #0 (&delayed_node->mutex){+.+.}-{3:3}: check_prev_add+0x91/0xc60 validate_chain+0xa6e/0x2a20 __lock_acquire+0x582/0xac0 lock_acquire+0xca/0x430 __mutex_lock+0xa0/0xaf0 __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs] btrfs_evict_inode+0x3cc/0x560 [btrfs] evict+0xd6/0x1c0 dispose_list+0x48/0x70 prune_icache_sb+0x54/0x80 super_cache_scan+0x121/0x1a0 do_shrink_slab+0x16d/0x3b0 shrink_slab+0xb1/0x2e0 shrink_node+0x230/0x6a0 balance_pgdat+0x325/0x750 kswapd+0x206/0x4d0 kthread+0x137/0x150 ret_from_fork+0x1f/0x30
other info that might help us debug this:
Chain exists of: &delayed_node->mutex --> kernfs_mutex --> fs_reclaim
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(kernfs_mutex);
                               lock(fs_reclaim);
  lock(&delayed_node->mutex);
*** DEADLOCK ***
3 locks held by kswapd0/76:
 #0: ffffffffa40cbba0 (fs_reclaim){+.+.}-{0:0}, at: __fs_reclaim_acquire+0x5/0x30
 #1: ffffffffa40b8b58 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab+0x54/0x2e0
 #2: ffff9d5d322390e8 (&type->s_umount_key#26){++++}-{3:3}, at: trylock_super+0x16/0x50
stack backtrace: CPU: 2 PID: 76 Comm: kswapd0 Not tainted 5.9.0-default+ #1297 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba527-rebuilt.opensuse.org 04/01/2014 Call Trace: dump_stack+0x77/0x97 check_noncircular+0xff/0x110 ? save_trace+0x50/0x470 check_prev_add+0x91/0xc60 validate_chain+0xa6e/0x2a20 ? save_trace+0x50/0x470 __lock_acquire+0x582/0xac0 lock_acquire+0xca/0x430 ? __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs] __mutex_lock+0xa0/0xaf0 ? __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs] ? __lock_acquire+0x582/0xac0 ? __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs] ? btrfs_evict_inode+0x30b/0x560 [btrfs] ? __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs] __btrfs_release_delayed_node.part.0+0x3f/0x320 [btrfs] btrfs_evict_inode+0x3cc/0x560 [btrfs] evict+0xd6/0x1c0 dispose_list+0x48/0x70 prune_icache_sb+0x54/0x80 super_cache_scan+0x121/0x1a0 do_shrink_slab+0x16d/0x3b0 shrink_slab+0xb1/0x2e0 shrink_node+0x230/0x6a0 balance_pgdat+0x325/0x750 kswapd+0x206/0x4d0 ? finish_wait+0x90/0x90 ? balance_pgdat+0x750/0x750 kthread+0x137/0x150 ? kthread_mod_delayed_work+0xc0/0xc0 ret_from_fork+0x1f/0x30
This happens because we are still holding the path open when we start adding the sysfs files for the block groups, which creates a dependency on fs_reclaim via the tree lock. Fix this by dropping the path before we start doing anything with sysfs.
Reported-by: David Sterba dsterba@suse.com CC: stable@vger.kernel.org # 5.8+ Reviewed-by: Anand Jain anand.jain@oracle.com Reviewed-by: Filipe Manana fdmanana@suse.com Signed-off-by: Josef Bacik josef@toxicpanda.com Reviewed-by: David Sterba dsterba@suse.com Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/block-group.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/btrfs/block-group.c +++ b/fs/btrfs/block-group.c @@ -2034,6 +2034,7 @@ int btrfs_read_block_groups(struct btrfs key.offset = 0; btrfs_release_path(path); } + btrfs_release_path(path);
rcu_read_lock(); list_for_each_entry_rcu(space_info, &info->space_info, list) {
From: Sandeep Singh sandeep.singh@amd.com
commit 2a632815683d2d34df52b701a36fe5ac6654e719 upstream.
On some AMD platforms, S3 fails with HCE and SRE errors. To fix this, we need to disable a bit which is enabled in the sparse controller.
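For illustration, a standalone simulation of the quirk's read-modify-write (the register block here is a plain array, fake_mmio, rather than real xHCI MMIO; only the SPARSE_CNTL_ENABLE offset and SPARSE_DISABLE_BIT number come from the patch):

#include <stdint.h>
#include <stdio.h>

#define SPARSE_CNTL_ENABLE	0xC12C
#define SPARSE_DISABLE_BIT	17

/* Simulated MMIO window so the read-modify-write can run standalone. */
static uint32_t fake_mmio[0x10000 / 4];

static void sparse_control_quirk(uint32_t *regs)
{
	uint32_t reg = regs[SPARSE_CNTL_ENABLE / 4];

	reg &= ~(1u << SPARSE_DISABLE_BIT);	/* clear the sparse enable bit */
	regs[SPARSE_CNTL_ENABLE / 4] = reg;
}

int main(void)
{
	fake_mmio[SPARSE_CNTL_ENABLE / 4] = 1u << SPARSE_DISABLE_BIT;
	sparse_control_quirk(fake_mmio);
	printf("reg after quirk: 0x%08x\n", fake_mmio[SPARSE_CNTL_ENABLE / 4]);
	return 0;
}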
Cc: stable@vger.kernel.org #v4.19+ Signed-off-by: Sanket Goswami Sanket.Goswami@amd.com Signed-off-by: Sandeep Singh sandeep.singh@amd.com Signed-off-by: Mathias Nyman mathias.nyman@linux.intel.com Link: https://lore.kernel.org/r/20201028203124.375344-3-mathias.nyman@linux.intel.... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/host/xhci-pci.c | 17 +++++++++++++++++ drivers/usb/host/xhci.h | 1 + 2 files changed, 18 insertions(+)
--- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -22,6 +22,8 @@ #define SSIC_PORT_CFG2_OFFSET 0x30 #define PROG_DONE (1 << 30) #define SSIC_PORT_UNUSED (1 << 31) +#define SPARSE_DISABLE_BIT 17 +#define SPARSE_CNTL_ENABLE 0xC12C
/* Device for a quirk */ #define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73 @@ -160,6 +162,9 @@ static void xhci_pci_quirks(struct devic (pdev->device == 0x15e0 || pdev->device == 0x15e1)) xhci->quirks |= XHCI_SNPS_BROKEN_SUSPEND;
+ if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5) + xhci->quirks |= XHCI_DISABLE_SPARSE; + if (pdev->vendor == PCI_VENDOR_ID_AMD) xhci->quirks |= XHCI_TRUST_TX_LENGTH;
@@ -490,6 +495,15 @@ static void xhci_pme_quirk(struct usb_hc readl(reg); }
+static void xhci_sparse_control_quirk(struct usb_hcd *hcd) +{ + u32 reg; + + reg = readl(hcd->regs + SPARSE_CNTL_ENABLE); + reg &= ~BIT(SPARSE_DISABLE_BIT); + writel(reg, hcd->regs + SPARSE_CNTL_ENABLE); +} + static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup) { struct xhci_hcd *xhci = hcd_to_xhci(hcd); @@ -509,6 +523,9 @@ static int xhci_pci_suspend(struct usb_h if (xhci->quirks & XHCI_SSIC_PORT_UNUSED) xhci_ssic_port_unused_quirk(hcd, true);
+ if (xhci->quirks & XHCI_DISABLE_SPARSE) + xhci_sparse_control_quirk(hcd); + ret = xhci_suspend(xhci, do_wakeup); if (ret && (xhci->quirks & XHCI_SSIC_PORT_UNUSED)) xhci_ssic_port_unused_quirk(hcd, false); --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -1874,6 +1874,7 @@ struct xhci_hcd { #define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34) #define XHCI_SNPS_BROKEN_SUSPEND BIT_ULL(35) #define XHCI_RENESAS_FW_QUIRK BIT_ULL(36) +#define XHCI_DISABLE_SPARSE BIT_ULL(38)
unsigned int num_active_eps; unsigned int limit_active_eps;
From: Raymond Tan raymond.tan@intel.com
commit a609ce2a13360d639b384b6ca783b38c1247f2db upstream.
Similar to some other IA platforms, Elkhart Lake also depends on the PMU register write to request a Dx power state transition.
Thus, we add the PCI_DEVICE_ID_INTEL_EHLLP to the list of devices that shall execute the ACPI _DSM method during D0/D3 sequence.
[heikki.krogerus@linux.intel.com: included Fixes tag]
Fixes: dbb0569de852 ("usb: dwc3: pci: Add Support for Intel Elkhart Lake Devices") Cc: stable@vger.kernel.org Signed-off-by: Raymond Tan raymond.tan@intel.com Signed-off-by: Heikki Krogerus heikki.krogerus@linux.intel.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/dwc3/dwc3-pci.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/usb/dwc3/dwc3-pci.c +++ b/drivers/usb/dwc3/dwc3-pci.c @@ -147,7 +147,8 @@ static int dwc3_pci_quirks(struct dwc3_p
if (pdev->vendor == PCI_VENDOR_ID_INTEL) { if (pdev->device == PCI_DEVICE_ID_INTEL_BXT || - pdev->device == PCI_DEVICE_ID_INTEL_BXT_M) { + pdev->device == PCI_DEVICE_ID_INTEL_BXT_M || + pdev->device == PCI_DEVICE_ID_INTEL_EHLLP) { guid_parse(PCI_INTEL_BXT_DSM_GUID, &dwc->guid); dwc->has_dsm_for_pm = true; }
From: Thinh Nguyen Thinh.Nguyen@synopsys.com
commit 66706077dc89c66a4777a4c6298273816afb848c upstream.
The current ZLP handling for ep0 requests is only for control IN requests. For the OUT direction, DWC3 needs to check and set up for MPS alignment.
Usually, control OUT requests can indicate their transfer size via the wLength field of the control message, so usb_request->zero is usually not needed for the OUT direction. To handle a ZLP OUT for the control endpoint, make sure the TRB is MPS-sized.
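For illustration, a trivial userspace sketch of the length selection (ep0_zlp_trb_length() is a helper made up for this sketch, not the dwc3 API): a zero-length IN data stage keeps a 0-byte TRB, while a zero-length OUT data stage gets a wMaxPacketSize-sized buffer so the controller has room for the packet.

#include <stdbool.h>
#include <stdio.h>

/* Pick the TRB length for a zero-length control data stage. */
static unsigned int ep0_zlp_trb_length(bool is_in, unsigned int maxpacket)
{
	return is_in ? 0 : maxpacket;
}

int main(void)
{
	printf("IN  ZLP TRB length: %u\n", ep0_zlp_trb_length(true, 64));
	printf("OUT ZLP TRB length: %u\n", ep0_zlp_trb_length(false, 64));
	return 0;
}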
Cc: stable@vger.kernel.org Fixes: c7fcdeb2627c ("usb: dwc3: ep0: simplify EP0 state machine") Fixes: d6e5a549cc4d ("usb: dwc3: simplify ZLP handling") Signed-off-by: Thinh Nguyen Thinh.Nguyen@synopsys.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/dwc3/ep0.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-)
--- a/drivers/usb/dwc3/ep0.c +++ b/drivers/usb/dwc3/ep0.c @@ -942,12 +942,16 @@ static void dwc3_ep0_xfer_complete(struc static void __dwc3_ep0_do_control_data(struct dwc3 *dwc, struct dwc3_ep *dep, struct dwc3_request *req) { + unsigned int trb_length = 0; int ret;
req->direction = !!dep->number;
if (req->request.length == 0) { - dwc3_ep0_prepare_one_trb(dep, dwc->ep0_trb_addr, 0, + if (!req->direction) + trb_length = dep->endpoint.maxpacket; + + dwc3_ep0_prepare_one_trb(dep, dwc->bounce_addr, trb_length, DWC3_TRBCTL_CONTROL_DATA, false); ret = dwc3_ep0_start_trans(dep); } else if (!IS_ALIGNED(req->request.length, dep->endpoint.maxpacket) @@ -994,9 +998,12 @@ static void __dwc3_ep0_do_control_data(s
req->trb = &dwc->ep0_trb[dep->trb_enqueue - 1];
+ if (!req->direction) + trb_length = dep->endpoint.maxpacket; + /* Now prepare one extra TRB to align transfer size */ dwc3_ep0_prepare_one_trb(dep, dwc->bounce_addr, - 0, DWC3_TRBCTL_CONTROL_DATA, + trb_length, DWC3_TRBCTL_CONTROL_DATA, false); ret = dwc3_ep0_start_trans(dep); } else {
From: Thinh Nguyen Thinh.Nguyen@synopsys.com
commit ca3df3468eec87f6374662f7de425bc44c3810c1 upstream.
When preparing TRBs for an SG request, not all the entries are prepared at once. When resuming, don't use the remaining request length to calculate the MPS alignment; use the entire request->length to do that.
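For illustration, a small arithmetic example (plain userspace C with illustrative numbers, not dwc3 code) showing why the MPS remainder must come from the full request length rather than from what is left after a previous pass over the scatterlist:

#include <stdio.h>

int main(void)
{
	unsigned int maxp = 512;		/* endpoint wMaxPacketSize         */
	unsigned int req_length = 1100;		/* full request length             */
	unsigned int already_queued = 600;	/* bytes mapped in an earlier pass */

	unsigned int from_leftover = (req_length - already_queued) % maxp;
	unsigned int from_full = req_length % maxp;

	printf("remainder from leftover length: %u\n", from_leftover);
	printf("remainder from full length:     %u\n", from_full);
	return 0;
}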
Cc: stable@vger.kernel.org Fixes: 5d187c0454ef ("usb: dwc3: gadget: Don't setup more than requested") Signed-off-by: Thinh Nguyen Thinh.Nguyen@synopsys.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/dwc3/gadget.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -1095,6 +1095,8 @@ static void dwc3_prepare_one_trb_sg(stru struct scatterlist *s; int i; unsigned int length = req->request.length; + unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc); + unsigned int rem = length % maxp; unsigned int remaining = req->request.num_mapped_sgs - req->num_queued_sgs;
@@ -1106,8 +1108,6 @@ static void dwc3_prepare_one_trb_sg(stru length -= sg_dma_len(s);
for_each_sg(sg, s, remaining, i) { - unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc); - unsigned int rem = length % maxp; unsigned int trb_length; unsigned chain = true;
From: Thinh Nguyen Thinh.Nguyen@synopsys.com
commit 690e5c2dc29f8891fcfd30da67e0d5837c2c9df5 upstream.
An SG request may be partially completed (due to no available TRBs). Don't reclaim extra TRBs and clear the needs_extra_trb flag until the request is fully completed. Otherwise, the driver will reclaim the wrong TRB.
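For illustration, a toy userspace model of the ordering the fix enforces (struct toy_req and its fields are invented for this sketch, not the dwc3 structures): the alignment TRB is only reclaimed, and needs_extra_trb only cleared, once the whole request has completed; a partial completion updates nothing but the actual byte count.

#include <stdbool.h>
#include <stdio.h>

struct toy_req {
	unsigned int length;		/* total bytes requested     */
	unsigned int remaining;		/* bytes not yet transferred */
	unsigned int actual;
	bool needs_extra_trb;
	unsigned int queued_trbs;
};

static bool request_completed(const struct toy_req *r)
{
	return r->remaining == 0;
}

static void cleanup_completed(struct toy_req *r)
{
	r->actual = r->length - r->remaining;

	if (!request_completed(r))
		return;			/* partial: keep the extra TRB for later */

	if (r->needs_extra_trb) {
		r->queued_trbs--;	/* reclaim the alignment TRB */
		r->needs_extra_trb = false;
	}
}

int main(void)
{
	struct toy_req r = { .length = 1024, .remaining = 512,
			     .needs_extra_trb = true, .queued_trbs = 3 };

	cleanup_completed(&r);		/* partial completion */
	printf("after partial:  trbs=%u extra=%d\n", r.queued_trbs, r.needs_extra_trb);

	r.remaining = 0;
	cleanup_completed(&r);		/* full completion */
	printf("after complete: trbs=%u extra=%d\n", r.queued_trbs, r.needs_extra_trb);
	return 0;
}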
Cc: stable@vger.kernel.org Fixes: 1f512119a08c ("usb: dwc3: gadget: add remaining sg entries to ring") Signed-off-by: Thinh Nguyen Thinh.Nguyen@synopsys.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/dwc3/gadget.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
--- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -2732,6 +2732,11 @@ static int dwc3_gadget_ep_cleanup_comple ret = dwc3_gadget_ep_reclaim_trb_linear(dep, req, event, status);
+ req->request.actual = req->request.length - req->remaining; + + if (!dwc3_gadget_ep_request_completed(req)) + goto out; + if (req->needs_extra_trb) { unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);
@@ -2747,11 +2752,6 @@ static int dwc3_gadget_ep_cleanup_comple req->needs_extra_trb = false; }
- req->request.actual = req->request.length - req->remaining; - - if (!dwc3_gadget_ep_request_completed(req)) - goto out; - dwc3_gadget_giveback(dep, req, status);
out:
From: Li Jun jun.li@nxp.com
commit 03c1fd622f72c7624c81b64fdba4a567ae5ee9cb upstream.
Add the PHY cleanup if dwc3 mode init fails; this is the missing part of the de-init for dwc3 core init.
Fixes: c499ff71ff2a ("usb: dwc3: core: re-factor init and exit paths") Cc: stable@vger.kernel.org Signed-off-by: Li Jun jun.li@nxp.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/dwc3/core.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
--- a/drivers/usb/dwc3/core.c +++ b/drivers/usb/dwc3/core.c @@ -1564,6 +1564,17 @@ static int dwc3_probe(struct platform_de
err5: dwc3_event_buffers_cleanup(dwc); + + usb_phy_shutdown(dwc->usb2_phy); + usb_phy_shutdown(dwc->usb3_phy); + phy_exit(dwc->usb2_generic_phy); + phy_exit(dwc->usb3_generic_phy); + + usb_phy_set_suspend(dwc->usb2_phy, 1); + usb_phy_set_suspend(dwc->usb3_phy, 1); + phy_power_off(dwc->usb2_generic_phy); + phy_power_off(dwc->usb3_generic_phy); + dwc3_ulpi_exit(dwc);
err4:
From: Li Jun jun.li@nxp.com
commit 266d0493900ac5d6a21cdbe6b1624ed2da94d47a upstream.
There is no need to trigger runtime PM in driver removal; otherwise, if the user disables autosuspend via the sysfs file, runtime suspend may be entered, which will call dwc3_core_exit() again and produce an unbalanced clock-disable warning:
[ 2026.820154] xhci-hcd xhci-hcd.0.auto: remove, state 4 [ 2026.825268] usb usb2: USB disconnect, device number 1 [ 2026.831017] xhci-hcd xhci-hcd.0.auto: USB bus 2 deregistered [ 2026.836806] xhci-hcd xhci-hcd.0.auto: remove, state 4 [ 2026.842029] usb usb1: USB disconnect, device number 1 [ 2026.848029] xhci-hcd xhci-hcd.0.auto: USB bus 1 deregistered [ 2026.865889] ------------[ cut here ]------------ [ 2026.870506] usb2_ctrl_root_clk already disabled [ 2026.875082] WARNING: CPU: 0 PID: 731 at drivers/clk/clk.c:958 clk_core_disable+0xa0/0xa8 [ 2026.883170] Modules linked in: dwc3(-) phy_fsl_imx8mq_usb [last unloaded: dwc3] [ 2026.890488] CPU: 0 PID: 731 Comm: rmmod Not tainted 5.8.0-rc7-00280-g9d08cca-dirty #245 [ 2026.898489] Hardware name: NXP i.MX8MQ EVK (DT) [ 2026.903020] pstate: 20000085 (nzCv daIf -PAN -UAO BTYPE=--) [ 2026.908594] pc : clk_core_disable+0xa0/0xa8 [ 2026.912777] lr : clk_core_disable+0xa0/0xa8 [ 2026.916958] sp : ffff8000121b39a0 [ 2026.920271] x29: ffff8000121b39a0 x28: ffff0000b11f3700 [ 2026.925583] x27: 0000000000000000 x26: ffff0000b539c700 [ 2026.930895] x25: 000001d7e44e1232 x24: ffff0000b76fa800 [ 2026.936208] x23: ffff0000b76fa6f8 x22: ffff800008d01040 [ 2026.941520] x21: ffff0000b539ce00 x20: ffff0000b7105000 [ 2026.946832] x19: ffff0000b7105000 x18: 0000000000000010 [ 2026.952144] x17: 0000000000000001 x16: 0000000000000000 [ 2026.957456] x15: ffff0000b11f3b70 x14: ffffffffffffffff [ 2026.962768] x13: ffff8000921b36f7 x12: ffff8000121b36ff [ 2026.968080] x11: ffff8000119e1000 x10: ffff800011bf26d0 [ 2026.973392] x9 : 0000000000000000 x8 : ffff800011bf3000 [ 2026.978704] x7 : ffff800010695d68 x6 : 0000000000000252 [ 2026.984016] x5 : ffff0000bb9881f0 x4 : 0000000000000000 [ 2026.989327] x3 : 0000000000000027 x2 : 0000000000000023 [ 2026.994639] x1 : ac2fa471aa7cab00 x0 : 0000000000000000 [ 2026.999951] Call trace: [ 2027.002401] clk_core_disable+0xa0/0xa8 [ 2027.006238] clk_core_disable_lock+0x20/0x38 [ 2027.010508] clk_disable+0x1c/0x28 [ 2027.013911] clk_bulk_disable+0x34/0x50 [ 2027.017758] dwc3_core_exit+0xec/0x110 [dwc3] [ 2027.022122] dwc3_suspend_common+0x84/0x188 [dwc3] [ 2027.026919] dwc3_runtime_suspend+0x74/0x9c [dwc3] [ 2027.031712] pm_generic_runtime_suspend+0x28/0x40 [ 2027.036419] genpd_runtime_suspend+0xa0/0x258 [ 2027.040777] __rpm_callback+0x88/0x140 [ 2027.044526] rpm_callback+0x20/0x80 [ 2027.048015] rpm_suspend+0xd0/0x418 [ 2027.051503] __pm_runtime_suspend+0x58/0xa0 [ 2027.055693] dwc3_runtime_idle+0x7c/0x90 [dwc3] [ 2027.060224] __rpm_callback+0x88/0x140 [ 2027.063973] rpm_idle+0x78/0x150 [ 2027.067201] __pm_runtime_idle+0x58/0xa0 [ 2027.071130] dwc3_remove+0x64/0xc0 [dwc3] [ 2027.075140] platform_drv_remove+0x28/0x48 [ 2027.079239] device_release_driver_internal+0xf4/0x1c0 [ 2027.084377] driver_detach+0x4c/0xd8 [ 2027.087954] bus_remove_driver+0x54/0xa8 [ 2027.091877] driver_unregister+0x2c/0x58 [ 2027.095799] platform_driver_unregister+0x10/0x18 [ 2027.100509] dwc3_driver_exit+0x14/0x1408 [dwc3] [ 2027.105129] __arm64_sys_delete_module+0x178/0x218 [ 2027.109922] el0_svc_common.constprop.0+0x68/0x160 [ 2027.114714] do_el0_svc+0x20/0x80 [ 2027.118031] el0_sync_handler+0x88/0x190 [ 2027.121953] el0_sync+0x140/0x180 [ 2027.125267] ---[ end trace 027f4f8189958f1f ]--- [ 2027.129976] ------------[ cut here ]------------
Fixes: fc8bb91bc83e ("usb: dwc3: implement runtime PM") Cc: stable@vger.kernel.org Signed-off-by: Li Jun jun.li@nxp.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/dwc3/core.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/usb/dwc3/core.c +++ b/drivers/usb/dwc3/core.c @@ -1610,9 +1610,9 @@ static int dwc3_remove(struct platform_d dwc3_core_exit(dwc); dwc3_ulpi_exit(dwc);
- pm_runtime_put_sync(&pdev->dev); - pm_runtime_allow(&pdev->dev); pm_runtime_disable(&pdev->dev); + pm_runtime_put_noidle(&pdev->dev); + pm_runtime_set_suspended(&pdev->dev);
dwc3_free_event_buffers(dwc); dwc3_free_scratch_buffers(dwc);
From: Thinh Nguyen Thinh.Nguyen@synopsys.com
commit c503672abe1348f10f5a54a662336358c6e1a297 upstream.
The function driver may queue new requests right after halting the endpoint (i.e. queue new requests while the endpoint is stalled). There's no restriction preventing it from doing so. However, dwc3 currently drops those requests after CLEAR_STALL. The driver should only drop started requests. Keep the pending requests in the pending list to resume and process them after the host issues ClearFeature(Halt) to the endpoint.
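As a minimal illustration of the scenario (a hedged sketch; the function name and calling context are hypothetical, not taken from this patch), a gadget function driver is free to do the following, and the UDC driver must keep the request pending instead of dropping it:

  #include <linux/gfp.h>
  #include <linux/usb/gadget.h>

  /* Halt an endpoint and immediately queue a request on it. */
  static int example_stall_then_queue(struct usb_ep *ep,
                                      struct usb_request *req)
  {
          int ret;

          ret = usb_ep_set_halt(ep);
          if (ret)
                  return ret;

          /* Queued while the endpoint is stalled; the request should sit
           * on the pending list until the host clears the halt. */
          return usb_ep_queue(ep, req, GFP_ATOMIC);
  }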
Cc: stable@vger.kernel.org Fixes: cb11ea56f37a ("usb: dwc3: gadget: Properly handle ClearFeature(halt)") Signed-off-by: Thinh Nguyen thinhn@synopsys.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/dwc3/gadget.c | 22 ++++++++++++++-------- 1 file changed, 14 insertions(+), 8 deletions(-)
--- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -1628,8 +1628,13 @@ static int __dwc3_gadget_ep_queue(struct if (dep->flags & DWC3_EP_WAIT_TRANSFER_COMPLETE) return 0;
- /* Start the transfer only after the END_TRANSFER is completed */ - if (dep->flags & DWC3_EP_END_TRANSFER_PENDING) { + /* + * Start the transfer only after the END_TRANSFER is completed + * and endpoint STALL is cleared. + */ + if ((dep->flags & DWC3_EP_END_TRANSFER_PENDING) || + (dep->flags & DWC3_EP_WEDGE) || + (dep->flags & DWC3_EP_STALL)) { dep->flags |= DWC3_EP_DELAY_START; return 0; } @@ -1836,13 +1841,14 @@ int __dwc3_gadget_ep_set_halt(struct dwc list_for_each_entry_safe(req, tmp, &dep->started_list, list) dwc3_gadget_move_cancelled_request(req);
- list_for_each_entry_safe(req, tmp, &dep->pending_list, list) - dwc3_gadget_move_cancelled_request(req); - - if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING)) { - dep->flags &= ~DWC3_EP_DELAY_START; + if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING)) dwc3_gadget_ep_cleanup_cancelled_requests(dep); - } + + if ((dep->flags & DWC3_EP_DELAY_START) && + !usb_endpoint_xfer_isoc(dep->endpoint.desc)) + __dwc3_gadget_kick_transfer(dep); + + dep->flags &= ~DWC3_EP_DELAY_START; }
return ret;
From: Thinh Nguyen Thinh.Nguyen@synopsys.com
commit d97c78a1908e59a1fdbcbece87cd0440b5d7a1f2 upstream.
According to the programming guide (for all DWC3 IPs), when the driver handles a ClearFeature(Halt) request, it should issue the CLEAR_STALL command _after_ the END_TRANSFER command completes. The END_TRANSFER command may take some time to complete, so delay the ClearFeature(Halt) control status stage and wait for the END_TRANSFER command completion interrupt. Only after the END_TRANSFER command completes may the driver issue the CLEAR_STALL command.
Cc: stable@vger.kernel.org Fixes: cb11ea56f37a ("usb: dwc3: gadget: Properly handle ClearFeature(halt)") Signed-off-by: Thinh Nguyen thinhn@synopsys.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/dwc3/core.h | 1 + drivers/usb/dwc3/ep0.c | 16 ++++++++++++++++ drivers/usb/dwc3/gadget.c | 40 ++++++++++++++++++++++++++++++++-------- drivers/usb/dwc3/gadget.h | 1 + 4 files changed, 50 insertions(+), 8 deletions(-)
--- a/drivers/usb/dwc3/core.h +++ b/drivers/usb/dwc3/core.h @@ -710,6 +710,7 @@ struct dwc3_ep { #define DWC3_EP_IGNORE_NEXT_NOSTREAM BIT(8) #define DWC3_EP_FORCE_RESTART_STREAM BIT(9) #define DWC3_EP_FIRST_STREAM_PRIMED BIT(10) +#define DWC3_EP_PENDING_CLEAR_STALL BIT(11)
/* This last one is specific to EP0 */ #define DWC3_EP0_DIR_IN BIT(31) --- a/drivers/usb/dwc3/ep0.c +++ b/drivers/usb/dwc3/ep0.c @@ -524,6 +524,11 @@ static int dwc3_ep0_handle_endpoint(stru ret = __dwc3_gadget_ep_set_halt(dep, set, true); if (ret) return -EINVAL; + + /* ClearFeature(Halt) may need delayed status */ + if (!set && (dep->flags & DWC3_EP_END_TRANSFER_PENDING)) + return USB_GADGET_DELAYED_STATUS; + break; default: return -EINVAL; @@ -1049,6 +1054,17 @@ static void dwc3_ep0_do_control_status(s __dwc3_ep0_do_control_status(dwc, dep); }
+void dwc3_ep0_send_delayed_status(struct dwc3 *dwc) +{ + unsigned int direction = !dwc->ep0_expect_in; + + if (dwc->ep0state != EP0_STATUS_PHASE) + return; + + dwc->delayed_status = false; + __dwc3_ep0_do_control_status(dwc, dwc->eps[direction]); +} + static void dwc3_ep0_end_control_data(struct dwc3 *dwc, struct dwc3_ep *dep) { struct dwc3_gadget_ep_cmd_params params; --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -1827,6 +1827,18 @@ int __dwc3_gadget_ep_set_halt(struct dwc return 0; }
+ dwc3_stop_active_transfer(dep, true, true); + + list_for_each_entry_safe(req, tmp, &dep->started_list, list) + dwc3_gadget_move_cancelled_request(req); + + if (dep->flags & DWC3_EP_END_TRANSFER_PENDING) { + dep->flags |= DWC3_EP_PENDING_CLEAR_STALL; + return 0; + } + + dwc3_gadget_ep_cleanup_cancelled_requests(dep); + ret = dwc3_send_clear_stall_ep_cmd(dep); if (ret) { dev_err(dwc->dev, "failed to clear STALL on %s\n", @@ -1836,14 +1848,6 @@ int __dwc3_gadget_ep_set_halt(struct dwc
dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
- dwc3_stop_active_transfer(dep, true, true); - - list_for_each_entry_safe(req, tmp, &dep->started_list, list) - dwc3_gadget_move_cancelled_request(req); - - if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING)) - dwc3_gadget_ep_cleanup_cancelled_requests(dep); - if ((dep->flags & DWC3_EP_DELAY_START) && !usb_endpoint_xfer_isoc(dep->endpoint.desc)) __dwc3_gadget_kick_transfer(dep); @@ -3003,6 +3007,26 @@ static void dwc3_endpoint_interrupt(stru dep->flags &= ~DWC3_EP_END_TRANSFER_PENDING; dep->flags &= ~DWC3_EP_TRANSFER_STARTED; dwc3_gadget_ep_cleanup_cancelled_requests(dep); + + if (dep->flags & DWC3_EP_PENDING_CLEAR_STALL) { + struct dwc3 *dwc = dep->dwc; + + dep->flags &= ~DWC3_EP_PENDING_CLEAR_STALL; + if (dwc3_send_clear_stall_ep_cmd(dep)) { + struct usb_ep *ep0 = &dwc->eps[0]->endpoint; + + dev_err(dwc->dev, "failed to clear STALL on %s\n", + dep->name); + if (dwc->delayed_status) + __dwc3_gadget_ep0_set_halt(ep0, 1); + return; + } + + dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE); + if (dwc->delayed_status) + dwc3_ep0_send_delayed_status(dwc); + } + if ((dep->flags & DWC3_EP_DELAY_START) && !usb_endpoint_xfer_isoc(dep->endpoint.desc)) __dwc3_gadget_kick_transfer(dep); --- a/drivers/usb/dwc3/gadget.h +++ b/drivers/usb/dwc3/gadget.h @@ -113,6 +113,7 @@ int dwc3_gadget_ep0_set_halt(struct usb_ int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request, gfp_t gfp_flags); int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol); +void dwc3_ep0_send_delayed_status(struct dwc3 *dwc);
/** * dwc3_gadget_ep_get_transfer_index - Gets transfer index from HW
From: Pawel Laszczak pawell@cadence.com
commit 52d3967704aea6cb316d419a33a5e1d56d33a3c1 upstream.
This patch fixes an issue that caused the on-chip memory overflow bit in the usb_sts register to be set. The issue occurred because the EP_CFG register was written twice before USB_STS.CFGSTS was set. Every write to EP_CFG.BUFFERING makes the controller increase an internal counter holding the number of reserved on-chip buffers. The register was first updated in cdns3_ep_config(), before delegating the SET_CONFIGURATION request to the class driver, and then again when the class driver enabled the endpoint. Fix this by configuring the endpoints enabled by the class driver in cdns3_gadget_ep_enable(), and the remaining ones just before the status stage.
Cc: stable@vger.kernel.org#v5.8+ Fixes: 7733f6c32e36 ("usb: cdns3: Add Cadence USB3 DRD Driver") Reported-and-tested-by: Peter Chen peter.chen@nxp.com Signed-off-by: Pawel Laszczak pawell@cadence.com Signed-off-by: Peter Chen peter.chen@nxp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/cdns3/ep0.c | 65 +++++++++++++++------------- drivers/usb/cdns3/gadget.c | 102 +++++++++++++++++++++++++-------------------- drivers/usb/cdns3/gadget.h | 3 - 3 files changed, 94 insertions(+), 76 deletions(-)
--- a/drivers/usb/cdns3/ep0.c +++ b/drivers/usb/cdns3/ep0.c @@ -137,48 +137,36 @@ static int cdns3_req_ep0_set_configurati struct usb_ctrlrequest *ctrl_req) { enum usb_device_state device_state = priv_dev->gadget.state; - struct cdns3_endpoint *priv_ep; u32 config = le16_to_cpu(ctrl_req->wValue); int result = 0; - int i;
switch (device_state) { case USB_STATE_ADDRESS: - /* Configure non-control EPs */ - for (i = 0; i < CDNS3_ENDPOINTS_MAX_COUNT; i++) { - priv_ep = priv_dev->eps[i]; - if (!priv_ep) - continue; - - if (priv_ep->flags & EP_CLAIMED) - cdns3_ep_config(priv_ep); - } - result = cdns3_ep0_delegate_req(priv_dev, ctrl_req);
- if (result) - return result; - - if (!config) { - cdns3_hw_reset_eps_config(priv_dev); - usb_gadget_set_state(&priv_dev->gadget, - USB_STATE_ADDRESS); - } + if (result || !config) + goto reset_config;
break; case USB_STATE_CONFIGURED: result = cdns3_ep0_delegate_req(priv_dev, ctrl_req); + if (!config && !result) + goto reset_config;
- if (!config && !result) { - cdns3_hw_reset_eps_config(priv_dev); - usb_gadget_set_state(&priv_dev->gadget, - USB_STATE_ADDRESS); - } break; default: - result = -EINVAL; + return -EINVAL; }
+ return 0; + +reset_config: + if (result != USB_GADGET_DELAYED_STATUS) + cdns3_hw_reset_eps_config(priv_dev); + + usb_gadget_set_state(&priv_dev->gadget, + USB_STATE_ADDRESS); + return result; }
@@ -705,6 +693,7 @@ static int cdns3_gadget_ep0_queue(struct unsigned long flags; int ret = 0; u8 zlp = 0; + int i;
spin_lock_irqsave(&priv_dev->lock, flags); trace_cdns3_ep0_queue(priv_dev, request); @@ -718,6 +707,17 @@ static int cdns3_gadget_ep0_queue(struct /* send STATUS stage. Should be called only for SET_CONFIGURATION */ if (priv_dev->ep0_stage == CDNS3_STATUS_STAGE) { cdns3_select_ep(priv_dev, 0x00); + + /* + * Configure all non-control EPs which are not enabled by class driver + */ + for (i = 0; i < CDNS3_ENDPOINTS_MAX_COUNT; i++) { + priv_ep = priv_dev->eps[i]; + if (priv_ep && priv_ep->flags & EP_CLAIMED && + !(priv_ep->flags & EP_ENABLED)) + cdns3_ep_config(priv_ep, 0); + } + cdns3_set_hw_configuration(priv_dev); cdns3_ep0_complete_setup(priv_dev, 0, 1); request->actual = 0; @@ -803,6 +803,7 @@ void cdns3_ep0_config(struct cdns3_devic struct cdns3_usb_regs __iomem *regs; struct cdns3_endpoint *priv_ep; u32 max_packet_size = 64; + u32 ep_cfg;
regs = priv_dev->regs;
@@ -834,8 +835,10 @@ void cdns3_ep0_config(struct cdns3_devic BIT(0) | BIT(16)); }
- writel(EP_CFG_ENABLE | EP_CFG_MAXPKTSIZE(max_packet_size), - ®s->ep_cfg); + ep_cfg = EP_CFG_ENABLE | EP_CFG_MAXPKTSIZE(max_packet_size); + + if (!(priv_ep->flags & EP_CONFIGURED)) + writel(ep_cfg, ®s->ep_cfg);
writel(EP_STS_EN_SETUPEN | EP_STS_EN_DESCMISEN | EP_STS_EN_TRBERREN, ®s->ep_sts_en); @@ -843,8 +846,10 @@ void cdns3_ep0_config(struct cdns3_devic /* init ep in */ cdns3_select_ep(priv_dev, USB_DIR_IN);
- writel(EP_CFG_ENABLE | EP_CFG_MAXPKTSIZE(max_packet_size), - ®s->ep_cfg); + if (!(priv_ep->flags & EP_CONFIGURED)) + writel(ep_cfg, ®s->ep_cfg); + + priv_ep->flags |= EP_CONFIGURED;
writel(EP_STS_EN_SETUPEN | EP_STS_EN_TRBERREN, ®s->ep_sts_en);
--- a/drivers/usb/cdns3/gadget.c +++ b/drivers/usb/cdns3/gadget.c @@ -296,6 +296,8 @@ static void cdns3_ep_stall_flush(struct */ void cdns3_hw_reset_eps_config(struct cdns3_device *priv_dev) { + int i; + writel(USB_CONF_CFGRST, &priv_dev->regs->usb_conf);
cdns3_allow_enable_l1(priv_dev, 0); @@ -304,6 +306,10 @@ void cdns3_hw_reset_eps_config(struct cd priv_dev->out_mem_is_allocated = 0; priv_dev->wait_for_setup = 0; priv_dev->using_streams = 0; + + for (i = 0; i < CDNS3_ENDPOINTS_MAX_COUNT; i++) + if (priv_dev->eps[i]) + priv_dev->eps[i]->flags &= ~EP_CONFIGURED; }
/** @@ -1907,27 +1913,6 @@ static int cdns3_ep_onchip_buffer_reserv return 0; }
-static void cdns3_stream_ep_reconfig(struct cdns3_device *priv_dev, - struct cdns3_endpoint *priv_ep) -{ - if (!priv_ep->use_streams || priv_dev->gadget.speed < USB_SPEED_SUPER) - return; - - if (priv_dev->dev_ver >= DEV_VER_V3) { - u32 mask = BIT(priv_ep->num + (priv_ep->dir ? 16 : 0)); - - /* - * Stream capable endpoints are handled by using ep_tdl - * register. Other endpoints use TDL from TRB feature. - */ - cdns3_clear_register_bit(&priv_dev->regs->tdl_from_trb, mask); - } - - /* Enable Stream Bit TDL chk and SID chk */ - cdns3_set_register_bit(&priv_dev->regs->ep_cfg, EP_CFG_STREAM_EN | - EP_CFG_TDL_CHK | EP_CFG_SID_CHK); -} - static void cdns3_configure_dmult(struct cdns3_device *priv_dev, struct cdns3_endpoint *priv_ep) { @@ -1965,8 +1950,9 @@ static void cdns3_configure_dmult(struct /** * cdns3_ep_config Configure hardware endpoint * @priv_ep: extended endpoint object + * @enable: set EP_CFG_ENABLE bit in ep_cfg register. */ -void cdns3_ep_config(struct cdns3_endpoint *priv_ep) +int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable) { bool is_iso_ep = (priv_ep->type == USB_ENDPOINT_XFER_ISOC); struct cdns3_device *priv_dev = priv_ep->cdns3_dev; @@ -2027,7 +2013,7 @@ void cdns3_ep_config(struct cdns3_endpoi break; default: /* all other speed are not supported */ - return; + return -EINVAL; }
if (max_packet_size == 1024) @@ -2037,11 +2023,33 @@ void cdns3_ep_config(struct cdns3_endpoi else priv_ep->trb_burst_size = 16;
- ret = cdns3_ep_onchip_buffer_reserve(priv_dev, buffering + 1, - !!priv_ep->dir); - if (ret) { - dev_err(priv_dev->dev, "onchip mem is full, ep is invalid\n"); - return; + /* onchip buffer is only allocated before configuration */ + if (!priv_dev->hw_configured_flag) { + ret = cdns3_ep_onchip_buffer_reserve(priv_dev, buffering + 1, + !!priv_ep->dir); + if (ret) { + dev_err(priv_dev->dev, "onchip mem is full, ep is invalid\n"); + return ret; + } + } + + if (enable) + ep_cfg |= EP_CFG_ENABLE; + + if (priv_ep->use_streams && priv_dev->gadget.speed >= USB_SPEED_SUPER) { + if (priv_dev->dev_ver >= DEV_VER_V3) { + u32 mask = BIT(priv_ep->num + (priv_ep->dir ? 16 : 0)); + + /* + * Stream capable endpoints are handled by using ep_tdl + * register. Other endpoints use TDL from TRB feature. + */ + cdns3_clear_register_bit(&priv_dev->regs->tdl_from_trb, + mask); + } + + /* Enable Stream Bit TDL chk and SID chk */ + ep_cfg |= EP_CFG_STREAM_EN | EP_CFG_TDL_CHK | EP_CFG_SID_CHK; }
ep_cfg |= EP_CFG_MAXPKTSIZE(max_packet_size) | @@ -2051,9 +2059,12 @@ void cdns3_ep_config(struct cdns3_endpoi
cdns3_select_ep(priv_dev, bEndpointAddress); writel(ep_cfg, &priv_dev->regs->ep_cfg); + priv_ep->flags |= EP_CONFIGURED;
dev_dbg(priv_dev->dev, "Configure %s: with val %08x\n", priv_ep->name, ep_cfg); + + return 0; }
/* Find correct direction for HW endpoint according to description */ @@ -2194,7 +2205,7 @@ static int cdns3_gadget_ep_enable(struct u32 bEndpointAddress; unsigned long flags; int enable = 1; - int ret; + int ret = 0; int val;
priv_ep = ep_to_cdns3_ep(ep); @@ -2233,6 +2244,17 @@ static int cdns3_gadget_ep_enable(struct bEndpointAddress = priv_ep->num | priv_ep->dir; cdns3_select_ep(priv_dev, bEndpointAddress);
+ /* + * For some versions of controller at some point during ISO OUT traffic + * DMA reads Transfer Ring for the EP which has never got doorbell. + * This issue was detected only on simulation, but to avoid this issue + * driver add protection against it. To fix it driver enable ISO OUT + * endpoint before setting DRBL. This special treatment of ISO OUT + * endpoints are recommended by controller specification. + */ + if (priv_ep->type == USB_ENDPOINT_XFER_ISOC && !priv_ep->dir) + enable = 0; + if (usb_ss_max_streams(comp_desc) && usb_endpoint_xfer_bulk(desc)) { /* * Enable stream support (SS mode) related interrupts @@ -2243,13 +2265,17 @@ static int cdns3_gadget_ep_enable(struct EP_STS_EN_SIDERREN | EP_STS_EN_MD_EXITEN | EP_STS_EN_STREAMREN; priv_ep->use_streams = true; - cdns3_stream_ep_reconfig(priv_dev, priv_ep); + ret = cdns3_ep_config(priv_ep, enable); priv_dev->using_streams |= true; } + } else { + ret = cdns3_ep_config(priv_ep, enable); }
- ret = cdns3_allocate_trb_pool(priv_ep); + if (ret) + goto exit;
+ ret = cdns3_allocate_trb_pool(priv_ep); if (ret) goto exit;
@@ -2279,20 +2305,6 @@ static int cdns3_gadget_ep_enable(struct
writel(reg, &priv_dev->regs->ep_sts_en);
- /* - * For some versions of controller at some point during ISO OUT traffic - * DMA reads Transfer Ring for the EP which has never got doorbell. - * This issue was detected only on simulation, but to avoid this issue - * driver add protection against it. To fix it driver enable ISO OUT - * endpoint before setting DRBL. This special treatment of ISO OUT - * endpoints are recommended by controller specification. - */ - if (priv_ep->type == USB_ENDPOINT_XFER_ISOC && !priv_ep->dir) - enable = 0; - - if (enable) - cdns3_set_register_bit(&priv_dev->regs->ep_cfg, EP_CFG_ENABLE); - ep->desc = desc; priv_ep->flags &= ~(EP_PENDING_REQUEST | EP_STALLED | EP_STALL_PENDING | EP_QUIRK_ISO_OUT_EN | EP_QUIRK_EXTRA_BUF_EN); --- a/drivers/usb/cdns3/gadget.h +++ b/drivers/usb/cdns3/gadget.h @@ -1154,6 +1154,7 @@ struct cdns3_endpoint { #define EP_QUIRK_EXTRA_BUF_DET BIT(12) #define EP_QUIRK_EXTRA_BUF_EN BIT(13) #define EP_TDLCHK_EN BIT(15) +#define EP_CONFIGURED BIT(16) u32 flags;
struct cdns3_request *descmis_req; @@ -1351,7 +1352,7 @@ void cdns3_gadget_giveback(struct cdns3_ int cdns3_init_ep0(struct cdns3_device *priv_dev, struct cdns3_endpoint *priv_ep); void cdns3_ep0_config(struct cdns3_device *priv_dev); -void cdns3_ep_config(struct cdns3_endpoint *priv_ep); +int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable); void cdns3_check_ep0_interrupt_proceed(struct cdns3_device *priv_dev, int dir); int __cdns3_gadget_wakeup(struct cdns3_device *priv_dev);
From: Jerome Brunet jbrunet@baylibre.com
commit 38203b8385bf6283537162bde7d499f830964711 upstream.
Commit a4e7279cd1d1 ("cdc-acm: introduce a cool down") is causing a regression if there is a USB error, such as -EPROTO.
This has been reported on some samples of the Odroid-N2 using the Combee II Zigbee USB dongle.
struct acm *acm = container_of(work, struct acm, work)
is incorrect in the case of a delayed work and causes warnings, usually from the workqueue:
WARNING: CPU: 0 PID: 0 at kernel/workqueue.c:1474 __queue_work+0x480/0x528.
When this happens, USB eventually stops working completely after a while. Also the ACM_ERROR_DELAY bit is never set, so the cooldown mechanism previously introduced cannot be triggered and acm_submit_read_urb() is never called.
This change makes the cdc-acm driver use a single delayed work, fixing the pointer arithmetic in acm_softint(), and sets ACM_ERROR_DELAY when the cooldown mechanism appears to be needed.
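For reference, a minimal sketch of the delayed-work idiom the fix relies on (struct and function names here are hypothetical): the workqueue core passes the work_struct embedded inside the delayed_work, so the container has to be recovered through the dwork.work member (equivalently, to_delayed_work()), and a delay of 0 makes schedule_delayed_work() behave like plain schedule_work().

  #include <linux/jiffies.h>
  #include <linux/kernel.h>
  #include <linux/printk.h>
  #include <linux/workqueue.h>

  struct foo {                            /* hypothetical driver state */
          struct delayed_work dwork;
          int value;
  };

  static void foo_work_fn(struct work_struct *work)
  {
          /* The embedded work_struct is dwork.work, so that is the
           * member container_of() must name. */
          struct foo *f = container_of(work, struct foo, dwork.work);

          pr_info("foo value = %d\n", f->value);
  }

  static void foo_init(struct foo *f)
  {
          INIT_DELAYED_WORK(&f->dwork, foo_work_fn);
  }

  static void foo_kick(struct foo *f, bool cooldown)
  {
          /* 0 = run as soon as possible; HZ / 2 = 500 ms cool down */
          schedule_delayed_work(&f->dwork, cooldown ? HZ / 2 : 0);
  }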
Fixes: a4e7279cd1d1 ("cdc-acm: introduce a cool down") Cc: Oliver Neukum oneukum@suse.com Reported-by: Pascal Vizeli pascal.vizeli@nabucasa.com Acked-by: Oliver Neukum oneukum@suse.com Signed-off-by: Jerome Brunet jbrunet@baylibre.com Link: https://lore.kernel.org/r/20201019170702.150534-1-jbrunet@baylibre.com Cc: stable stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/class/cdc-acm.c | 12 +++++------- drivers/usb/class/cdc-acm.h | 3 +-- 2 files changed, 6 insertions(+), 9 deletions(-)
--- a/drivers/usb/class/cdc-acm.c +++ b/drivers/usb/class/cdc-acm.c @@ -507,6 +507,7 @@ static void acm_read_bulk_callback(struc "%s - cooling babbling device\n", __func__); usb_mark_last_busy(acm->dev); set_bit(rb->index, &acm->urbs_in_error_delay); + set_bit(ACM_ERROR_DELAY, &acm->flags); cooldown = true; break; default: @@ -532,7 +533,7 @@ static void acm_read_bulk_callback(struc
if (stopped || stalled || cooldown) { if (stalled) - schedule_work(&acm->work); + schedule_delayed_work(&acm->dwork, 0); else if (cooldown) schedule_delayed_work(&acm->dwork, HZ / 2); return; @@ -562,13 +563,13 @@ static void acm_write_bulk(struct urb *u acm_write_done(acm, wb); spin_unlock_irqrestore(&acm->write_lock, flags); set_bit(EVENT_TTY_WAKEUP, &acm->flags); - schedule_work(&acm->work); + schedule_delayed_work(&acm->dwork, 0); }
static void acm_softint(struct work_struct *work) { int i; - struct acm *acm = container_of(work, struct acm, work); + struct acm *acm = container_of(work, struct acm, dwork.work);
if (test_bit(EVENT_RX_STALL, &acm->flags)) { smp_mb(); /* against acm_suspend() */ @@ -584,7 +585,7 @@ static void acm_softint(struct work_stru if (test_and_clear_bit(ACM_ERROR_DELAY, &acm->flags)) { for (i = 0; i < acm->rx_buflimit; i++) if (test_and_clear_bit(i, &acm->urbs_in_error_delay)) - acm_submit_read_urb(acm, i, GFP_NOIO); + acm_submit_read_urb(acm, i, GFP_KERNEL); }
if (test_and_clear_bit(EVENT_TTY_WAKEUP, &acm->flags)) @@ -1364,7 +1365,6 @@ made_compressed_probe: acm->ctrlsize = ctrlsize; acm->readsize = readsize; acm->rx_buflimit = num_rx_buf; - INIT_WORK(&acm->work, acm_softint); INIT_DELAYED_WORK(&acm->dwork, acm_softint); init_waitqueue_head(&acm->wioctl); spin_lock_init(&acm->write_lock); @@ -1574,7 +1574,6 @@ static void acm_disconnect(struct usb_in }
acm_kill_urbs(acm); - cancel_work_sync(&acm->work); cancel_delayed_work_sync(&acm->dwork);
tty_unregister_device(acm_tty_driver, acm->minor); @@ -1617,7 +1616,6 @@ static int acm_suspend(struct usb_interf return 0;
acm_kill_urbs(acm); - cancel_work_sync(&acm->work); cancel_delayed_work_sync(&acm->dwork); acm->urbs_in_error_delay = 0;
--- a/drivers/usb/class/cdc-acm.h +++ b/drivers/usb/class/cdc-acm.h @@ -112,8 +112,7 @@ struct acm { # define ACM_ERROR_DELAY 3 unsigned long urbs_in_error_delay; /* these need to be restarted after a delay */ struct usb_cdc_line_coding line; /* bits, stop, parity */ - struct work_struct work; /* work queue entry for various purposes*/ - struct delayed_work dwork; /* for cool downs needed in error recovery */ + struct delayed_work dwork; /* work queue entry for various purposes */ unsigned int ctrlin; /* input control lines (DCD, DSR, RI, break, overruns) */ unsigned int ctrlout; /* output control lines (DTR, RTS) */ struct async_icount iocount; /* counters for control line changes */
From: Li Jun jun.li@nxp.com
commit 2d9c6442a9c81f4f8dee678d0b3c183173ab1e2d upstream.
Current tcpm_detach() only resets hard_reset_count if port->attached is true, so clearing the counter can be missed if the CC disconnect event arrives after tcpm_port_reset() has already been done by other events. For example, for a power sink VBUS off may come before the CC disconnect: the first tcpm_detach() then only clears the port->attached flag but leaves hard_reset_count in place because tcpm_port_is_disconnected() is still false, and the later tcpm_detach() triggered by the CC disconnect returns immediately because port->attached has already been cleared. As a result, tcpm will not attempt a hard reset or error recovery on a later attach.
ChiYuan reported this issue on his platform with below tcpm trace: After power sink session setup after hard reset 2 times, detach from the power source and then attach: [ 4848.046358] VBUS off [ 4848.046384] state change SNK_READY -> SNK_UNATTACHED [ 4848.050908] Setting voltage/current limit 0 mV 0 mA [ 4848.050936] polarity 0 [ 4848.052593] Requesting mux state 0, usb-role 0, orientation 0 [ 4848.053222] Start toggling [ 4848.086500] state change SNK_UNATTACHED -> TOGGLING [ 4848.089983] CC1: 0 -> 0, CC2: 3 -> 3 [state TOGGLING, polarity 0, connected] [ 4848.089993] state change TOGGLING -> SNK_ATTACH_WAIT [ 4848.090031] pending state change SNK_ATTACH_WAIT -> SNK_DEBOUNCED @200 ms [ 4848.141162] CC1: 0 -> 0, CC2: 3 -> 0 [state SNK_ATTACH_WAIT, polarity 0, disconnected] [ 4848.141170] state change SNK_ATTACH_WAIT -> SNK_ATTACH_WAIT [ 4848.141184] pending state change SNK_ATTACH_WAIT -> SNK_UNATTACHED @20 ms [ 4848.163156] state change SNK_ATTACH_WAIT -> SNK_UNATTACHED [delayed 20 ms] [ 4848.163162] Start toggling [ 4848.216918] CC1: 0 -> 0, CC2: 0 -> 3 [state TOGGLING, polarity 0, connected] [ 4848.216954] state change TOGGLING -> SNK_ATTACH_WAIT [ 4848.217080] pending state change SNK_ATTACH_WAIT -> SNK_DEBOUNCED @200 ms [ 4848.231771] CC1: 0 -> 0, CC2: 3 -> 0 [state SNK_ATTACH_WAIT, polarity 0, disconnected] [ 4848.231800] state change SNK_ATTACH_WAIT -> SNK_ATTACH_WAIT [ 4848.231857] pending state change SNK_ATTACH_WAIT -> SNK_UNATTACHED @20 ms [ 4848.256022] state change SNK_ATTACH_WAIT -> SNK_UNATTACHED [delayed20 ms] [ 4848.256049] Start toggling [ 4848.871148] VBUS on [ 4848.885324] CC1: 0 -> 0, CC2: 0 -> 3 [state TOGGLING, polarity 0, connected] [ 4848.885372] state change TOGGLING -> SNK_ATTACH_WAIT [ 4848.885548] pending state change SNK_ATTACH_WAIT -> SNK_DEBOUNCED @200 ms [ 4849.088240] state change SNK_ATTACH_WAIT -> SNK_DEBOUNCED [delayed200 ms] [ 4849.088284] state change SNK_DEBOUNCED -> SNK_ATTACHED [ 4849.088291] polarity 1 [ 4849.088769] Requesting mux state 1, usb-role 2, orientation 2 [ 4849.088895] state change SNK_ATTACHED -> SNK_STARTUP [ 4849.088907] state change SNK_STARTUP -> SNK_DISCOVERY [ 4849.088915] Setting voltage/current limit 5000 mV 0 mA [ 4849.088927] vbus=0 charge:=1 [ 4849.090505] state change SNK_DISCOVERY -> SNK_WAIT_CAPABILITIES [ 4849.090828] pending state change SNK_WAIT_CAPABILITIES -> SNK_READY @240 ms [ 4849.335878] state change SNK_WAIT_CAPABILITIES -> SNK_READY [delayed240 ms]
This patch fixes this issue by clearing hard_reset_count in all cases of CC disconnect, i.e. without checking the port->attached flag.
Fixes: 4b4e02c83167 ("typec: tcpm: Move out of staging") Cc: stable@vger.kernel.org Reported-and-tested-by: ChiYuan Huang cy_huang@richtek.com Reviewed-by: Guenter Roeck linux@roeck-us.net Reviewed-by: Heikki Krogerus heikki.krogerus@linux.intel.com Signed-off-by: Li Jun jun.li@nxp.com Link: https://lore.kernel.org/r/1602500592-3817-1-git-send-email-jun.li@nxp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/typec/tcpm/tcpm.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/usb/typec/tcpm/tcpm.c +++ b/drivers/usb/typec/tcpm/tcpm.c @@ -2789,6 +2789,9 @@ static void tcpm_reset_port(struct tcpm_
static void tcpm_detach(struct tcpm_port *port) { + if (tcpm_port_is_disconnected(port)) + port->hard_reset_count = 0; + if (!port->attached) return;
@@ -2797,9 +2800,6 @@ static void tcpm_detach(struct tcpm_port port->tcpc->set_bist_data(port->tcpc, false); }
- if (tcpm_port_is_disconnected(port)) - port->hard_reset_count = 0; - tcpm_reset_port(port); }
From: Ran Wang ran.wang_1@nxp.com
commit 3cd54a618834430a26a648d880dd83d740f2ae30 upstream.
fsl_usb2_device_register() should stop the init if dma_set_mask() returns an error.
Fixes: cae058610465 ("drivers/usb/host: fsl: Set DMA_MASK of usb platform device") Reviewed-by: Peter Chen peter.chen@nxp.com Signed-off-by: Ran Wang ran.wang_1@nxp.com Link: https://lore.kernel.org/r/20201010060308.33693-1-ran.wang_1@nxp.com Cc: stable stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/host/fsl-mph-dr-of.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/drivers/usb/host/fsl-mph-dr-of.c +++ b/drivers/usb/host/fsl-mph-dr-of.c @@ -94,10 +94,13 @@ static struct platform_device *fsl_usb2_
pdev->dev.coherent_dma_mask = ofdev->dev.coherent_dma_mask;
- if (!pdev->dev.dma_mask) + if (!pdev->dev.dma_mask) { pdev->dev.dma_mask = &ofdev->dev.coherent_dma_mask; - else - dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); + } else { + retval = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); + if (retval) + goto error; + }
retval = platform_device_add_data(pdev, pdata, sizeof(*pdata)); if (retval)
From: Bastien Nocera hadess@hadess.net
commit 0942d59b0af46511d59dbf5bd69ec4a64d1a854c upstream.
From: Bastien Nocera hadess@hadess.net
When a USB device driver has both an id_table and a match() function, make sure to check both to find a match, first matching the id_table, then checking the match() function.
This makes it possible to have module autoloading done through the id_table when devices are plugged in, before checking for further device eligibility in the match() function.
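A hedged sketch of a device driver relying on the combined semantics (driver name, vendor ID and bcdDevice cut-off are invented for illustration): the id_table triggers module autoloading when a matching device is plugged in, and match() then narrows eligibility before probe() runs.

  #include <linux/module.h>
  #include <linux/usb.h>

  static const struct usb_device_id example_id_table[] = {
          { .match_flags = USB_DEVICE_ID_MATCH_VENDOR, .idVendor = 0x1234 },
          { }
  };
  MODULE_DEVICE_TABLE(usb, example_id_table);

  static bool example_match(struct usb_device *udev)
  {
          /* Narrow the vendor-only id_table match, e.g. by device release. */
          return le16_to_cpu(udev->descriptor.bcdDevice) >= 0x0100;
  }

  static int example_probe(struct usb_device *udev)
  {
          return 0;
  }

  static void example_disconnect(struct usb_device *udev)
  {
  }

  static struct usb_device_driver example_driver = {
          .name             = "example",
          .probe            = example_probe,
          .disconnect       = example_disconnect,
          .id_table         = example_id_table,
          .match            = example_match,
          .generic_subclass = 1,
  };

With this patch, example_probe() is only reached for devices that satisfy both the id_table entry and example_match().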
Cc: stable@vger.kernel.org # 5.8 Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org Cc: Alan Stern stern@rowland.harvard.edu Co-developed-by: M. Vefa Bicakci m.v.b@runbox.com Tested-by: Bastien Nocera hadess@hadess.net Signed-off-by: Bastien Nocera hadess@hadess.net Signed-off-by: M. Vefa Bicakci m.v.b@runbox.com Tested-by: Pan (Pany) YUAN pany@fedoraproject.org Link: https://lore.kernel.org/r/20201022135521.375211-2-m.v.b@runbox.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/core/driver.c | 30 +++++++++++++++++++++--------- drivers/usb/core/generic.c | 4 +--- drivers/usb/core/usb.h | 2 ++ 3 files changed, 24 insertions(+), 12 deletions(-)
--- a/drivers/usb/core/driver.c +++ b/drivers/usb/core/driver.c @@ -839,6 +839,22 @@ const struct usb_device_id *usb_device_m return NULL; }
+bool usb_driver_applicable(struct usb_device *udev, + struct usb_device_driver *udrv) +{ + if (udrv->id_table && udrv->match) + return usb_device_match_id(udev, udrv->id_table) != NULL && + udrv->match(udev); + + if (udrv->id_table) + return usb_device_match_id(udev, udrv->id_table) != NULL; + + if (udrv->match) + return udrv->match(udev); + + return false; +} + static int usb_device_match(struct device *dev, struct device_driver *drv) { /* devices and interfaces are handled separately */ @@ -853,17 +869,14 @@ static int usb_device_match(struct devic udev = to_usb_device(dev); udrv = to_usb_device_driver(drv);
- if (udrv->id_table) - return usb_device_match_id(udev, udrv->id_table) != NULL; - - if (udrv->match) - return udrv->match(udev); - /* If the device driver under consideration does not have a * id_table or a match function, then let the driver's probe * function decide. */ - return 1; + if (!udrv->id_table && !udrv->match) + return 1; + + return usb_driver_applicable(udev, udrv);
} else if (is_usb_interface(dev)) { struct usb_interface *intf; @@ -941,8 +954,7 @@ static int __usb_bus_reprobe_drivers(str return 0;
udev = to_usb_device(dev); - if (usb_device_match_id(udev, new_udriver->id_table) == NULL && - (!new_udriver->match || new_udriver->match(udev) == 0)) + if (!usb_driver_applicable(udev, new_udriver)) return 0;
ret = device_reprobe(dev); --- a/drivers/usb/core/generic.c +++ b/drivers/usb/core/generic.c @@ -205,9 +205,7 @@ static int __check_usb_generic(struct de udrv = to_usb_device_driver(drv); if (udrv == &usb_generic_driver) return 0; - if (usb_device_match_id(udev, udrv->id_table) != NULL) - return 1; - return (udrv->match && udrv->match(udev)); + return usb_driver_applicable(udev, udrv); }
static bool usb_generic_driver_match(struct usb_device *udev) --- a/drivers/usb/core/usb.h +++ b/drivers/usb/core/usb.h @@ -74,6 +74,8 @@ extern int usb_match_device(struct usb_d const struct usb_device_id *id); extern const struct usb_device_id *usb_device_match_id(struct usb_device *udev, const struct usb_device_id *id); +extern bool usb_driver_applicable(struct usb_device *udev, + struct usb_device_driver *udrv); extern void usb_forced_unbind_intf(struct usb_interface *intf); extern void usb_unbind_and_rebind_marked_interfaces(struct usb_device *udev);
From: Bastien Nocera hadess@hadess.net
commit 0cb686692fd200db12dcfb8231e793c1c98aec41 upstream.
From: Bastien Nocera hadess@hadess.net
Contrary to the comment above the id table, we didn't implement a match function. This meant that every single Apple device that was already plugged in to the computer would have its device driver reprobed when the apple-mfi-fastcharge driver was loaded; e.g. the SD card reader could be reprobed after pivoting root during boot-up, when the apple-mfi-fastcharge module became available.
Make sure that the driver probe isn't being run for unsupported devices by adding a match function that checks the product ID, in addition to the id_table checking the vendor ID.
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1878347 Link: https://lore.kernel.org/linux-usb/CAE3RAxt0WhBEz8zkHrVO5RiyEOasayy1QUAjsv-pB... Fixes: 249fa8217b84 ("USB: Add driver to control USB fast charge for iOS devices") Cc: stable@vger.kernel.org # 5.8 Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org Cc: Alan Stern stern@rowland.harvard.edu [m.v.b: Add Link and Reported-by tags to the commit message] Reported-by: Pany pany@fedoraproject.org Tested-by: Pan (Pany) YUAN pany@fedoraproject.org Tested-by: Bastien Nocera hadess@hadess.net Signed-off-by: Bastien Nocera hadess@hadess.net Signed-off-by: M. Vefa Bicakci m.v.b@runbox.com Link: https://lore.kernel.org/r/20201022135521.375211-3-m.v.b@runbox.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/usb/misc/apple-mfi-fastcharge.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-)
--- a/drivers/usb/misc/apple-mfi-fastcharge.c +++ b/drivers/usb/misc/apple-mfi-fastcharge.c @@ -163,17 +163,23 @@ static const struct power_supply_desc ap .property_is_writeable = apple_mfi_fc_property_is_writeable };
+static bool mfi_fc_match(struct usb_device *udev) +{ + int idProduct; + + idProduct = le16_to_cpu(udev->descriptor.idProduct); + /* See comment above mfi_fc_id_table[] */ + return (idProduct >= 0x1200 && idProduct <= 0x12ff); +} + static int mfi_fc_probe(struct usb_device *udev) { struct power_supply_config battery_cfg = {}; struct mfi_device *mfi = NULL; - int err, idProduct; + int err;
- idProduct = le16_to_cpu(udev->descriptor.idProduct); - /* See comment above mfi_fc_id_table[] */ - if (idProduct < 0x1200 || idProduct > 0x12ff) { + if (!mfi_fc_match(udev)) return -ENODEV; - }
mfi = kzalloc(sizeof(struct mfi_device), GFP_KERNEL); if (!mfi) { @@ -220,6 +226,7 @@ static struct usb_device_driver mfi_fc_d .probe = mfi_fc_probe, .disconnect = mfi_fc_disconnect, .id_table = mfi_fc_id_table, + .match = mfi_fc_match, .generic_subclass = 1, };
From: Chris Wilson chris@chris-wilson.co.uk
commit 8195400f7ea95399f721ad21f4d663a62c65036f upstream.
If i915.ko is being used as a passthrough device, it does not know whether the host is using intel_iommu. Mixing the iommu and gfx causes a few issues (such as scanout overfetch) which we need to work around inside the driver, so if we detect that we are running under a hypervisor, also assume that device access is being virtualised.
Reported-by: Stefan Fritsch sf@sfritsch.de Suggested-by: Stefan Fritsch sf@sfritsch.de Signed-off-by: Chris Wilson chris@chris-wilson.co.uk Cc: Zhenyu Wang zhenyuw@linux.intel.com Cc: Joonas Lahtinen joonas.lahtinen@linux.intel.com Cc: Stefan Fritsch sf@sfritsch.de Cc: stable@vger.kernel.org Tested-by: Stefan Fritsch sf@sfritsch.de Reviewed-by: Zhenyu Wang zhenyuw@linux.intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20201019101523.4145-1-chris@ch... (cherry picked from commit f566fdcd6cc49a9d5b5d782f56e3e7cb243f01b8) Signed-off-by: Rodrigo Vivi rodrigo.vivi@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/i915/i915_drv.h | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -33,6 +33,8 @@ #include <uapi/drm/i915_drm.h> #include <uapi/drm/drm_fourcc.h>
+#include <asm/hypervisor.h> + #include <linux/io-mapping.h> #include <linux/i2c.h> #include <linux/i2c-algo-bit.h> @@ -1716,7 +1718,9 @@ static inline bool intel_vtd_active(void if (intel_iommu_gfx_mapped) return true; #endif - return false; + + /* Running as a guest, we assume the host is enforcing VT'd */ + return !hypervisor_is_type(X86_HYPER_NATIVE); }
static inline bool intel_scanout_needs_vtd_wa(struct drm_i915_private *dev_priv)
From: Jiri Slaby jslaby@suse.cz
commit 6ca03f90527e499dd5e32d6522909e2ad390896b upstream.
Use 'strlen' of the string, add one for the NUL terminator, and simply do 'copy_to_user' instead of the explicit 'for' loop. This makes the KDGKBSENT case more compact.
The only thing we need to take care of is a NULL 'func_table[i]'. Use an empty string in that case.
The original overflow check could never trigger, as the func_buf strings are always shorter than or equal in length to 'struct kbsentry's buffer.
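The NULL handling leans on GCC's 'a ?: b' extension; a tiny standalone illustration of the "empty string, then strlen + 1" idea (the values are made up):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          const char *entry = NULL;           /* a missing func_table slot */
          const char *from = entry ?: "";     /* GNU ?: substitutes "" for NULL */

          /* strlen + 1 so the NUL terminator is copied as well */
          printf("bytes to copy: %zu\n", strlen(from) + 1);   /* prints 1 */
          return 0;
  }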
Cc: stable@vger.kernel.org Signed-off-by: Jiri Slaby jslaby@suse.cz Link: https://lore.kernel.org/r/20201019085517.10176-1-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/tty/vt/keyboard.c | 28 +++++++++------------------- 1 file changed, 9 insertions(+), 19 deletions(-)
--- a/drivers/tty/vt/keyboard.c +++ b/drivers/tty/vt/keyboard.c @@ -1995,9 +1995,7 @@ out: int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm) { struct kbsentry *kbs; - char *p; u_char *q; - u_char __user *up; int sz, fnw_sz; int delta; char *first_free, *fj, *fnw; @@ -2023,23 +2021,15 @@ int vt_do_kdgkb_ioctl(int cmd, struct kb i = array_index_nospec(kbs->kb_func, MAX_NR_FUNC);
switch (cmd) { - case KDGKBSENT: - sz = sizeof(kbs->kb_string) - 1; /* sz should have been - a struct member */ - up = user_kdgkb->kb_string; - p = func_table[i]; - if(p) - for ( ; *p && sz; p++, sz--) - if (put_user(*p, up++)) { - ret = -EFAULT; - goto reterr; - } - if (put_user('\0', up)) { - ret = -EFAULT; - goto reterr; - } - kfree(kbs); - return ((p && *p) ? -EOVERFLOW : 0); + case KDGKBSENT: { + /* size should have been a struct member */ + unsigned char *from = func_table[i] ? : ""; + + ret = copy_to_user(user_kdgkb->kb_string, from, + strlen(from) + 1) ? -EFAULT : 0; + + goto reterr; + } case KDSKBSENT: if (!perm) { ret = -EPERM;
From: Jiri Slaby jslaby@suse.cz
commit 82e61c3909db51d91b9d3e2071557b6435018b80 upstream.
Both read-side users of func_table/func_buf need locking. Without that, one can easily confuse the code by repeatedly setting alternating strings like:

  while (1)
          for (a = 0; a < 2; a++) {
                  struct kbsentry kbs = {};
                  strcpy((char *)kbs.kb_string, a ? ".\n" : "88888\n");
                  ioctl(fd, KDSKBSENT, &kbs);
          }
When that program runs, one can get unexpected output by holding F1 (note the unexpected period on the last line):

  .
  88888
  .8888
So protect all accesses to 'func_table' (and func_buf) by preexisting 'func_buf_lock'.
It is easy in the 'k_fn' handler, as 'puts_queue' is expected not to sleep. On the other hand, KDGKBSENT needs a local (atomic) copy of the string because copy_to_user can sleep. Use the already allocated, but unused, 'kbs->kb_string' for that purpose.
Note that the program above needs at least CAP_SYS_TTY_CONFIG.
This depends on the previous patch and on the func_buf_lock lock added in commit 46ca3f735f34 (tty/vt: fix write/write race in ioctl(KDSKBSENT) handler) in 5.2.
Likely fixes CVE-2020-25656.
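As a general illustration of the pattern applied here (snapshot the shared string into a scratch buffer under the spinlock, drop the lock, and only then call copy_to_user()), a hedged sketch with hypothetical names, not the vt code itself:

  #include <linux/kernel.h>
  #include <linux/spinlock.h>
  #include <linux/string.h>
  #include <linux/uaccess.h>

  /* kbuf is a preallocated scratch buffer of kbuf_size (>= 1) bytes;
   * *shared is protected by *lock. */
  static int copy_shared_string_to_user(char __user *ubuf, char *kbuf,
                                        size_t kbuf_size, const char **shared,
                                        spinlock_t *lock)
  {
          unsigned long flags;
          size_t len;

          spin_lock_irqsave(lock, flags);
          len = strlcpy(kbuf, *shared ?: "", kbuf_size);
          spin_unlock_irqrestore(lock, flags);

          /* copy_to_user() may sleep, hence it runs only after unlocking */
          len = min(len, kbuf_size - 1);
          return copy_to_user(ubuf, kbuf, len + 1) ? -EFAULT : 0;
  }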
Cc: stable@vger.kernel.org Reported-by: Minh Yuan yuanmingbuaa@gmail.com Signed-off-by: Jiri Slaby jslaby@suse.cz Link: https://lore.kernel.org/r/20201019085517.10176-2-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/tty/vt/keyboard.c | 17 +++++++++++++---- 1 file changed, 13 insertions(+), 4 deletions(-)
--- a/drivers/tty/vt/keyboard.c +++ b/drivers/tty/vt/keyboard.c @@ -743,8 +743,13 @@ static void k_fn(struct vc_data *vc, uns return;
if ((unsigned)value < ARRAY_SIZE(func_table)) { + unsigned long flags; + + spin_lock_irqsave(&func_buf_lock, flags); if (func_table[value]) puts_queue(vc, func_table[value]); + spin_unlock_irqrestore(&func_buf_lock, flags); + } else pr_err("k_fn called with value=%d\n", value); } @@ -1991,7 +1996,7 @@ out: #undef s #undef v
-/* FIXME: This one needs untangling and locking */ +/* FIXME: This one needs untangling */ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm) { struct kbsentry *kbs; @@ -2023,10 +2028,14 @@ int vt_do_kdgkb_ioctl(int cmd, struct kb switch (cmd) { case KDGKBSENT: { /* size should have been a struct member */ - unsigned char *from = func_table[i] ? : ""; + ssize_t len = sizeof(user_kdgkb->kb_string); + + spin_lock_irqsave(&func_buf_lock, flags); + len = strlcpy(kbs->kb_string, func_table[i] ? : "", len); + spin_unlock_irqrestore(&func_buf_lock, flags);
- ret = copy_to_user(user_kdgkb->kb_string, from, - strlen(from) + 1) ? -EFAULT : 0; + ret = copy_to_user(user_kdgkb->kb_string, kbs->kb_string, + len + 1) ? -EFAULT : 0;
goto reterr; }
From: Jiri Slaby jslaby@suse.cz
commit d54654790302ccaa72589380dce060d376ef8716 upstream.
In commit 5ba127878722, we shuffled the check of 'perm' around. But my brain somehow inverted the condition in 'do_unimap_ioctl' (I thought it was ||, not &&), so GIO_UNIMAP stopped working completely.
Move the 'perm' checks back to do_unimap_ioctl and do them right this time. In fact, this reverts this part of the code to the pre-5ba127878722 state, except 'perm' is now a bool.
Fixes: 5ba127878722 ("vt_ioctl: move perm checks level up") Cc: stable@vger.kernel.org Reported-by: Fabian Vogt fvogt@suse.com Signed-off-by: Jiri Slaby jslaby@suse.cz Link: https://lore.kernel.org/r/20201026055419.30518-1-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/tty/vt/vt_ioctl.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-)
--- a/drivers/tty/vt/vt_ioctl.c +++ b/drivers/tty/vt/vt_ioctl.c @@ -550,7 +550,7 @@ static int vt_io_fontreset(struct consol }
static inline int do_unimap_ioctl(int cmd, struct unimapdesc __user *user_ud, - struct vc_data *vc) + bool perm, struct vc_data *vc) { struct unimapdesc tmp;
@@ -558,9 +558,11 @@ static inline int do_unimap_ioctl(int cm return -EFAULT; switch (cmd) { case PIO_UNIMAP: + if (!perm) + return -EPERM; return con_set_unimap(vc, tmp.entry_ct, tmp.entries); case GIO_UNIMAP: - if (fg_console != vc->vc_num) + if (!perm && fg_console != vc->vc_num) return -EPERM; return con_get_unimap(vc, tmp.entry_ct, &(user_ud->entry_ct), tmp.entries); @@ -640,10 +642,7 @@ static int vt_io_ioctl(struct vc_data *v
case PIO_UNIMAP: case GIO_UNIMAP: - if (!perm) - return -EPERM; - - return do_unimap_ioctl(cmd, up, vc); + return do_unimap_ioctl(cmd, up, perm, vc);
default: return -ENOIOCTLCMD;
From: Jason Gerecke jason.gerecke@wacom.com
commit d9216d753b2b1406b801243b12aaf00a5ce5b861 upstream.
It has recently been reported that the "heartbeat" report from devices like the 2nd-gen Intuos Pro (PTH-460, PTH-660, PTH-860) or the 2nd-gen Bluetooth-enabled Intuos tablets (CTL-4100WL, CTL-6100WL) can cause the driver to send a spurious BTN_TOUCH=0 once per second in the middle of drawing. This can result in broken lines while drawing on Chrome OS.
The source of the issue has been traced back to a change which modified the driver to only call `wacom_wac_pad_report()` once per report instead of once per collection. As part of this change, pad-handling code was removed from `wacom_wac_collection()` under the assumption that the `WACOM_PEN_FIELD` and `WACOM_TOUCH_FIELD` checks would not be satisfied when a pad or battery collection was being processed.
To be clear, the macros `WACOM_PAD_FIELD` and `WACOM_PEN_FIELD` do not currently check exclusive conditions. In fact, most "pad" fields will also appear to be "pen" fields simply due to their presence inside of a Digitizer application collection. Because of this, the removal of the check from `wacom_wac_collection()` just causes pad / battery collections to trigger a call to `wacom_wac_pen_report()` instead. The pen report function in turn resets the tip switch state just prior to exiting, resulting in the observed BTN_TOUCH=0 symptom.
To correct this, we restore a version of the `WACOM_PAD_FIELD` check in `wacom_wac_collection()` and return early. This effectively prevents pad / battery collections from being reported until the very end of the report as originally intended.
Fixes: d4b8efeb46d9 ("HID: wacom: generic: Correct pad syncing") Cc: stable@vger.kernel.org # v4.17+ Signed-off-by: Jason Gerecke jason.gerecke@wacom.com Reviewed-by: Ping Cheng ping.cheng@wacom.com Tested-by: Ping Cheng ping.cheng@wacom.com Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/hid/wacom_wac.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/hid/wacom_wac.c +++ b/drivers/hid/wacom_wac.c @@ -2773,7 +2773,9 @@ static int wacom_wac_collection(struct h if (report->type != HID_INPUT_REPORT) return -1;
- if (WACOM_PEN_FIELD(field) && wacom->wacom_wac.pen_input) + if (WACOM_PAD_FIELD(field)) + return 0; + else if (WACOM_PEN_FIELD(field) && wacom->wacom_wac.pen_input) wacom_wac_pen_report(hdev, report); else if (WACOM_FINGER_FIELD(field) && wacom->wacom_wac.touch_input) wacom_wac_finger_report(hdev, report);
From: Borislav Petkov bp@suse.de
commit b3149ffcdb31a8eb854cc442a389ae0b539bf28a upstream.
Add asm/mce.h to asm/asm-prototypes.h so that that asm symbol's checksum can be generated in order to support CONFIG_MODVERSIONS with it and fix:
WARNING: modpost: EXPORT symbol "copy_mc_fragile" [vmlinux] version \ generation failed, symbol will not be versioned.
For reference see:
4efca4ed05cb ("kbuild: modversions for EXPORT_SYMBOL() for asm") 334bb7738764 ("x86/kbuild: enable modversions for symbols exported from asm")
Fixes: ec6347bb4339 ("x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()") Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/20201007111447.GA23257@zn.tnic Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/include/asm/asm-prototypes.h | 1 + 1 file changed, 1 insertion(+)
--- a/arch/x86/include/asm/asm-prototypes.h +++ b/arch/x86/include/asm/asm-prototypes.h @@ -5,6 +5,7 @@ #include <asm/string.h> #include <asm/page.h> #include <asm/checksum.h> +#include <asm/mce.h>
#include <asm-generic/asm-prototypes.h>
From: Russell King rmk+kernel@armlinux.org.uk
commit 82776f6c75a90e1d2103e689b84a689de8f1aa02 upstream.
Commit 293f89959483 ("tty: serial: 21285: stop using the unused[] variable from struct uart_port") introduced a bug which stops the transmit interrupt from being disabled when there are no characters to transmit; disabling the transmit interrupt at the interrupt controller is the only way to stop an interrupt storm. If this interrupt is not disabled when there are no transmit characters, we end up with an interrupt storm which prevents the machine from making forward progress.
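The root cause is visible in the diff below: the bit operations were applied to a local copy of port->private_data instead of to its storage, so the enable/disable state was never recorded. A small standalone sketch of that pitfall (plain C, hypothetical struct, not the driver code):

  #include <stdio.h>

  struct port { void *private_data; };

  int main(void)
  {
          struct port p = { .private_data = NULL };

          /* Buggy pattern: the cast yields a local copy, so only the
           * copy changes and the port never remembers the bit. */
          unsigned long copy = (unsigned long)p.private_data;
          copy |= 1UL;
          printf("copy=%#lx port=%p\n", copy, p.private_data);  /* 0x1 (nil) */

          /* Fixed pattern: operate on the member's storage itself. */
          unsigned long *state = (unsigned long *)&p.private_data;
          *state |= 1UL;
          printf("port=%p\n", p.private_data);                  /* 0x1 */

          return 0;
  }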
Fixes: 293f89959483 ("tty: serial: 21285: stop using the unused[] variable from struct uart_port") Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Cc: stable stable@vger.kernel.org Link: https://lore.kernel.org/r/E1kU4GS-0006lE-OO@rmk-PC.armlinux.org.uk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/tty/serial/21285.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-)
--- a/drivers/tty/serial/21285.c +++ b/drivers/tty/serial/21285.c @@ -50,25 +50,25 @@ static const char serial21285_name[] = "
static bool is_enabled(struct uart_port *port, int bit) { - unsigned long private_data = (unsigned long)port->private_data; + unsigned long *private_data = (unsigned long *)&port->private_data;
- if (test_bit(bit, &private_data)) + if (test_bit(bit, private_data)) return true; return false; }
static void enable(struct uart_port *port, int bit) { - unsigned long private_data = (unsigned long)port->private_data; + unsigned long *private_data = (unsigned long *)&port->private_data;
- set_bit(bit, &private_data); + set_bit(bit, private_data); }
static void disable(struct uart_port *port, int bit) { - unsigned long private_data = (unsigned long)port->private_data; + unsigned long *private_data = (unsigned long *)&port->private_data;
- clear_bit(bit, &private_data); + clear_bit(bit, private_data); }
#define is_tx_enabled(port) is_enabled(port, tx_enabled_bit)
From: Vladimir Oltean vladimir.oltean@nxp.com
commit c97f2a6fb3dfbfbbc88edc8ea62ef2b944e18849 upstream.
Prior to the commit that this one fixes, the FIFO size was derived from the read-only register LPUARTx_FIFO[TXFIFOSIZE] using the following formula:
TX FIFO size = 2 ^ (LPUARTx_FIFO[TXFIFOSIZE] - 1)
The documentation for LS1021A is a mess. Under chapter 26.1.3 LS1021A LPUART module special consideration, it mentions TXFIFO_SZ and RXFIFO_SZ being equal to 4, and in the register description for LPUARTx_FIFO, it shows the out-of-reset value of TXFIFOSIZE and RXFIFOSIZE fields as "011", even though these registers read as "101" in reality.
And when LPUART on LS1021A was working, the "101" value did correspond to "16 datawords", by applying the formula above, even though the documentation is wrong again (!!!!) and says that "101" means 64 datawords (hint: it doesn't).
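Plugging the value the register actually reads on LS1021A into the old formula quoted above gives the working 16-word depth (a standalone arithmetic check, assuming the 2^(n-1) encoding):

  #include <stdio.h>

  int main(void)
  {
          unsigned int txfifosize = 0x5;                /* register reads 0b101 */
          unsigned int words = 1u << (txfifosize - 1);  /* 2^(5-1) */

          printf("%u datawords\n", words);              /* prints 16 */
          return 0;
  }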
So the "new" formula created by commit f77ebb241ce0 was bound to be wrong for LS1021A, because it relied only on false data and no actual experimentation.
Interestingly, in commit c2f448cff22a ("tty: serial: fsl_lpuart: add LS1028A support"), Michael Walle applied a workaround to this by manually setting the FIFO widths for LS1028A. It looks like the same values are used by LS1021A as well, in fact.
When the driver thinks that it has a deeper FIFO than it really has, getty (user space) output gets truncated.
Many thanks to Michael for pointing out where to look.
Fixes: f77ebb241ce0 ("tty: serial: fsl_lpuart: correct the FIFO depth size") Suggested-by: Michael Walle michael@walle.cc Signed-off-by: Vladimir Oltean vladimir.oltean@nxp.com Link: https://lore.kernel.org/r/20201023013429.3551026-1-vladimir.oltean@nxp.com Reviewed-by:Fugang Duan fugang.duan@nxp.com Cc: stable stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/tty/serial/fsl_lpuart.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-)
--- a/drivers/tty/serial/fsl_lpuart.c +++ b/drivers/tty/serial/fsl_lpuart.c @@ -314,9 +314,10 @@ MODULE_DEVICE_TABLE(of, lpuart_dt_ids); /* Forward declare this for the dma callbacks*/ static void lpuart_dma_tx_complete(void *arg);
-static inline bool is_ls1028a_lpuart(struct lpuart_port *sport) +static inline bool is_layerscape_lpuart(struct lpuart_port *sport) { - return sport->devtype == LS1028A_LPUART; + return (sport->devtype == LS1021A_LPUART || + sport->devtype == LS1028A_LPUART); }
static inline bool is_imx8qxp_lpuart(struct lpuart_port *sport) @@ -1644,11 +1645,11 @@ static int lpuart32_startup(struct uart_ UARTFIFO_FIFOSIZE_MASK);
/* - * The LS1028A has a fixed length of 16 words. Although it supports the - * RX/TXSIZE fields their encoding is different. Eg the reference manual - * states 0b101 is 16 words. + * The LS1021A and LS1028A have a fixed FIFO depth of 16 words. + * Although they support the RX/TXSIZE fields, their encoding is + * different. Eg the reference manual states 0b101 is 16 words. */ - if (is_ls1028a_lpuart(sport)) { + if (is_layerscape_lpuart(sport)) { sport->rxfifo_size = 16; sport->txfifo_size = 16; sport->port.fifosize = sport->txfifo_size;
From: Gaurav Kohli gkohli@codeaurora.org
commit bbeb97464eefc65f506084fd9f18f21653e01137 upstream.
The below race can occur if trace_open and a resize of the CPU buffer run in parallel on different CPUs:

  CPUX                                 CPUY
                                       ring_buffer_resize
                                       atomic_read(&buffer->resize_disabled)
  tracing_open
  tracing_reset_online_cpus
  ring_buffer_reset_cpu
  rb_reset_cpu
                                       rb_update_pages
                                       remove/insert pages
  resetting pointer
This race can cause a data abort or sometimes an infinite loop in rb_remove_pages and rb_insert_pages while checking the pages for sanity.
Take buffer lock to fix this.
Link: https://lkml.kernel.org/r/1601976833-24377-1-git-send-email-gkohli@codeauror...
Cc: stable@vger.kernel.org Fixes: b23d7a5f4a07a ("ring-buffer: speed up buffer resets by avoiding synchronize_rcu for each CPU") Signed-off-by: Gaurav Kohli gkohli@codeaurora.org Signed-off-by: Steven Rostedt (VMware) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/trace/ring_buffer.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
--- a/kernel/trace/ring_buffer.c +++ b/kernel/trace/ring_buffer.c @@ -4866,6 +4866,9 @@ void ring_buffer_reset_cpu(struct trace_ if (!cpumask_test_cpu(cpu, buffer->cpumask)) return;
+ /* prevent another thread from changing buffer sizes */ + mutex_lock(&buffer->mutex); + atomic_inc(&cpu_buffer->resize_disabled); atomic_inc(&cpu_buffer->record_disabled);
@@ -4876,6 +4879,8 @@ void ring_buffer_reset_cpu(struct trace_
atomic_dec(&cpu_buffer->record_disabled); atomic_dec(&cpu_buffer->resize_disabled); + + mutex_unlock(&buffer->mutex); } EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);
@@ -4889,6 +4894,9 @@ void ring_buffer_reset_online_cpus(struc struct ring_buffer_per_cpu *cpu_buffer; int cpu;
+ /* prevent another thread from changing buffer sizes */ + mutex_lock(&buffer->mutex); + for_each_online_buffer_cpu(buffer, cpu) { cpu_buffer = buffer->buffers[cpu];
@@ -4907,6 +4915,8 @@ void ring_buffer_reset_online_cpus(struc atomic_dec(&cpu_buffer->record_disabled); atomic_dec(&cpu_buffer->resize_disabled); } + + mutex_unlock(&buffer->mutex); }
/**
From: Michael S. Tsirkin mst@redhat.com
commit 5e1a3149eec8675c2767cc465903f5e4829de5b0 upstream.
This reverts commit 7ed9e3d97c32d969caded2dfb6e67c1a2cc5a0b1.
The patch creates a DoS risk since it can result in a high order memory allocation.
Fixes: 7ed9e3d97c32d ("vhost-vdpa: fix page pinning leakage in error path") Cc: stable@vger.kernel.org Signed-off-by: Michael S. Tsirkin mst@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/vhost/vdpa.c | 117 ++++++++++++++++++++------------------------------- 1 file changed, 47 insertions(+), 70 deletions(-)
--- a/drivers/vhost/vdpa.c +++ b/drivers/vhost/vdpa.c @@ -595,19 +595,21 @@ static int vhost_vdpa_process_iotlb_upda struct vhost_dev *dev = &v->vdev; struct vhost_iotlb *iotlb = dev->iotlb; struct page **page_list; - struct vm_area_struct **vmas; + unsigned long list_size = PAGE_SIZE / sizeof(struct page *); unsigned int gup_flags = FOLL_LONGTERM; - unsigned long map_pfn, last_pfn = 0; - unsigned long npages, lock_limit; - unsigned long i, nmap = 0; + unsigned long npages, cur_base, map_pfn, last_pfn = 0; + unsigned long locked, lock_limit, pinned, i; u64 iova = msg->iova; - long pinned; int ret = 0;
if (vhost_iotlb_itree_first(iotlb, msg->iova, msg->iova + msg->size - 1)) return -EEXIST;
+ page_list = (struct page **) __get_free_page(GFP_KERNEL); + if (!page_list) + return -ENOMEM; + if (msg->perm & VHOST_ACCESS_WO) gup_flags |= FOLL_WRITE;
@@ -615,86 +617,61 @@ static int vhost_vdpa_process_iotlb_upda if (!npages) return -EINVAL;
- page_list = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); - vmas = kvmalloc_array(npages, sizeof(struct vm_area_struct *), - GFP_KERNEL); - if (!page_list || !vmas) { - ret = -ENOMEM; - goto free; - } - mmap_read_lock(dev->mm);
+ locked = atomic64_add_return(npages, &dev->mm->pinned_vm); lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT; - if (npages + atomic64_read(&dev->mm->pinned_vm) > lock_limit) { - ret = -ENOMEM; - goto unlock; - }
- pinned = pin_user_pages(msg->uaddr & PAGE_MASK, npages, gup_flags, - page_list, vmas); - if (npages != pinned) { - if (pinned < 0) { - ret = pinned; - } else { - unpin_user_pages(page_list, pinned); - ret = -ENOMEM; - } - goto unlock; + if (locked > lock_limit) { + ret = -ENOMEM; + goto out; }
+ cur_base = msg->uaddr & PAGE_MASK; iova &= PAGE_MASK; - map_pfn = page_to_pfn(page_list[0]);
- /* One more iteration to avoid extra vdpa_map() call out of loop. */ - for (i = 0; i <= npages; i++) { - unsigned long this_pfn; - u64 csize; - - /* The last chunk may have no valid PFN next to it */ - this_pfn = i < npages ? page_to_pfn(page_list[i]) : -1UL; - - if (last_pfn && (this_pfn == -1UL || - this_pfn != last_pfn + 1)) { - /* Pin a contiguous chunk of memory */ - csize = last_pfn - map_pfn + 1; - ret = vhost_vdpa_map(v, iova, csize << PAGE_SHIFT, - map_pfn << PAGE_SHIFT, - msg->perm); - if (ret) { - /* - * Unpin the rest chunks of memory on the - * flight with no corresponding vdpa_map() - * calls having been made yet. On the other - * hand, vdpa_unmap() in the failure path - * is in charge of accounting the number of - * pinned pages for its own. - * This asymmetrical pattern of accounting - * is for efficiency to pin all pages at - * once, while there is no other callsite - * of vdpa_map() than here above. - */ - unpin_user_pages(&page_list[nmap], - npages - nmap); - goto out; + while (npages) { + pinned = min_t(unsigned long, npages, list_size); + ret = pin_user_pages(cur_base, pinned, + gup_flags, page_list, NULL); + if (ret != pinned) + goto out; + + if (!last_pfn) + map_pfn = page_to_pfn(page_list[0]); + + for (i = 0; i < ret; i++) { + unsigned long this_pfn = page_to_pfn(page_list[i]); + u64 csize; + + if (last_pfn && (this_pfn != last_pfn + 1)) { + /* Pin a contiguous chunk of memory */ + csize = (last_pfn - map_pfn + 1) << PAGE_SHIFT; + if (vhost_vdpa_map(v, iova, csize, + map_pfn << PAGE_SHIFT, + msg->perm)) + goto out; + map_pfn = this_pfn; + iova += csize; } - atomic64_add(csize, &dev->mm->pinned_vm); - nmap += csize; - iova += csize << PAGE_SHIFT; - map_pfn = this_pfn; + + last_pfn = this_pfn; } - last_pfn = this_pfn; + + cur_base += ret << PAGE_SHIFT; + npages -= ret; }
- WARN_ON(nmap != npages); + /* Pin the rest chunk */ + ret = vhost_vdpa_map(v, iova, (last_pfn - map_pfn + 1) << PAGE_SHIFT, + map_pfn << PAGE_SHIFT, msg->perm); out: - if (ret) + if (ret) { vhost_vdpa_unmap(v, msg->iova, msg->size); -unlock: + atomic64_sub(npages, &dev->mm->pinned_vm); + } mmap_read_unlock(dev->mm); -free: - kvfree(vmas); - kvfree(page_list); + free_page((unsigned long)page_list); return ret; }
From: Christophe Leroy christophe.leroy@csgroup.eu
commit 542db12a9c42d1ce70c45091765e02f74c129f43 upstream.
The following random segfault is observed from time to time with map_hugetlb selftest:
root@localhost:~# ./map_hugetlb 1 19
524288 kB hugepages
Mapping 1 Mbytes
Segmentation fault
[ 31.219972] map_hugetlb[365]: segfault (11) at 117 nip 77974f8c lr 779a6834 code 1 in ld-2.23.so[77966000+21000] [ 31.220192] map_hugetlb[365]: code: 9421ffc0 480318d1 93410028 90010044 9361002c 93810030 93a10034 93c10038 [ 31.220307] map_hugetlb[365]: code: 93e1003c 93210024 8123007c 81430038 <80e90004> 814a0004 7f443a14 813a0004 [ 31.221911] BUG: Bad rss-counter state mm:(ptrval) type:MM_FILEPAGES val:33 [ 31.229362] BUG: Bad rss-counter state mm:(ptrval) type:MM_ANONPAGES val:5
This fault is due to hugetlb_free_pgd_range() freeing page tables that are also used by regular pages.
As explained in the comment at the beginning of hugetlb_free_pgd_range(), the verification done in free_pgd_range() on floor and ceiling is not done here, which means hugetlb_free_pte_range() can free outside the expected range.
As the verification cannot be done in hugetlb_free_pgd_range(), it must be done in hugetlb_free_pte_range().
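For readers unfamiliar with the floor/ceiling convention, the guard added below mirrors the one already used by free_pgd_range() and friends. The following is only an editorial annotation of those checks; the helper name and the PMD_MASK value are illustrative, not taken from the patch.

#define PMD_MASK	(~((1UL << 21) - 1))	/* illustrative: 2 MiB per PMD entry */

/* Return non-zero only when the PTE page can be freed without discarding
 * translations outside the [floor, ceiling) window. */
static int pte_page_freeable(unsigned long addr, unsigned long end,
			     unsigned long floor, unsigned long ceiling)
{
	unsigned long start = addr & PMD_MASK;	/* first address this PTE page maps */

	if (start < floor)		/* the page also maps addresses below floor */
		return 0;
	if (ceiling) {
		ceiling &= PMD_MASK;
		if (!ceiling)		/* rounding turned a real limit into 0, which
					 * would look like "no ceiling"; be conservative */
			return 0;
	}
	if (end - 1 > ceiling - 1)	/* range reaches above the ceiling; the "- 1"
					 * lets ceiling == 0 keep meaning "no ceiling",
					 * since 0 - 1 wraps to ULONG_MAX */
		return 0;
	return 1;
}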
Fixes: b250c8c08c79 ("powerpc/8xx: Manage 512k huge pages as standard pages.") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy christophe.leroy@csgroup.eu Reviewed-by: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/f0cb2a5477cd87d1eaadb128042e20aeb2bc2859.159886067... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/mm/hugetlbpage.c | 18 ++++++++++++++++-- 1 file changed, 16 insertions(+), 2 deletions(-)
--- a/arch/powerpc/mm/hugetlbpage.c +++ b/arch/powerpc/mm/hugetlbpage.c @@ -330,10 +330,24 @@ static void free_hugepd_range(struct mmu get_hugepd_cache_index(pdshift - shift)); }
-static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr) +static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, + unsigned long addr, unsigned long end, + unsigned long floor, unsigned long ceiling) { + unsigned long start = addr; pgtable_t token = pmd_pgtable(*pmd);
+ start &= PMD_MASK; + if (start < floor) + return; + if (ceiling) { + ceiling &= PMD_MASK; + if (!ceiling) + return; + } + if (end - 1 > ceiling - 1) + return; + pmd_clear(pmd); pte_free_tlb(tlb, token, addr); mm_dec_nr_ptes(tlb->mm); @@ -363,7 +377,7 @@ static void hugetlb_free_pmd_range(struc */ WARN_ON(!IS_ENABLED(CONFIG_PPC_8xx));
- hugetlb_free_pte_range(tlb, pmd, addr); + hugetlb_free_pte_range(tlb, pmd, addr, end, floor, ceiling);
continue; }
From: Jan Kara jack@suse.cz
commit a7be300de800e755714c71103ae4a0d205e41e99 upstream.
udf_process_sequence() allocates a temporary array for processing partition descriptors on the volume, which it fails to free. Free the array when it is not needed anymore.
Fixes: 7b78fd02fb19 ("udf: Fix handling of Partition Descriptors") CC: stable@vger.kernel.org Reported-by: syzbot+128f4dd6e796c98b3760@syzkaller.appspotmail.com Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/udf/super.c | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-)
--- a/fs/udf/super.c +++ b/fs/udf/super.c @@ -1704,7 +1704,8 @@ static noinline int udf_process_sequence "Pointers (max %u supported)\n", UDF_MAX_TD_NESTING); brelse(bh); - return -EIO; + ret = -EIO; + goto out; }
vdp = (struct volDescPtr *)bh->b_data; @@ -1724,7 +1725,8 @@ static noinline int udf_process_sequence curr = get_volume_descriptor_record(ident, bh, &data); if (IS_ERR(curr)) { brelse(bh); - return PTR_ERR(curr); + ret = PTR_ERR(curr); + goto out; } /* Descriptor we don't care about? */ if (!curr) @@ -1746,28 +1748,31 @@ static noinline int udf_process_sequence */ if (!data.vds[VDS_POS_PRIMARY_VOL_DESC].block) { udf_err(sb, "Primary Volume Descriptor not found!\n"); - return -EAGAIN; + ret = -EAGAIN; + goto out; } ret = udf_load_pvoldesc(sb, data.vds[VDS_POS_PRIMARY_VOL_DESC].block); if (ret < 0) - return ret; + goto out;
if (data.vds[VDS_POS_LOGICAL_VOL_DESC].block) { ret = udf_load_logicalvol(sb, data.vds[VDS_POS_LOGICAL_VOL_DESC].block, fileset); if (ret < 0) - return ret; + goto out; }
/* Now handle prevailing Partition Descriptors */ for (i = 0; i < data.num_part_descs; i++) { ret = udf_load_partdesc(sb, data.part_descs_loc[i].rec.block); if (ret < 0) - return ret; + goto out; } - - return 0; + ret = 0; +out: + kfree(data.part_descs_loc); + return ret; }
/*
From: Paul Cercueil paul@crapouillou.net
commit baf6fd97b16ea8f981b8a8b04039596f32fc2972 upstream.
The jz4780_dma_tx_status() function would check if a channel's cookie state was set to 'completed', and if not, it would enter the critical section. However, in that time frame, the jz4780_dma_chan_irq() function was able to set the cookie to 'completed', and clear the jzchan->vchan pointer, which was dereferenced in the critical section of the first function.
Fix this race by checking the channel's cookie state after entering the critical section and not before.
Fixes: d894fc6046fe ("dmaengine: jz4780: add driver for the Ingenic JZ4780 DMA controller") Cc: stable@vger.kernel.org # v4.0 Signed-off-by: Paul Cercueil paul@crapouillou.net Reported-by: Artur Rojek contact@artur-rojek.eu Tested-by: Artur Rojek contact@artur-rojek.eu Link: https://lore.kernel.org/r/20201004140307.885556-1-paul@crapouillou.net Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/dma/dma-jz4780.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
--- a/drivers/dma/dma-jz4780.c +++ b/drivers/dma/dma-jz4780.c @@ -639,11 +639,11 @@ static enum dma_status jz4780_dma_tx_sta unsigned long flags; unsigned long residue = 0;
+ spin_lock_irqsave(&jzchan->vchan.lock, flags); + status = dma_cookie_status(chan, cookie, txstate); if ((status == DMA_COMPLETE) || (txstate == NULL)) - return status; - - spin_lock_irqsave(&jzchan->vchan.lock, flags); + goto out_unlock_irqrestore;
vdesc = vchan_find_desc(&jzchan->vchan, cookie); if (vdesc) { @@ -660,6 +660,7 @@ static enum dma_status jz4780_dma_tx_sta && jzchan->desc->status & (JZ_DMA_DCS_AR | JZ_DMA_DCS_HLT)) status = DMA_ERROR;
+out_unlock_irqrestore: spin_unlock_irqrestore(&jzchan->vchan.lock, flags); return status; }
From: Laurent Vivier lvivier@redhat.com
commit 1eca16b231570c8ae57fb91fdfbc48eb52c6a93b upstream.
Since commit f959dcd6ddfd ("dma-direct: Fix potential NULL pointer dereference") an error is reported when we load vdpa_sim and virtio-vdpa:
[ 129.351207] net eth0: Unexpected TXQ (0) queue failure: -12
It seems that dma_mask is not initialized.
This patch initializes dma_mask and calls dma_set_mask_and_coherent() to fix the problem.
Full log:
[ 128.548628] ------------[ cut here ]------------ [ 128.553268] WARNING: CPU: 23 PID: 1105 at kernel/dma/mapping.c:149 dma_map_page_attrs+0x14c/0x1d0 [ 128.562139] Modules linked in: virtio_net net_failover failover virtio_vdpa vdpa_sim vringh vhost_iotlb vdpa xt_CHECKSUM xt_MASQUERADE xt_conntrack ipt_REJECT nf_reject_ipv4 nft_compat nft_counter nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables nfnetlink tun bridge stp llc iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi rfkill intel_rapl_msr intel_rapl_common isst_if_common sunrpc skx_edac nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel ipmi_ssif kvm mgag200 i2c_algo_bit irqbypass drm_kms_helper crct10dif_pclmul crc32_pclmul syscopyarea ghash_clmulni_intel iTCO_wdt sysfillrect iTCO_vendor_support sysimgblt rapl fb_sys_fops dcdbas intel_cstate drm acpi_ipmi ipmi_si mei_me dell_smbios intel_uncore ipmi_devintf mei i2c_i801 dell_wmi_descriptor wmi_bmof pcspkr lpc_ich i2c_smbus ipmi_msghandler acpi_power_meter ip_tables xfs libcrc32c sd_mod t10_pi sg ahci libahci libata megaraid_sas tg3 crc32c_intel wmi dm_mirror dm_region_hash dm_log [ 128.562188] dm_mod [ 128.651334] CPU: 23 PID: 1105 Comm: NetworkManager Tainted: G S I 5.10.0-rc1+ #59 [ 128.659939] Hardware name: Dell Inc. PowerEdge R440/04JN2K, BIOS 2.8.1 06/30/2020 [ 128.667419] RIP: 0010:dma_map_page_attrs+0x14c/0x1d0 [ 128.672384] Code: 1c 25 28 00 00 00 0f 85 97 00 00 00 48 83 c4 10 5b 5d 41 5c 41 5d c3 4c 89 da eb d7 48 89 f2 48 2b 50 18 48 89 d0 eb 8d 0f 0b <0f> 0b 48 c7 c0 ff ff ff ff eb c3 48 89 d9 48 8b 40 40 e8 2d a0 aa [ 128.691131] RSP: 0018:ffffae0f0151f3c8 EFLAGS: 00010246 [ 128.696357] RAX: ffffffffc06b7400 RBX: 00000000000005fa RCX: 0000000000000000 [ 128.703488] RDX: 0000000000000040 RSI: ffffcee3c7861200 RDI: ffff9e2bc16cd000 [ 128.710620] RBP: 0000000000000000 R08: 0000000000000002 R09: 0000000000000000 [ 128.717754] R10: 0000000000000002 R11: 0000000000000000 R12: ffff9e472cb291f8 [ 128.724886] R13: ffff9e2bc14da780 R14: ffff9e472bc20000 R15: ffff9e2bc1b14940 [ 128.732020] FS: 00007f887bae23c0(0000) GS:ffff9e4ac01c0000(0000) knlGS:0000000000000000 [ 128.740105] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 128.745852] CR2: 0000562bc09de998 CR3: 00000003c156c006 CR4: 00000000007706e0 [ 128.752982] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 128.760114] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 128.767247] PKRU: 55555554 [ 128.769961] Call Trace: [ 128.772418] virtqueue_add+0x81e/0xb00 [ 128.776176] virtqueue_add_inbuf_ctx+0x26/0x30 [ 128.780625] try_fill_recv+0x3a2/0x6e0 [virtio_net] [ 128.785509] virtnet_open+0xf9/0x180 [virtio_net] [ 128.790217] __dev_open+0xe8/0x180 [ 128.793620] __dev_change_flags+0x1a7/0x210 [ 128.797808] dev_change_flags+0x21/0x60 [ 128.801646] do_setlink+0x328/0x10e0 [ 128.805227] ? __nla_validate_parse+0x121/0x180 [ 128.809757] ? __nla_parse+0x21/0x30 [ 128.813338] ? inet6_validate_link_af+0x5c/0xf0 [ 128.817871] ? cpumask_next+0x17/0x20 [ 128.821535] ? __snmp6_fill_stats64.isra.54+0x6b/0x110 [ 128.826676] ? __nla_validate_parse+0x47/0x180 [ 128.831120] __rtnl_newlink+0x541/0x8e0 [ 128.834962] ? __nla_reserve+0x38/0x50 [ 128.838713] ? security_sock_rcv_skb+0x2a/0x40 [ 128.843158] ? netlink_deliver_tap+0x2c/0x1e0 [ 128.847518] ? netlink_attachskb+0x1d8/0x220 [ 128.851793] ? skb_queue_tail+0x1b/0x50 [ 128.855641] ? fib6_clean_node+0x43/0x170 [ 128.859652] ? _cond_resched+0x15/0x30 [ 128.863406] ? 
kmem_cache_alloc_trace+0x3a3/0x420 [ 128.868110] rtnl_newlink+0x43/0x60 [ 128.871602] rtnetlink_rcv_msg+0x12c/0x380 [ 128.875701] ? rtnl_calcit.isra.39+0x110/0x110 [ 128.880147] netlink_rcv_skb+0x50/0x100 [ 128.883987] netlink_unicast+0x1a5/0x280 [ 128.887913] netlink_sendmsg+0x23d/0x470 [ 128.891839] sock_sendmsg+0x5b/0x60 [ 128.895331] ____sys_sendmsg+0x1ef/0x260 [ 128.899255] ? copy_msghdr_from_user+0x5c/0x90 [ 128.903702] ___sys_sendmsg+0x7c/0xc0 [ 128.907369] ? dev_forward_change+0x130/0x130 [ 128.911731] ? sysctl_head_finish.part.29+0x24/0x40 [ 128.916616] ? new_sync_write+0x11f/0x1b0 [ 128.920628] ? mntput_no_expire+0x47/0x240 [ 128.924727] __sys_sendmsg+0x57/0xa0 [ 128.928309] do_syscall_64+0x33/0x40 [ 128.931887] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [ 128.936937] RIP: 0033:0x7f88792e3857 [ 128.940518] Code: c3 66 90 41 54 41 89 d4 55 48 89 f5 53 89 fb 48 83 ec 10 e8 0b ed ff ff 44 89 e2 48 89 ee 89 df 41 89 c0 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 35 44 89 c7 48 89 44 24 08 e8 44 ed ff ff 48 [ 128.959263] RSP: 002b:00007ffdca60dea0 EFLAGS: 00000293 ORIG_RAX: 000000000000002e [ 128.966827] RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007f88792e3857 [ 128.973960] RDX: 0000000000000000 RSI: 00007ffdca60def0 RDI: 000000000000000c [ 128.981095] RBP: 00007ffdca60def0 R08: 0000000000000000 R09: 0000000000000000 [ 128.988224] R10: 0000000000000001 R11: 0000000000000293 R12: 0000000000000000 [ 128.995357] R13: 0000000000000000 R14: 00007ffdca60e0a8 R15: 00007ffdca60e09c [ 129.002492] CPU: 23 PID: 1105 Comm: NetworkManager Tainted: G S I 5.10.0-rc1+ #59 [ 129.011093] Hardware name: Dell Inc. PowerEdge R440/04JN2K, BIOS 2.8.1 06/30/2020 [ 129.018571] Call Trace: [ 129.021027] dump_stack+0x57/0x6a [ 129.024346] __warn.cold.14+0xe/0x3d [ 129.027925] ? dma_map_page_attrs+0x14c/0x1d0 [ 129.032283] report_bug+0xbd/0xf0 [ 129.035602] handle_bug+0x44/0x80 [ 129.038922] exc_invalid_op+0x13/0x60 [ 129.042589] asm_exc_invalid_op+0x12/0x20 [ 129.046602] RIP: 0010:dma_map_page_attrs+0x14c/0x1d0 [ 129.051566] Code: 1c 25 28 00 00 00 0f 85 97 00 00 00 48 83 c4 10 5b 5d 41 5c 41 5d c3 4c 89 da eb d7 48 89 f2 48 2b 50 18 48 89 d0 eb 8d 0f 0b <0f> 0b 48 c7 c0 ff ff ff ff eb c3 48 89 d9 48 8b 40 40 e8 2d a0 aa [ 129.070311] RSP: 0018:ffffae0f0151f3c8 EFLAGS: 00010246 [ 129.075536] RAX: ffffffffc06b7400 RBX: 00000000000005fa RCX: 0000000000000000 [ 129.082669] RDX: 0000000000000040 RSI: ffffcee3c7861200 RDI: ffff9e2bc16cd000 [ 129.089803] RBP: 0000000000000000 R08: 0000000000000002 R09: 0000000000000000 [ 129.096936] R10: 0000000000000002 R11: 0000000000000000 R12: ffff9e472cb291f8 [ 129.104068] R13: ffff9e2bc14da780 R14: ffff9e472bc20000 R15: ffff9e2bc1b14940 [ 129.111200] virtqueue_add+0x81e/0xb00 [ 129.114952] virtqueue_add_inbuf_ctx+0x26/0x30 [ 129.119399] try_fill_recv+0x3a2/0x6e0 [virtio_net] [ 129.124280] virtnet_open+0xf9/0x180 [virtio_net] [ 129.128984] __dev_open+0xe8/0x180 [ 129.132390] __dev_change_flags+0x1a7/0x210 [ 129.136575] dev_change_flags+0x21/0x60 [ 129.140415] do_setlink+0x328/0x10e0 [ 129.143994] ? __nla_validate_parse+0x121/0x180 [ 129.148528] ? __nla_parse+0x21/0x30 [ 129.152107] ? inet6_validate_link_af+0x5c/0xf0 [ 129.156639] ? cpumask_next+0x17/0x20 [ 129.160306] ? __snmp6_fill_stats64.isra.54+0x6b/0x110 [ 129.165443] ? __nla_validate_parse+0x47/0x180 [ 129.169890] __rtnl_newlink+0x541/0x8e0 [ 129.173731] ? __nla_reserve+0x38/0x50 [ 129.177483] ? security_sock_rcv_skb+0x2a/0x40 [ 129.181928] ? netlink_deliver_tap+0x2c/0x1e0 [ 129.186286] ? 
netlink_attachskb+0x1d8/0x220 [ 129.190560] ? skb_queue_tail+0x1b/0x50 [ 129.194401] ? fib6_clean_node+0x43/0x170 [ 129.198411] ? _cond_resched+0x15/0x30 [ 129.202163] ? kmem_cache_alloc_trace+0x3a3/0x420 [ 129.206869] rtnl_newlink+0x43/0x60 [ 129.210361] rtnetlink_rcv_msg+0x12c/0x380 [ 129.214462] ? rtnl_calcit.isra.39+0x110/0x110 [ 129.218908] netlink_rcv_skb+0x50/0x100 [ 129.222747] netlink_unicast+0x1a5/0x280 [ 129.226672] netlink_sendmsg+0x23d/0x470 [ 129.230599] sock_sendmsg+0x5b/0x60 [ 129.234090] ____sys_sendmsg+0x1ef/0x260 [ 129.238015] ? copy_msghdr_from_user+0x5c/0x90 [ 129.242461] ___sys_sendmsg+0x7c/0xc0 [ 129.246128] ? dev_forward_change+0x130/0x130 [ 129.250487] ? sysctl_head_finish.part.29+0x24/0x40 [ 129.255368] ? new_sync_write+0x11f/0x1b0 [ 129.259381] ? mntput_no_expire+0x47/0x240 [ 129.263478] __sys_sendmsg+0x57/0xa0 [ 129.267058] do_syscall_64+0x33/0x40 [ 129.270639] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [ 129.275689] RIP: 0033:0x7f88792e3857 [ 129.279268] Code: c3 66 90 41 54 41 89 d4 55 48 89 f5 53 89 fb 48 83 ec 10 e8 0b ed ff ff 44 89 e2 48 89 ee 89 df 41 89 c0 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 35 44 89 c7 48 89 44 24 08 e8 44 ed ff ff 48 [ 129.298015] RSP: 002b:00007ffdca60dea0 EFLAGS: 00000293 ORIG_RAX: 000000000000002e [ 129.305581] RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007f88792e3857 [ 129.312712] RDX: 0000000000000000 RSI: 00007ffdca60def0 RDI: 000000000000000c [ 129.319846] RBP: 00007ffdca60def0 R08: 0000000000000000 R09: 0000000000000000 [ 129.326978] R10: 0000000000000001 R11: 0000000000000293 R12: 0000000000000000 [ 129.334109] R13: 0000000000000000 R14: 00007ffdca60e0a8 R15: 00007ffdca60e09c [ 129.341249] ---[ end trace c551e8028fbaf59d ]--- [ 129.351207] net eth0: Unexpected TXQ (0) queue failure: -12 [ 129.360445] net eth0: Unexpected TXQ (0) queue failure: -12 [ 129.824428] net eth0: Unexpected TXQ (0) queue failure: -12
Fixes: 2c53d0f64c06 ("vdpasim: vDPA device simulator") Signed-off-by: Laurent Vivier lvivier@redhat.com Link: https://lore.kernel.org/r/20201027175914.689278-1-lvivier@redhat.com Signed-off-by: Michael S. Tsirkin mst@redhat.com Cc: stable@vger.kernel.org Acked-by: Jason Wang jasowang@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/vdpa/vdpa_sim/vdpa_sim.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -362,7 +362,9 @@ static struct vdpasim *vdpasim_create(vo spin_lock_init(&vdpasim->iommu_lock);
dev = &vdpasim->vdpa.dev; - dev->coherent_dma_mask = DMA_BIT_MASK(64); + dev->dma_mask = &dev->coherent_dma_mask; + if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) + goto err_iommu; set_dma_ops(dev, &vdpasim_dma_ops);
vdpasim->iommu = vhost_iotlb_alloc(2048, 0);
From: Daniel Vetter daniel.vetter@ffwll.ch
commit f49a51bfdc8ea717c97ccd4cc98b7e6daaa5553a upstream.
When we forward an mmap to the dma_buf exporter, they get to own everything. Unfortunately drm_gem_mmap_obj() overwrote vma->vm_private_data after the driver callback, completely wrecking the exporter's setup. This was noticed because vb2_common_vm_close blew up on a Mali GPU with panfrost after commit 26d3ac3cb04d ("drm/shmem-helpers: Redirect mmap for imported dma-buf").
Unfortunately drm_gem_mmap_obj also acquires a surplus reference that we need to drop in the shmem helpers, which is a bit of a layering violation. Maybe the entire dma_buf_mmap forwarding should be pulled into core gem code.
Note that the only two other drivers which forward mmap in their own code (etnaviv and exynos) get this somewhat right by overwriting the gem mmap code. But they seem to still have the leak. This might be a good excuse to move these drivers over to shmem helpers completely.
Reviewed-by: Boris Brezillon boris.brezillon@collabora.com Acked-by: Christian König christian.koenig@amd.com Cc: Christian König christian.koenig@amd.com Cc: Sumit Semwal sumit.semwal@linaro.org Cc: Lucas Stach l.stach@pengutronix.de Cc: Russell King linux+etnaviv@armlinux.org.uk Cc: Christian Gmeiner christian.gmeiner@gmail.com Cc: Inki Dae inki.dae@samsung.com Cc: Joonyoung Shim jy0922.shim@samsung.com Cc: Seung-Woo Kim sw0312.kim@samsung.com Cc: Kyungmin Park kyungmin.park@samsung.com Fixes: 26d3ac3cb04d ("drm/shmem-helpers: Redirect mmap for imported dma-buf") Cc: Boris Brezillon boris.brezillon@collabora.com Cc: Thomas Zimmermann tzimmermann@suse.de Cc: Gerd Hoffmann kraxel@redhat.com Cc: Rob Herring robh@kernel.org Cc: dri-devel@lists.freedesktop.org Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org Cc: stable@vger.kernel.org # v5.9+ Reported-and-tested-by: piotr.oniszczuk@gmail.com Cc: piotr.oniszczuk@gmail.com Signed-off-by: Daniel Vetter daniel.vetter@intel.com Link: https://patchwork.freedesktop.org/patch/msgid/20201027214922.3566743-1-danie... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/drm_gem.c | 4 ++-- drivers/gpu/drm/drm_gem_shmem_helper.c | 7 ++++++- 2 files changed, 8 insertions(+), 3 deletions(-)
--- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1085,6 +1085,8 @@ int drm_gem_mmap_obj(struct drm_gem_obje */ drm_gem_object_get(obj);
+ vma->vm_private_data = obj; + if (obj->funcs && obj->funcs->mmap) { ret = obj->funcs->mmap(obj, vma); if (ret) { @@ -1107,8 +1109,6 @@ int drm_gem_mmap_obj(struct drm_gem_obje vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot); }
- vma->vm_private_data = obj; - return 0; } EXPORT_SYMBOL(drm_gem_mmap_obj); --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -594,8 +594,13 @@ int drm_gem_shmem_mmap(struct drm_gem_ob /* Remove the fake offset */ vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
- if (obj->import_attach) + if (obj->import_attach) { + /* Drop the reference drm_gem_mmap_obj() acquired.*/ + drm_gem_object_put(obj); + vma->vm_private_data = NULL; + return dma_buf_mmap(obj->dma_buf, vma, 0); + }
shmem = to_drm_gem_shmem_obj(obj);
From: Nuno Sá nuno.sa@analog.com
commit b07c47bfab6f5c4c7182d23e854bbceaf7829c85 upstream.
When returning or breaking early from a `for_each_available_child_of_node()` loop, we need to explicitly call `of_node_put()` on the child node to possibly release the node.
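For context, `for_each_available_child_of_node()` takes a reference on each child and drops it when advancing to the next one, so only early exits leak. A minimal sketch of the idiom follows; the function and property names are generic examples (kernel context, <linux/of.h> assumed), not the ltc2983 code:

static int count_addressable_children(struct device_node *np)
{
	struct device_node *child;
	u32 reg;
	int n = 0;

	for_each_available_child_of_node(np, child) {
		if (of_property_read_u32(child, "reg", &reg)) {
			/* Early exit: the iterator still holds a reference
			 * on 'child', so it has to be dropped by hand. */
			of_node_put(child);
			return -EINVAL;
		}
		n++;
	}
	/* Normal loop termination already dropped the last reference. */
	return n;
}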
Fixes: f110f3188e563 ("iio: temperature: Add support for LTC2983") Signed-off-by: Nuno Sá nuno.sa@analog.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200925091045.302-1-nuno.sa@analog.com Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/iio/temperature/ltc2983.c | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-)
--- a/drivers/iio/temperature/ltc2983.c +++ b/drivers/iio/temperature/ltc2983.c @@ -1285,18 +1285,20 @@ static int ltc2983_parse_dt(struct ltc29 ret = of_property_read_u32(child, "reg", &sensor.chan); if (ret) { dev_err(dev, "reg property must given for child nodes\n"); - return ret; + goto put_child; }
/* check if we have a valid channel */ if (sensor.chan < LTC2983_MIN_CHANNELS_NR || sensor.chan > LTC2983_MAX_CHANNELS_NR) { + ret = -EINVAL; dev_err(dev, "chan:%d must be from 1 to 20\n", sensor.chan); - return -EINVAL; + goto put_child; } else if (channel_avail_mask & BIT(sensor.chan)) { + ret = -EINVAL; dev_err(dev, "chan:%d already in use\n", sensor.chan); - return -EINVAL; + goto put_child; }
ret = of_property_read_u32(child, "adi,sensor-type", @@ -1304,7 +1306,7 @@ static int ltc2983_parse_dt(struct ltc29 if (ret) { dev_err(dev, "adi,sensor-type property must given for child nodes\n"); - return ret; + goto put_child; }
dev_dbg(dev, "Create new sensor, type %u, chann %u", @@ -1334,13 +1336,15 @@ static int ltc2983_parse_dt(struct ltc29 st->sensors[chan] = ltc2983_adc_new(child, st, &sensor); } else { dev_err(dev, "Unknown sensor type %d\n", sensor.type); - return -EINVAL; + ret = -EINVAL; + goto put_child; }
if (IS_ERR(st->sensors[chan])) { dev_err(dev, "Failed to create sensor %ld", PTR_ERR(st->sensors[chan])); - return PTR_ERR(st->sensors[chan]); + ret = PTR_ERR(st->sensors[chan]); + goto put_child; } /* set generic sensor parameters */ st->sensors[chan]->chan = sensor.chan; @@ -1351,6 +1355,9 @@ static int ltc2983_parse_dt(struct ltc29 }
return 0; +put_child: + of_node_put(child); + return ret; }
static int ltc2983_setup(struct ltc2983_data *st, bool assign_iio)
From: Eugen Hristev eugen.hristev@microchip.com
commit 1a198794451449113fa86994ed491d6986802c23 upstream.
After the move of the postenable code to preenable, the DMA start was done before the DMA init, which is not correct. The DMA is initialized in set_watermark. Because of this, we need to call the DMA start functions in set_watermark, after the DMA init, instead of in the preenable hook, where the DMA is not yet properly set up.
Fixes: f3c034f61775 ("iio: at91-sama5d2_adc: adjust iio_triggered_buffer_{predisable,postenable} positions") Signed-off-by: Eugen Hristev eugen.hristev@microchip.com Link: https://lore.kernel.org/r/20200923121748.49384-1-eugen.hristev@microchip.com Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/iio/adc/at91-sama5d2_adc.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-)
--- a/drivers/iio/adc/at91-sama5d2_adc.c +++ b/drivers/iio/adc/at91-sama5d2_adc.c @@ -884,7 +884,7 @@ static bool at91_adc_current_chan_is_tou AT91_SAMA5D2_MAX_CHAN_IDX + 1); }
-static int at91_adc_buffer_preenable(struct iio_dev *indio_dev) +static int at91_adc_buffer_prepare(struct iio_dev *indio_dev) { int ret; u8 bit; @@ -901,7 +901,7 @@ static int at91_adc_buffer_preenable(str /* we continue with the triggered buffer */ ret = at91_adc_dma_start(indio_dev); if (ret) { - dev_err(&indio_dev->dev, "buffer postenable failed\n"); + dev_err(&indio_dev->dev, "buffer prepare failed\n"); return ret; }
@@ -989,7 +989,6 @@ static int at91_adc_buffer_postdisable(s }
static const struct iio_buffer_setup_ops at91_buffer_setup_ops = { - .preenable = &at91_adc_buffer_preenable, .postdisable = &at91_adc_buffer_postdisable, };
@@ -1563,6 +1562,7 @@ static void at91_adc_dma_disable(struct static int at91_adc_set_watermark(struct iio_dev *indio_dev, unsigned int val) { struct at91_adc_state *st = iio_priv(indio_dev); + int ret;
if (val > AT91_HWFIFO_MAX_SIZE) return -EINVAL; @@ -1586,7 +1586,15 @@ static int at91_adc_set_watermark(struct else if (val > 1) at91_adc_dma_init(to_platform_device(&indio_dev->dev));
- return 0; + /* + * We can start the DMA only after setting the watermark and + * having the DMA initialization completed + */ + ret = at91_adc_buffer_prepare(indio_dev); + if (ret) + at91_adc_dma_disable(to_platform_device(&indio_dev->dev)); + + return ret; }
static int at91_adc_update_scan_mode(struct iio_dev *indio_dev,
From: Jonathan Cameron Jonathan.Cameron@huawei.com
commit 6b0cc5dce0725ae8f1a2883514da731c55eeb35e upstream.
This case is a bit different to the rest of the series. The driver was doing a regmap_bulk_read into a buffer that wasn't DMA safe as it was on the stack with no guarantee of it being in a cacheline on its own. Fixing that also dealt with the data leak and alignment issues that Lars-Peter pointed out.
Also removed some unaligned handling as we are now aligned.
The Fixes tag is for the DMA-safe buffer issue. Potentially we would need to backport the timestamp alignment further, but that is a totally different patch.
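The general rule being applied, sketched with illustrative names (this is not the inv_mpu6050 structure): buffers handed to regmap/SPI bulk transfers may end up being used for DMA, so they must not live on the stack and should not share a cacheline with other data, which is what placing a cacheline-aligned buffer at the end of the kzalloc'd driver state achieves.

struct my_imu_state {
	struct regmap *map;
	struct mutex lock;
	/* ... other driver state ... */

	/*
	 * DMA-safe scratch buffer: the state is heap allocated (kzalloc
	 * via iio_device_alloc), and keeping this member last and
	 * cacheline aligned stops other fields sharing its cachelines.
	 */
	u8 data[32] ____cacheline_aligned;
};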
Fixes: fd64df16f40e ("iio: imu: inv_mpu6050: Add SPI support for MPU6000") Reported-by: Lars-Peter Clausen lars@metafoo.de Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Reviewed-by: Andy Shevchenko andy.shevchenko@gmail.com Reviewed-by: Jean-Baptiste Maneyrol jmaneyrol@invensense.com Cc: Stable@vger.kernel.org Link: https://lore.kernel.org/r/20200722155103.979802-18-jic23@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h | 12 +++++++++--- drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c | 12 +++++------- 2 files changed, 14 insertions(+), 10 deletions(-)
--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h +++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h @@ -122,6 +122,13 @@ struct inv_mpu6050_chip_config { u8 user_ctrl; };
+/* + * Maximum of 6 + 6 + 2 + 7 (for MPU9x50) = 21 round up to 24 and plus 8. + * May be less if fewer channels are enabled, as long as the timestamp + * remains 8 byte aligned + */ +#define INV_MPU6050_OUTPUT_DATA_SIZE 32 + /** * struct inv_mpu6050_hw - Other important hardware information. * @whoami: Self identification byte from WHO_AM_I register @@ -165,6 +172,7 @@ struct inv_mpu6050_hw { * @magn_raw_to_gauss: coefficient to convert mag raw value to Gauss. * @magn_orient: magnetometer sensor chip orientation if available. * @suspended_sensors: sensors mask of sensors turned off for suspend + * @data: dma safe buffer used for bulk reads. */ struct inv_mpu6050_state { struct mutex lock; @@ -190,6 +198,7 @@ struct inv_mpu6050_state { s32 magn_raw_to_gauss[3]; struct iio_mount_matrix magn_orient; unsigned int suspended_sensors; + u8 data[INV_MPU6050_OUTPUT_DATA_SIZE] ____cacheline_aligned; };
/*register and associated bit definition*/ @@ -334,9 +343,6 @@ struct inv_mpu6050_state { #define INV_ICM20608_TEMP_OFFSET 8170 #define INV_ICM20608_TEMP_SCALE 3059976
-/* 6 + 6 + 2 + 7 (for MPU9x50) = 21 round up to 24 and plus 8 */ -#define INV_MPU6050_OUTPUT_DATA_SIZE 32 - #define INV_MPU6050_REG_INT_PIN_CFG 0x37 #define INV_MPU6050_ACTIVE_HIGH 0x00 #define INV_MPU6050_ACTIVE_LOW 0x80 --- a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c +++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c @@ -13,7 +13,6 @@ #include <linux/interrupt.h> #include <linux/poll.h> #include <linux/math64.h> -#include <asm/unaligned.h> #include "inv_mpu_iio.h"
/** @@ -121,7 +120,6 @@ irqreturn_t inv_mpu6050_read_fifo(int ir struct inv_mpu6050_state *st = iio_priv(indio_dev); size_t bytes_per_datum; int result; - u8 data[INV_MPU6050_OUTPUT_DATA_SIZE]; u16 fifo_count; s64 timestamp; int int_status; @@ -160,11 +158,11 @@ irqreturn_t inv_mpu6050_read_fifo(int ir * read fifo_count register to know how many bytes are inside the FIFO * right now */ - result = regmap_bulk_read(st->map, st->reg->fifo_count_h, data, - INV_MPU6050_FIFO_COUNT_BYTE); + result = regmap_bulk_read(st->map, st->reg->fifo_count_h, + st->data, INV_MPU6050_FIFO_COUNT_BYTE); if (result) goto end_session; - fifo_count = get_unaligned_be16(&data[0]); + fifo_count = be16_to_cpup((__be16 *)&st->data[0]);
/* * Handle fifo overflow by resetting fifo. @@ -182,7 +180,7 @@ irqreturn_t inv_mpu6050_read_fifo(int ir inv_mpu6050_update_period(st, pf->timestamp, nb); for (i = 0; i < nb; ++i) { result = regmap_bulk_read(st->map, st->reg->fifo_r_w, - data, bytes_per_datum); + st->data, bytes_per_datum); if (result) goto flush_fifo; /* skip first samples if needed */ @@ -191,7 +189,7 @@ irqreturn_t inv_mpu6050_read_fifo(int ir continue; } timestamp = inv_mpu6050_get_timestamp(st); - iio_push_to_buffers_with_timestamp(indio_dev, data, timestamp); + iio_push_to_buffers_with_timestamp(indio_dev, st->data, timestamp); }
end_session:
From: Tom Rix trix@redhat.com
commit f71e41e23e129640f620b65fc362a6da02580310 upstream.
A potential error return is not checked. This can lead to use of undefined data.
Detected by clang static analysis.
st_lsm6dsx_shub.c:540:8: warning: Assigned value is garbage or undefined *val = (s16)le16_to_cpu(*((__le16 *)data)); ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Fixes: c91c1c844ebd ("iio: imu: st_lsm6dsx: add i2c embedded controller support") Signed-off-by: Tom Rix trix@redhat.com Cc: Stable@vger.kernel.org Reviewed-by: Andy Shevchenko andy.shevchenko@gmail.com Link: https://lore.kernel.org/r/20200809175551.6794-1-trix@redhat.com Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c +++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c @@ -313,6 +313,8 @@ st_lsm6dsx_shub_read(struct st_lsm6dsx_s
err = st_lsm6dsx_shub_read_output(hw, data, len & ST_LS6DSX_READ_OP_MASK); + if (err < 0) + return err;
st_lsm6dsx_shub_master_enable(sensor, false);
From: Jonathan Cameron Jonathan.Cameron@huawei.com
commit 0456ecf34d466261970e0ff92b2b9c78a4908637 upstream.
One of a class of bugs pointed out by Lars in a recent review. iio_push_to_buffers_with_timestamp assumes the buffer used is aligned to the size of the timestamp (8 bytes). This is not guaranteed in this driver, which uses a 24 byte array of smaller elements on the stack. As Lars also noted, this anti-pattern can involve a leak of data to userspace, and that indeed can happen here. We close both issues by moving to a suitable array in the iio_priv() data with alignment explicitly requested. This data is allocated with kzalloc so no data can leak apart from previous readings.
Depending on the enabled channels, the location of the timestamp can be at various aligned offsets through the buffer. As such, any use of a structure to enforce this alignment would incorrectly suggest a single location for the timestamp. Comments adjusted to express this clearly in the code.
Fixes: ac45e57f1590 ("iio: light: Add driver for Silabs si1132, si1141/2/3 and si1145/6/7 ambient light, uv index and proximity sensors") Reported-by: Lars-Peter Clausen lars@metafoo.de Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Reviewed-by: Andy Shevchenko andy.shevchenko@gmail.com Cc: Peter Meerwald-Stadler pmeerw@pmeerw.net Cc: Stable@vger.kernel.org Link: https://lore.kernel.org/r/20200722155103.979802-9-jic23@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/iio/light/si1145.c | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-)
--- a/drivers/iio/light/si1145.c +++ b/drivers/iio/light/si1145.c @@ -168,6 +168,7 @@ struct si1145_part_info { * @part_info: Part information * @trig: Pointer to iio trigger * @meas_rate: Value of MEAS_RATE register. Only set in HW in auto mode + * @buffer: Used to pack data read from sensor. */ struct si1145_data { struct i2c_client *client; @@ -179,6 +180,14 @@ struct si1145_data { bool autonomous; struct iio_trigger *trig; int meas_rate; + /* + * Ensure timestamp will be naturally aligned if present. + * Maximum buffer size (may be only partly used if not all + * channels are enabled): + * 6*2 bytes channels data + 4 bytes alignment + + * 8 bytes timestamp + */ + u8 buffer[24] __aligned(8); };
/* @@ -440,12 +449,6 @@ static irqreturn_t si1145_trigger_handle struct iio_poll_func *pf = private; struct iio_dev *indio_dev = pf->indio_dev; struct si1145_data *data = iio_priv(indio_dev); - /* - * Maximum buffer size: - * 6*2 bytes channels data + 4 bytes alignment + - * 8 bytes timestamp - */ - u8 buffer[24]; int i, j = 0; int ret; u8 irq_status = 0; @@ -478,7 +481,7 @@ static irqreturn_t si1145_trigger_handle
ret = i2c_smbus_read_i2c_block_data_or_emulated( data->client, indio_dev->channels[i].address, - sizeof(u16) * run, &buffer[j]); + sizeof(u16) * run, &data->buffer[j]); if (ret < 0) goto done; j += run * sizeof(u16); @@ -493,7 +496,7 @@ static irqreturn_t si1145_trigger_handle goto done; }
- iio_push_to_buffers_with_timestamp(indio_dev, buffer, + iio_push_to_buffers_with_timestamp(indio_dev, data->buffer, iio_get_time_ns(indio_dev));
done:
From: Tobias Jordan kernel@cdqe.de
commit da4410d4078ba4ead9d6f1027d6db77c5a74ecee upstream.
Add missing of_node_put calls when exiting the for_each_child_of_node loop in rcar_gyroadc_parse_subdevs early.
Also add goto-exception handling for the error paths in that loop.
Fixes: 059c53b32329 ("iio: adc: Add Renesas GyroADC driver") Signed-off-by: Tobias Jordan kernel@cdqe.de Link: https://lore.kernel.org/r/20200926161946.GA10240@agrajag.zerfleddert.de Cc: Stable@vger.kernel.org Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/iio/adc/rcar-gyroadc.c | 21 +++++++++++++++------ 1 file changed, 15 insertions(+), 6 deletions(-)
--- a/drivers/iio/adc/rcar-gyroadc.c +++ b/drivers/iio/adc/rcar-gyroadc.c @@ -357,7 +357,7 @@ static int rcar_gyroadc_parse_subdevs(st num_channels = ARRAY_SIZE(rcar_gyroadc_iio_channels_3); break; default: - return -EINVAL; + goto err_e_inval; }
/* @@ -374,7 +374,7 @@ static int rcar_gyroadc_parse_subdevs(st dev_err(dev, "Failed to get child reg property of ADC \"%pOFn\".\n", child); - return ret; + goto err_of_node_put; }
/* Channel number is too high. */ @@ -382,7 +382,7 @@ static int rcar_gyroadc_parse_subdevs(st dev_err(dev, "Only %i channels supported with %pOFn, but reg = <%i>.\n", num_channels, child, reg); - return -EINVAL; + goto err_e_inval; } }
@@ -391,7 +391,7 @@ static int rcar_gyroadc_parse_subdevs(st dev_err(dev, "Channel %i uses different ADC mode than the rest.\n", reg); - return -EINVAL; + goto err_e_inval; }
/* Channel is valid, grab the regulator. */ @@ -401,7 +401,8 @@ static int rcar_gyroadc_parse_subdevs(st if (IS_ERR(vref)) { dev_dbg(dev, "Channel %i 'vref' supply not connected.\n", reg); - return PTR_ERR(vref); + ret = PTR_ERR(vref); + goto err_of_node_put; }
priv->vref[reg] = vref; @@ -425,8 +426,10 @@ static int rcar_gyroadc_parse_subdevs(st * attached to the GyroADC at a time, so if we found it, * we can stop parsing here. */ - if (childmode == RCAR_GYROADC_MODE_SELECT_1_MB88101A) + if (childmode == RCAR_GYROADC_MODE_SELECT_1_MB88101A) { + of_node_put(child); break; + } }
if (first) { @@ -435,6 +438,12 @@ static int rcar_gyroadc_parse_subdevs(st }
return 0; + +err_e_inval: + ret = -EINVAL; +err_of_node_put: + of_node_put(child); + return ret; }
static void rcar_gyroadc_deinit_supplies(struct iio_dev *indio_dev)
From: Nuno Sá nuno.sa@analog.com
commit b8a533f3c24b3b8f1fdbefc5ada6a7d5733d63e6 upstream.
When returning or breaking early from a `for_each_available_child_of_node()` loop, we need to explicitly call `of_node_put()` on the child node to possibly release the node.
Fixes: 506d2e317a0a0 ("iio: adc: Add driver support for AD7292") Signed-off-by: Nuno Sá nuno.sa@analog.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200925091045.302-2-nuno.sa@analog.com Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/iio/adc/ad7292.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/iio/adc/ad7292.c +++ b/drivers/iio/adc/ad7292.c @@ -310,8 +310,10 @@ static int ad7292_probe(struct spi_devic
for_each_available_child_of_node(spi->dev.of_node, child) { diff_channels = of_property_read_bool(child, "diff-channels"); - if (diff_channels) + if (diff_channels) { + of_node_put(child); break; + } }
if (diff_channels) {
From: Jonathan Cameron Jonathan.Cameron@huawei.com
commit 39e91f3be4cba51c1560bcda3a343ed1f64dc916 upstream.
One of a class of bugs pointed out by Lars in a recent review. iio_push_to_buffers_with_timestamp assumes the buffer used is aligned to the size of the timestamp (8 bytes). This is not guaranteed in this driver which uses an array of smaller elements on the stack.
We fix this issue by moving to a suitable structure in the iio_priv() data with alignment explicitly requested. This data is allocated with kzalloc so no data can leak apart from previous readings. Note that previously no data could leak at all, including previous readings, but potentially leaking previous readings in this way is not considered an issue.
In this case the positioning of the timestamp depends on what other channels are enabled. As such we cannot use a structure to make the alignment explicit, as it would be misleading by suggesting only one possible location for the timestamp.
Fixes: 815bbc87462a ("iio: ti-adc0832: add triggered buffer support") Reported-by: Lars-Peter Clausen lars@metafoo.de Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Reviewed-by: Andy Shevchenko andy.shevchenko@gmail.com Cc: Akinobu Mita akinobu.mita@gmail.com Cc: Stable@vger.kernel.org Link: https://lore.kernel.org/r/20200722155103.979802-25-jic23@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/iio/adc/ti-adc0832.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-)
--- a/drivers/iio/adc/ti-adc0832.c +++ b/drivers/iio/adc/ti-adc0832.c @@ -29,6 +29,12 @@ struct adc0832 { struct regulator *reg; struct mutex lock; u8 mux_bits; + /* + * Max size needed: 16x 1 byte ADC data + 8 bytes timestamp + * May be shorter if not all channels are enabled subject + * to the timestamp remaining 8 byte aligned. + */ + u8 data[24] __aligned(8);
u8 tx_buf[2] ____cacheline_aligned; u8 rx_buf[2]; @@ -200,7 +206,6 @@ static irqreturn_t adc0832_trigger_handl struct iio_poll_func *pf = p; struct iio_dev *indio_dev = pf->indio_dev; struct adc0832 *adc = iio_priv(indio_dev); - u8 data[24] = { }; /* 16x 1 byte ADC data + 8 bytes timestamp */ int scan_index; int i = 0;
@@ -218,10 +223,10 @@ static irqreturn_t adc0832_trigger_handl goto out; }
- data[i] = ret; + adc->data[i] = ret; i++; } - iio_push_to_buffers_with_timestamp(indio_dev, data, + iio_push_to_buffers_with_timestamp(indio_dev, adc->data, iio_get_time_ns(indio_dev)); out: mutex_unlock(&adc->lock);
From: Jonathan Cameron Jonathan.Cameron@huawei.com
commit 293e809b2e8e608b65a949101aaf7c0bd1224247 upstream.
One of a class of bugs pointed out by Lars in a recent review. iio_push_to_buffers_with_timestamp assumes the buffer used is aligned to the size of the timestamp (8 bytes). This is not guaranteed in this driver which uses an array of smaller elements on the stack.
We move to a suitable structure in the iio_priv() data with alignment explicitly requested. This data is allocated with kzalloc so no data can leak apart from previous readings. Note that previously no leak at all could occur, but previous readings should never be a problem.
In this case the timestamp location depends on what other channels are enabled. As such we can't use a structure without misleading by suggesting only one possible timestamp location.
Fixes: 50a6edb1b6e0 ("iio: adc: add ADC12130/ADC12132/ADC12138 ADC driver") Reported-by: Lars-Peter Clausen lars@metafoo.de Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Reviewed-by: Andy Shevchenko andy.shevchenko@gmail.com Cc: Akinobu Mita akinobu.mita@gmail.com Cc: Stable@vger.kernel.org Link: https://lore.kernel.org/r/20200722155103.979802-26-jic23@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/iio/adc/ti-adc12138.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-)
--- a/drivers/iio/adc/ti-adc12138.c +++ b/drivers/iio/adc/ti-adc12138.c @@ -47,6 +47,12 @@ struct adc12138 { struct completion complete; /* The number of cclk periods for the S/H's acquisition time */ unsigned int acquisition_time; + /* + * Maximum size needed: 16x 2 bytes ADC data + 8 bytes timestamp. + * Less may be need if not all channels are enabled, as long as + * the 8 byte alignment of the timestamp is maintained. + */ + __be16 data[20] __aligned(8);
u8 tx_buf[2] ____cacheline_aligned; u8 rx_buf[2]; @@ -329,7 +335,6 @@ static irqreturn_t adc12138_trigger_hand struct iio_poll_func *pf = p; struct iio_dev *indio_dev = pf->indio_dev; struct adc12138 *adc = iio_priv(indio_dev); - __be16 data[20] = { }; /* 16x 2 bytes ADC data + 8 bytes timestamp */ __be16 trash; int ret; int scan_index; @@ -345,7 +350,7 @@ static irqreturn_t adc12138_trigger_hand reinit_completion(&adc->complete);
ret = adc12138_start_and_read_conv(adc, scan_chan, - i ? &data[i - 1] : &trash); + i ? &adc->data[i - 1] : &trash); if (ret) { dev_warn(&adc->spi->dev, "failed to start conversion\n"); @@ -362,7 +367,7 @@ static irqreturn_t adc12138_trigger_hand }
if (i) { - ret = adc12138_read_conv_data(adc, &data[i - 1]); + ret = adc12138_read_conv_data(adc, &adc->data[i - 1]); if (ret) { dev_warn(&adc->spi->dev, "failed to get conversion data\n"); @@ -370,7 +375,7 @@ static irqreturn_t adc12138_trigger_hand } }
- iio_push_to_buffers_with_timestamp(indio_dev, data, + iio_push_to_buffers_with_timestamp(indio_dev, adc->data, iio_get_time_ns(indio_dev)); out: mutex_unlock(&adc->lock);
From: Jonathan Cameron Jonathan.Cameron@huawei.com
commit c14edb4d0bdc53f969ea84c7f384472c28b1a9f8 upstream.
One of a class of bugs pointed out by Lars in a recent review. iio_push_to_buffers_with_timestamp assumes the buffer used is aligned to the size of the timestamp (8 bytes). This is not guaranteed in this driver which uses an array of smaller elements on the stack. As Lars also noted this anti pattern can involve a leak of data to userspace and that indeed can happen here. We close both issues by moving to an array of suitable structures in the iio_priv() data.
This data is allocated with kzalloc so no data can leak apart from previous readings.
For the tagged path the data is aligned by using __aligned(8) for the buffer on the stack.
There has been a lot of churn in this driver, so likely backports may be needed for stable.
Fixes: 290a6ce11d93 ("iio: imu: add support to lsm6dsx driver") Reported-by: Lars-Peter Clausen lars@metafoo.de Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Reviewed-by: Andy Shevchenko andy.shevchenko@gmail.com Cc: Lorenzo Bianconi lorenzo.bianconi83@gmail.com Cc: Stable@vger.kernel.org Link: https://lore.kernel.org/r/20200722155103.979802-17-jic23@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h | 6 +++ drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c | 42 +++++++++++++++---------- 2 files changed, 32 insertions(+), 16 deletions(-)
--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h +++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h @@ -383,6 +383,7 @@ struct st_lsm6dsx_sensor { * @iio_devs: Pointers to acc/gyro iio_dev instances. * @settings: Pointer to the specific sensor settings in use. * @orientation: sensor chip orientation relative to main hardware. + * @scan: Temporary buffers used to align data before iio_push_to_buffers() */ struct st_lsm6dsx_hw { struct device *dev; @@ -411,6 +412,11 @@ struct st_lsm6dsx_hw { const struct st_lsm6dsx_settings *settings;
struct iio_mount_matrix orientation; + /* Ensure natural alignment of buffer elements */ + struct { + __le16 channels[3]; + s64 ts __aligned(8); + } scan[3]; };
static __maybe_unused const struct iio_event_spec st_lsm6dsx_event = { --- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c +++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c @@ -353,9 +353,6 @@ int st_lsm6dsx_read_fifo(struct st_lsm6d int err, sip, acc_sip, gyro_sip, ts_sip, ext_sip, read_len, offset; u16 fifo_len, pattern_len = hw->sip * ST_LSM6DSX_SAMPLE_SIZE; u16 fifo_diff_mask = hw->settings->fifo_ops.fifo_diff.mask; - u8 gyro_buff[ST_LSM6DSX_IIO_BUFF_SIZE]; - u8 acc_buff[ST_LSM6DSX_IIO_BUFF_SIZE]; - u8 ext_buff[ST_LSM6DSX_IIO_BUFF_SIZE]; bool reset_ts = false; __le16 fifo_status; s64 ts = 0; @@ -416,19 +413,22 @@ int st_lsm6dsx_read_fifo(struct st_lsm6d
while (acc_sip > 0 || gyro_sip > 0 || ext_sip > 0) { if (gyro_sip > 0 && !(sip % gyro_sensor->decimator)) { - memcpy(gyro_buff, &hw->buff[offset], - ST_LSM6DSX_SAMPLE_SIZE); - offset += ST_LSM6DSX_SAMPLE_SIZE; + memcpy(hw->scan[ST_LSM6DSX_ID_GYRO].channels, + &hw->buff[offset], + sizeof(hw->scan[ST_LSM6DSX_ID_GYRO].channels)); + offset += sizeof(hw->scan[ST_LSM6DSX_ID_GYRO].channels); } if (acc_sip > 0 && !(sip % acc_sensor->decimator)) { - memcpy(acc_buff, &hw->buff[offset], - ST_LSM6DSX_SAMPLE_SIZE); - offset += ST_LSM6DSX_SAMPLE_SIZE; + memcpy(hw->scan[ST_LSM6DSX_ID_ACC].channels, + &hw->buff[offset], + sizeof(hw->scan[ST_LSM6DSX_ID_ACC].channels)); + offset += sizeof(hw->scan[ST_LSM6DSX_ID_ACC].channels); } if (ext_sip > 0 && !(sip % ext_sensor->decimator)) { - memcpy(ext_buff, &hw->buff[offset], - ST_LSM6DSX_SAMPLE_SIZE); - offset += ST_LSM6DSX_SAMPLE_SIZE; + memcpy(hw->scan[ST_LSM6DSX_ID_EXT0].channels, + &hw->buff[offset], + sizeof(hw->scan[ST_LSM6DSX_ID_EXT0].channels)); + offset += sizeof(hw->scan[ST_LSM6DSX_ID_EXT0].channels); }
if (ts_sip-- > 0) { @@ -458,19 +458,22 @@ int st_lsm6dsx_read_fifo(struct st_lsm6d if (gyro_sip > 0 && !(sip % gyro_sensor->decimator)) { iio_push_to_buffers_with_timestamp( hw->iio_devs[ST_LSM6DSX_ID_GYRO], - gyro_buff, gyro_sensor->ts_ref + ts); + &hw->scan[ST_LSM6DSX_ID_GYRO], + gyro_sensor->ts_ref + ts); gyro_sip--; } if (acc_sip > 0 && !(sip % acc_sensor->decimator)) { iio_push_to_buffers_with_timestamp( hw->iio_devs[ST_LSM6DSX_ID_ACC], - acc_buff, acc_sensor->ts_ref + ts); + &hw->scan[ST_LSM6DSX_ID_ACC], + acc_sensor->ts_ref + ts); acc_sip--; } if (ext_sip > 0 && !(sip % ext_sensor->decimator)) { iio_push_to_buffers_with_timestamp( hw->iio_devs[ST_LSM6DSX_ID_EXT0], - ext_buff, ext_sensor->ts_ref + ts); + &hw->scan[ST_LSM6DSX_ID_EXT0], + ext_sensor->ts_ref + ts); ext_sip--; } sip++; @@ -555,7 +558,14 @@ int st_lsm6dsx_read_tagged_fifo(struct s { u16 pattern_len = hw->sip * ST_LSM6DSX_TAGGED_SAMPLE_SIZE; u16 fifo_len, fifo_diff_mask; - u8 iio_buff[ST_LSM6DSX_IIO_BUFF_SIZE], tag; + /* + * Alignment needed as this can ultimately be passed to a + * call to iio_push_to_buffers_with_timestamp() which + * must be passed a buffer that is aligned to 8 bytes so + * as to allow insertion of a naturally aligned timestamp. + */ + u8 iio_buff[ST_LSM6DSX_IIO_BUFF_SIZE] __aligned(8); + u8 tag; bool reset_ts = false; int i, err, read_len; __le16 fifo_status;
From: Jonathan Cameron Jonathan.Cameron@huawei.com
commit 10ab7cfd5522f0041028556dac864a003e158556 upstream.
One of a class of bugs pointed out by Lars in a recent review. iio_push_to_buffers_with_timestamp assumes the buffer used is aligned to the size of the timestamp (8 bytes). This is not guaranteed in this driver which uses a 16 byte array of smaller elements on the stack. This is fixed by using an explicit C structure. As there are no holes in the structure, there is no possibility of data leakage in this case.
The explicit alignment of ts is not strictly necessary but potentially makes the code slightly less fragile. It also removes the possibility of this being cut and pasted into another driver where the alignment isn't already guaranteed.
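As an editorial illustration of the before/after shape (the element count here is made up; the real change is in the diff below):

/* Before: an array of __be16 is only guaranteed 2-byte alignment, so the
 * s64 timestamp appended by iio_push_to_buffers_with_timestamp() could be
 * written to a misaligned address on some architectures. */
__be16 buf[4 + sizeof(s64) / sizeof(u16)];

/* After: one structure holding the samples plus an explicitly 8-byte
 * aligned timestamp slot; with four 2-byte samples there is no padding
 * hole, so no uninitialized bytes can reach userspace. */
struct {
	__be16 buf[4];
	s64 ts __aligned(8);
} scan;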
Fixes: 36e0371e7764 ("iio:itg3200: Use iio_push_to_buffers_with_timestamp()") Reported-by: Lars-Peter Clausen lars@metafoo.de Signed-off-by: Jonathan Cameron Jonathan.Cameron@huawei.com Reviewed-by: Andy Shevchenko andy.shevchenko@gmail.com Cc: Stable@vger.kernel.org Link: https://lore.kernel.org/r/20200722155103.979802-6-jic23@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/iio/gyro/itg3200_buffer.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-)
--- a/drivers/iio/gyro/itg3200_buffer.c +++ b/drivers/iio/gyro/itg3200_buffer.c @@ -46,13 +46,20 @@ static irqreturn_t itg3200_trigger_handl struct iio_poll_func *pf = p; struct iio_dev *indio_dev = pf->indio_dev; struct itg3200 *st = iio_priv(indio_dev); - __be16 buf[ITG3200_SCAN_ELEMENTS + sizeof(s64)/sizeof(u16)]; + /* + * Ensure correct alignment and padding including for the + * timestamp that may be inserted. + */ + struct { + __be16 buf[ITG3200_SCAN_ELEMENTS]; + s64 ts __aligned(8); + } scan;
- int ret = itg3200_read_all_channels(st->i2c, buf); + int ret = itg3200_read_all_channels(st->i2c, scan.buf); if (ret < 0) goto error_ret;
- iio_push_to_buffers_with_timestamp(indio_dev, buf, pf->timestamp); + iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);
iio_trigger_notify_done(indio_dev->trig);
From: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com
commit ec72024e35dddb88a81e40071c87ceb18b5ee835 upstream.
Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block panic") make sure different variables tracking lmb_size are updated to be 64 bit.
This was found by code audit.
Cc: stable@vger.kernel.org Signed-off-by: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com Acked-by: Nathan Lynch nathanl@linux.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20201007114836.282468-2-aneesh.kumar@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/include/asm/drmem.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/powerpc/include/asm/drmem.h +++ b/arch/powerpc/include/asm/drmem.h @@ -20,7 +20,7 @@ struct drmem_lmb { struct drmem_lmb_info { struct drmem_lmb *lmbs; int n_lmbs; - u32 lmb_size; + u64 lmb_size; };
extern struct drmem_lmb_info *drmem_info; @@ -80,7 +80,7 @@ struct of_drconf_cell_v2 { #define DRCONF_MEM_RESERVED 0x00000080 #define DRCONF_MEM_HOTREMOVABLE 0x00000100
-static inline u32 drmem_lmb_size(void) +static inline u64 drmem_lmb_size(void) { return drmem_info->lmb_size; }
From: Paul E. McKenney paulmck@kernel.org
commit ba3a86e47232ad9f76160929f33ac9c64e4d0567 upstream.
The more intense grace-period processing resulting from the 50x RCU Tasks Trace grace-period speedups exposed the following race condition:
o Task A running on CPU 0 executes rcu_read_lock_trace(), entering a read-side critical section.
o When Task A eventually invokes rcu_read_unlock_trace() to exit its read-side critical section, this function notes that the ->trc_reader_special.s flag is zero and therefore will set ->trc_reader_nesting to zero using WRITE_ONCE(). But before that happens...
o The RCU Tasks Trace grace-period kthread running on some other CPU interrogates Task A, but this fails because this task is currently running. This kthread therefore sends an IPI to CPU 0.
o CPU 0 receives the IPI, and thus invokes trc_read_check_handler(). Because Task A has not yet cleared its ->trc_reader_nesting counter, this function sees that Task A is still within its read-side critical section. This function therefore sets the ->trc_reader_nesting.b.need_qs flag, AKA the .need_qs flag.
Except that Task A has already checked the .need_qs flag, which is part of the ->trc_reader_special.s flag. The .need_qs flag therefore remains set until Task A's next rcu_read_unlock_trace().
o Task A now invokes synchronize_rcu_tasks_trace(), which cannot start a new grace period until the current grace period completes. And thus cannot return until after that time.
But Task A's .need_qs flag is still set, which prevents the current grace period from completing. And because Task A is blocked, it will never execute rcu_read_unlock_trace() until its call to synchronize_rcu_tasks_trace() returns.
We are therefore deadlocked.
This race is improbable, but 80 hours of rcutorture made it happen twice. The race was possible before the grace-period speedup, but roughly 50x less probable. Several thousand hours of rcutorture would have been necessary to have a reasonable chance of making this happen before this 50x speedup.
This commit therefore eliminates this deadlock by setting ->trc_reader_nesting to a large negative number before checking the .need_qs and zeroing (or decrementing with respect to its initial value) ->trc_reader_nesting. For its part, the IPI handler's trc_read_check_handler() function adds a check for negative values, deferring evaluation of the task in this case. Taken together, these changes avoid this deadlock scenario.
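A minimal userspace model of the reader-side ordering described above, using C11 atomics; all names are simplified stand-ins for the kernel fields, the memory-ordering subtleties of the real implementation are omitted, and this is a sketch of the idea rather than the kernel code:

#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static _Atomic int nesting;     /* stands in for ->trc_reader_nesting */
static _Atomic int special;     /* stands in for ->trc_reader_special.s */

static void reader_unlock(void)
{
    int n = atomic_load(&nesting) - 1;

    /* Park the counter at INT_MIN first: this is what stops the IPI
     * handler from setting .need_qs behind our back. */
    atomic_store(&nesting, INT_MIN);
    if (!atomic_load(&special) || n) {
        atomic_store(&nesting, n);
        return;                 /* shallow nesting, fast path */
    }
    /* ...slow path clearing the special flags would run here... */
    atomic_store(&nesting, n);
}

static bool ipi_check(void)
{
    /* A negative value means we raced with an unlock: defer. */
    return atomic_load(&nesting) >= 0;
}

int main(void)
{
    atomic_store(&nesting, 1);  /* reader currently inside */
    reader_unlock();
    printf("ipi may evaluate task: %d\n", ipi_check());
    return 0;
}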
Fixes: 276c410448db ("rcu-tasks: Split ->trc_reader_need_end") Cc: Alexei Starovoitov alexei.starovoitov@gmail.com Cc: Daniel Borkmann daniel@iogearbox.net Cc: Jiri Olsa jolsa@redhat.com Cc: bpf@vger.kernel.org Cc: stable@vger.kernel.org # 5.7.x Signed-off-by: Paul E. McKenney paulmck@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- include/linux/rcupdate_trace.h | 4 ++++ kernel/rcu/tasks.h | 6 ++++++ 2 files changed, 10 insertions(+)
--- a/include/linux/rcupdate_trace.h +++ b/include/linux/rcupdate_trace.h @@ -50,6 +50,7 @@ static inline void rcu_read_lock_trace(v struct task_struct *t = current;
WRITE_ONCE(t->trc_reader_nesting, READ_ONCE(t->trc_reader_nesting) + 1); + barrier(); if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB) && t->trc_reader_special.b.need_mb) smp_mb(); // Pairs with update-side barriers @@ -72,6 +73,9 @@ static inline void rcu_read_unlock_trace
rcu_lock_release(&rcu_trace_lock_map); nesting = READ_ONCE(t->trc_reader_nesting) - 1; + barrier(); // Critical section before disabling. + // Disable IPI-based setting of .need_qs. + WRITE_ONCE(t->trc_reader_nesting, INT_MIN); if (likely(!READ_ONCE(t->trc_reader_special.s)) || nesting) { WRITE_ONCE(t->trc_reader_nesting, nesting); return; // We assume shallow reader nesting. --- a/kernel/rcu/tasks.h +++ b/kernel/rcu/tasks.h @@ -821,6 +821,12 @@ static void trc_read_check_handler(void WRITE_ONCE(t->trc_reader_checked, true); goto reset_ipi; } + // If we are racing with an rcu_read_unlock_trace(), try again later. + if (unlikely(t->trc_reader_nesting < 0)) { + if (WARN_ON_ONCE(atomic_dec_and_test(&trc_n_readers_need_end))) + wake_up(&trc_wait); + goto reset_ipi; + } WRITE_ONCE(t->trc_reader_checked, true);
// Get here if the task is in a read-side critical section. Set
From: Paul E. McKenney paulmck@kernel.org
commit 592031cc10858be4adb10f6c0f2608f6f21824aa upstream.
When rcu_tasks_trace_postgp() function detects an RCU Tasks Trace CPU stall, it adds all tasks blocking the current grace period to a list, invoking get_task_struct() on each to prevent them from being freed while on the list. It then traverses that list, printing stall-warning messages for each one that is still blocking the current grace period and removing it from the list. The list removal invokes the matching put_task_struct().
This of course means that in the admittedly unlikely event that some task executes its outermost rcu_read_unlock_trace() in the meantime, it won't be removed from the list and put_task_struct() won't be executing, resulting in a task_struct leak. This commit therefore makes the list removal and put_task_struct() unconditional, stopping the leak.
Note further that this bug can occur only after an RCU Tasks Trace CPU stall warning, which by default only happens after a grace period has extended for ten minutes (yes, not a typo, minutes).
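A toy illustration of the pattern the fix restores, with invented structure and function names: the reference drop has to happen for every node taken off the holdout list, not only for the nodes that still satisfy the stall condition.

#include <stdio.h>
#include <stdlib.h>

struct holdout {
    struct holdout *next;
    int refs;
    int still_blocking;
};

static void put_ref(struct holdout *h)
{
    if (--h->refs == 0)
        free(h);
}

static void drain(struct holdout **list)
{
    struct holdout *h, *next;

    for (h = *list; h; h = next) {
        next = h->next;
        if (h->still_blocking)
            printf("stall: task still blocking the grace period\n");
        put_ref(h);     /* unconditional, mirrors trc_del_holdout() */
    }
    *list = NULL;
}

int main(void)
{
    struct holdout *a = calloc(1, sizeof(*a));
    struct holdout *b = calloc(1, sizeof(*b));

    a->refs = 1; a->still_blocking = 1; a->next = b;
    b->refs = 1; b->still_blocking = 0;
    drain(&a);          /* both references dropped, nothing leaks */
    return 0;
}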
Fixes: 4593e772b502 ("rcu-tasks: Add stall warnings for RCU Tasks Trace") Cc: Alexei Starovoitov alexei.starovoitov@gmail.com Cc: Daniel Borkmann daniel@iogearbox.net Cc: Jiri Olsa jolsa@redhat.com Cc: bpf@vger.kernel.org Cc: stable@vger.kernel.org # 5.7.x Signed-off-by: Paul E. McKenney paulmck@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/rcu/tasks.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/kernel/rcu/tasks.h +++ b/kernel/rcu/tasks.h @@ -1082,11 +1082,11 @@ static void rcu_tasks_trace_postgp(struc if (READ_ONCE(t->trc_reader_special.b.need_qs)) trc_add_holdout(t, &holdouts); firstreport = true; - list_for_each_entry_safe(t, g, &holdouts, trc_holdout_list) - if (READ_ONCE(t->trc_reader_special.b.need_qs)) { + list_for_each_entry_safe(t, g, &holdouts, trc_holdout_list) { + if (READ_ONCE(t->trc_reader_special.b.need_qs)) show_stalled_task_trace(t, &firstreport); - trc_del_holdout(t); - } + trc_del_holdout(t); // Release task_struct reference. + } if (firstreport) pr_err("INFO: rcu_tasks_trace detected stalls? (Counter/taskslist mismatch?)\n"); show_stalled_ipi_trace();
From: Paul E. McKenney paulmck@kernel.org
commit f747c7e15d7bc71a967a94ceda686cf2460b69e8 upstream.
The rcu_tasks_trace_postgp() function uses for_each_process_thread() to scan the task list without the benefit of RCU read-side protection, which can result in use-after-free errors on task_struct structures. This error was missed because the TRACE01 rcutorture scenario enables lockdep, but also builds with CONFIG_PREEMPT_NONE=y. In this situation, preemption is disabled everywhere, so lockdep thinks everywhere can be a legitimate RCU reader. This commit therefore adds the needed rcu_read_lock() and rcu_read_unlock().
Note that this bug can occur only after an RCU Tasks Trace CPU stall warning, which by default only happens after a grace period has extended for ten minutes (yes, not a typo, minutes).
Fixes: 4593e772b502 ("rcu-tasks: Add stall warnings for RCU Tasks Trace") Cc: Alexei Starovoitov alexei.starovoitov@gmail.com Cc: Daniel Borkmann daniel@iogearbox.net Cc: Jiri Olsa jolsa@redhat.com Cc: bpf@vger.kernel.org Cc: stable@vger.kernel.org # 5.7.x Signed-off-by: Paul E. McKenney paulmck@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/rcu/tasks.h | 2 ++ 1 file changed, 2 insertions(+)
--- a/kernel/rcu/tasks.h +++ b/kernel/rcu/tasks.h @@ -1078,9 +1078,11 @@ static void rcu_tasks_trace_postgp(struc if (ret) break; // Count reached zero. // Stall warning time, so make a list of the offenders. + rcu_read_lock(); for_each_process_thread(g, t) if (READ_ONCE(t->trc_reader_special.b.need_qs)) trc_add_holdout(t, &holdouts); + rcu_read_unlock(); firstreport = true; list_for_each_entry_safe(t, g, &holdouts, trc_holdout_list) { if (READ_ONCE(t->trc_reader_special.b.need_qs))
From: Maciej W. Rozycki macro@linux-mips.org
commit cf3af0a4d3b62ab48e0b90180ea161d0f5d4953f upstream.
Fix a crash on DEC platforms starting with:
VFS: Mounted root (nfs filesystem) on device 0:11. Freeing unused PROM memory: 124k freed BUG: Bad page state in process swapper pfn:00001 page:(ptrval) refcount:0 mapcount:-128 mapping:00000000 index:0x1 pfn:0x1 flags: 0x0() raw: 00000000 00000100 00000122 00000000 00000001 00000000 ffffff7f 00000000 page dumped because: nonzero mapcount Modules linked in: CPU: 0 PID: 1 Comm: swapper Not tainted 5.9.0-00858-g865c50e1d279 #1 Stack : 8065dc48 0000000b 8065d2b8 9bc27dcc 80645bfc 9bc259a4 806a1b97 80703124 80710000 8064a900 00000001 80099574 806b116c 1000ec00 9bc27d88 806a6f30 00000000 00000000 80645bfc 00000000 31232039 80706ba4 2e392e35 8039f348 2d383538 00000070 0000000a 35363867 00000000 806c2830 80710000 806b0000 80710000 8064a900 00000001 81000000 00000000 00000000 8035af2c 80700000 ... Call Trace: [<8004bc5c>] show_stack+0x34/0x104 [<8015675c>] bad_page+0xfc/0x128 [<80157714>] free_pcppages_bulk+0x1f4/0x5dc [<801591cc>] free_unref_page+0xc0/0x130 [<8015cb04>] free_reserved_area+0x144/0x1d8 [<805abd78>] kernel_init+0x20/0x100 [<80046070>] ret_from_kernel_thread+0x14/0x1c Disabling lock debugging due to kernel taint
caused by an attempt to free bootmem space that, as of commit b93ddc4f9156 ("mips: Reserve memory for the kernel image resources"), is no longer reserved, owing to the removal of the generic MIPS arch code that used to reserve all the memory from the beginning of RAM up to the kernel load address.
This memory does need to be reserved on DEC platforms, however, as it is used by the REX firmware as a working area, as per the TURBOchannel firmware specification[1]:
Table 2-2 REX Memory Regions
-------------------------------------------------------------------------
            Starting      Ending
Region      Address       Address       Use
-------------------------------------------------------------------------
   0        0xa0000000    0xa000ffff    Restart block, exception vectors,
                                        REX stack and bss
   1        0xa0010000    0xa0017fff    Keyboard or tty drivers
   2        0xa0018000    0xa001f3ff    1) CRT driver
   3        0xa0020000    0xa002ffff    boot, cnfg, init and t objects
   4        0xa0020000    0xa002ffff    64KB scratch space
-------------------------------------------------------------------------
1) Note that the last 3 Kbytes of region 2 are reserved for backward
   compatibility with previous system software.
-------------------------------------------------------------------------
(this table uses KSEG1 unmapped virtual addresses, which in the MIPS architecture are offset from physical addresses by a fixed value of 0xa0000000, so the regions referred to do correspond to the beginning of the physical address space) and we call into the firmware on several occasions throughout the bootstrap process. It is believed that the pre-REX firmware used with non-TURBOchannel DEC platforms has the same requirements, as hinted at by note 1) cited above.
Recreate the discarded reservation, then, in the DEC platform code, removing the crash.
[1] "TURBOchannel Firmware Specification", On-line version, EK-TCAAD-FS-004, Digital Equipment Corporation, January 1993, Chapter 2 "System Module Firmware", p. 2-5
Signed-off-by: Maciej W. Rozycki macro@linux-mips.org Fixes: b93ddc4f9156 ("mips: Reserve memory for the kernel image resources") Cc: stable@vger.kernel.org # v5.2+ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de
--- arch/mips/dec/setup.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-)
--- a/arch/mips/dec/setup.c +++ b/arch/mips/dec/setup.c @@ -6,7 +6,7 @@ * for more details. * * Copyright (C) 1998 Harald Koerfgen - * Copyright (C) 2000, 2001, 2002, 2003, 2005 Maciej W. Rozycki + * Copyright (C) 2000, 2001, 2002, 2003, 2005, 2020 Maciej W. Rozycki */ #include <linux/console.h> #include <linux/export.h> @@ -15,6 +15,7 @@ #include <linux/ioport.h> #include <linux/irq.h> #include <linux/irqnr.h> +#include <linux/memblock.h> #include <linux/param.h> #include <linux/percpu-defs.h> #include <linux/sched.h> @@ -22,6 +23,7 @@ #include <linux/types.h> #include <linux/pm.h>
+#include <asm/addrspace.h> #include <asm/bootinfo.h> #include <asm/cpu.h> #include <asm/cpu-features.h> @@ -29,7 +31,9 @@ #include <asm/irq.h> #include <asm/irq_cpu.h> #include <asm/mipsregs.h> +#include <asm/page.h> #include <asm/reboot.h> +#include <asm/sections.h> #include <asm/time.h> #include <asm/traps.h> #include <asm/wbflush.h> @@ -146,6 +150,9 @@ void __init plat_mem_setup(void)
ioport_resource.start = ~0UL; ioport_resource.end = 0UL; + + /* Stay away from the firmware working memory area for now. */ + memblock_reserve(PHYS_OFFSET, __pa_symbol(&_text) - PHYS_OFFSET); }
/*
From: Paul Cercueil paul@crapouillou.net
commit 7487abbe85afd02c35c283315cefc5e19c28d40f upstream.
Since INGENIC_GENERIC_BOARD was introduced, the JZ4740_QI_LB60 option is no longer the default, so the symbol has to be selected by the defconfig, otherwise the kernel built will be for a generic Ingenic board and won't have the Device Tree blob built-in.
Cc: stable@vger.kernel.org # v5.7 Fixes: 62249209a772 ("MIPS: ingenic: Default to a generic board") Signed-off-by: Paul Cercueil paul@crapouillou.net Signed-off-by: Thomas Bogendoerfer tsbogend@alpha.franken.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/configs/qi_lb60_defconfig | 1 + 1 file changed, 1 insertion(+)
--- a/arch/mips/configs/qi_lb60_defconfig +++ b/arch/mips/configs/qi_lb60_defconfig @@ -8,6 +8,7 @@ CONFIG_EMBEDDED=y # CONFIG_COMPAT_BRK is not set CONFIG_SLAB=y CONFIG_MACH_INGENIC=y +CONFIG_JZ4740_QI_LB60=y CONFIG_HZ_100=y # CONFIG_SECCOMP is not set CONFIG_MODULES=y
From: Sven Schnelle svens@linux.ibm.com
commit b3bd02495cb339124f13135d51940cf48d83e5cb upstream.
The sysfs function might race with stp_work_fn. To prevent that, add the required locking. Another issue is that the sysfs functions are checking the stp_online flag, but this flag just holds the user setting whether STP is enabled. Add a flag to clock_sync_flag whether stp_info holds valid data and use that instead.
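The shape of the fix, sketched in plain userspace C with made-up names (a mutex shared by the updater and the readers, plus a validity flag that replaces the stp_online check), might look like this:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t info_lock = PTHREAD_MUTEX_INITIALIZER;
static bool info_valid;         /* set only when the last query succeeded */
static int cached_stratum;

static void update_info(int new_stratum, bool ok)
{
    pthread_mutex_lock(&info_lock);
    cached_stratum = new_stratum;
    info_valid = ok;            /* cleared when the query failed */
    pthread_mutex_unlock(&info_lock);
}

static int show_stratum(char *buf, size_t len)
{
    int ret = -1;               /* stands in for -ENODATA */

    pthread_mutex_lock(&info_lock);
    if (info_valid)
        ret = snprintf(buf, len, "%d\n", cached_stratum);
    pthread_mutex_unlock(&info_lock);
    return ret;
}

int main(void)
{
    char buf[16];

    update_info(2, true);
    if (show_stratum(buf, sizeof(buf)) > 0)
        fputs(buf, stdout);
    return 0;
}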
Cc: stable@vger.kernel.org Signed-off-by: Sven Schnelle svens@linux.ibm.com Reviewed-by: Alexander Egorenkov egorenar@linux.ibm.com Signed-off-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/s390/kernel/time.c | 118 ++++++++++++++++++++++++++++++++++-------------- 1 file changed, 85 insertions(+), 33 deletions(-)
--- a/arch/s390/kernel/time.c +++ b/arch/s390/kernel/time.c @@ -345,8 +345,9 @@ static DEFINE_PER_CPU(atomic_t, clock_sy static DEFINE_MUTEX(clock_sync_mutex); static unsigned long clock_sync_flags;
-#define CLOCK_SYNC_HAS_STP 0 -#define CLOCK_SYNC_STP 1 +#define CLOCK_SYNC_HAS_STP 0 +#define CLOCK_SYNC_STP 1 +#define CLOCK_SYNC_STPINFO_VALID 2
/* * The get_clock function for the physical clock. It will get the current @@ -583,6 +584,22 @@ void stp_queue_work(void) queue_work(time_sync_wq, &stp_work); }
+static int __store_stpinfo(void) +{ + int rc = chsc_sstpi(stp_page, &stp_info, sizeof(struct stp_sstpi)); + + if (rc) + clear_bit(CLOCK_SYNC_STPINFO_VALID, &clock_sync_flags); + else + set_bit(CLOCK_SYNC_STPINFO_VALID, &clock_sync_flags); + return rc; +} + +static int stpinfo_valid(void) +{ + return stp_online && test_bit(CLOCK_SYNC_STPINFO_VALID, &clock_sync_flags); +} + static int stp_sync_clock(void *data) { struct clock_sync_data *sync = data; @@ -604,8 +621,7 @@ static int stp_sync_clock(void *data) if (rc == 0) { sync->clock_delta = clock_delta; clock_sync_global(clock_delta); - rc = chsc_sstpi(stp_page, &stp_info, - sizeof(struct stp_sstpi)); + rc = __store_stpinfo(); if (rc == 0 && stp_info.tmd != 2) rc = -EAGAIN; } @@ -650,7 +666,7 @@ static void stp_work_fn(struct work_stru if (rc) goto out_unlock;
- rc = chsc_sstpi(stp_page, &stp_info, sizeof(struct stp_sstpi)); + rc = __store_stpinfo(); if (rc || stp_info.c == 0) goto out_unlock;
@@ -687,10 +703,14 @@ static ssize_t ctn_id_show(struct device struct device_attribute *attr, char *buf) { - if (!stp_online) - return -ENODATA; - return sprintf(buf, "%016llx\n", - *(unsigned long long *) stp_info.ctnid); + ssize_t ret = -ENODATA; + + mutex_lock(&stp_work_mutex); + if (stpinfo_valid()) + ret = sprintf(buf, "%016llx\n", + *(unsigned long long *) stp_info.ctnid); + mutex_unlock(&stp_work_mutex); + return ret; }
static DEVICE_ATTR_RO(ctn_id); @@ -699,9 +719,13 @@ static ssize_t ctn_type_show(struct devi struct device_attribute *attr, char *buf) { - if (!stp_online) - return -ENODATA; - return sprintf(buf, "%i\n", stp_info.ctn); + ssize_t ret = -ENODATA; + + mutex_lock(&stp_work_mutex); + if (stpinfo_valid()) + ret = sprintf(buf, "%i\n", stp_info.ctn); + mutex_unlock(&stp_work_mutex); + return ret; }
static DEVICE_ATTR_RO(ctn_type); @@ -710,9 +734,13 @@ static ssize_t dst_offset_show(struct de struct device_attribute *attr, char *buf) { - if (!stp_online || !(stp_info.vbits & 0x2000)) - return -ENODATA; - return sprintf(buf, "%i\n", (int)(s16) stp_info.dsto); + ssize_t ret = -ENODATA; + + mutex_lock(&stp_work_mutex); + if (stpinfo_valid() && (stp_info.vbits & 0x2000)) + ret = sprintf(buf, "%i\n", (int)(s16) stp_info.dsto); + mutex_unlock(&stp_work_mutex); + return ret; }
static DEVICE_ATTR_RO(dst_offset); @@ -721,9 +749,13 @@ static ssize_t leap_seconds_show(struct struct device_attribute *attr, char *buf) { - if (!stp_online || !(stp_info.vbits & 0x8000)) - return -ENODATA; - return sprintf(buf, "%i\n", (int)(s16) stp_info.leaps); + ssize_t ret = -ENODATA; + + mutex_lock(&stp_work_mutex); + if (stpinfo_valid() && (stp_info.vbits & 0x8000)) + ret = sprintf(buf, "%i\n", (int)(s16) stp_info.leaps); + mutex_unlock(&stp_work_mutex); + return ret; }
static DEVICE_ATTR_RO(leap_seconds); @@ -732,9 +764,13 @@ static ssize_t stratum_show(struct devic struct device_attribute *attr, char *buf) { - if (!stp_online) - return -ENODATA; - return sprintf(buf, "%i\n", (int)(s16) stp_info.stratum); + ssize_t ret = -ENODATA; + + mutex_lock(&stp_work_mutex); + if (stpinfo_valid()) + ret = sprintf(buf, "%i\n", (int)(s16) stp_info.stratum); + mutex_unlock(&stp_work_mutex); + return ret; }
static DEVICE_ATTR_RO(stratum); @@ -743,9 +779,13 @@ static ssize_t time_offset_show(struct d struct device_attribute *attr, char *buf) { - if (!stp_online || !(stp_info.vbits & 0x0800)) - return -ENODATA; - return sprintf(buf, "%i\n", (int) stp_info.tto); + ssize_t ret = -ENODATA; + + mutex_lock(&stp_work_mutex); + if (stpinfo_valid() && (stp_info.vbits & 0x0800)) + ret = sprintf(buf, "%i\n", (int) stp_info.tto); + mutex_unlock(&stp_work_mutex); + return ret; }
static DEVICE_ATTR_RO(time_offset); @@ -754,9 +794,13 @@ static ssize_t time_zone_offset_show(str struct device_attribute *attr, char *buf) { - if (!stp_online || !(stp_info.vbits & 0x4000)) - return -ENODATA; - return sprintf(buf, "%i\n", (int)(s16) stp_info.tzo); + ssize_t ret = -ENODATA; + + mutex_lock(&stp_work_mutex); + if (stpinfo_valid() && (stp_info.vbits & 0x4000)) + ret = sprintf(buf, "%i\n", (int)(s16) stp_info.tzo); + mutex_unlock(&stp_work_mutex); + return ret; }
static DEVICE_ATTR_RO(time_zone_offset); @@ -765,9 +809,13 @@ static ssize_t timing_mode_show(struct d struct device_attribute *attr, char *buf) { - if (!stp_online) - return -ENODATA; - return sprintf(buf, "%i\n", stp_info.tmd); + ssize_t ret = -ENODATA; + + mutex_lock(&stp_work_mutex); + if (stpinfo_valid()) + ret = sprintf(buf, "%i\n", stp_info.tmd); + mutex_unlock(&stp_work_mutex); + return ret; }
static DEVICE_ATTR_RO(timing_mode); @@ -776,9 +824,13 @@ static ssize_t timing_state_show(struct struct device_attribute *attr, char *buf) { - if (!stp_online) - return -ENODATA; - return sprintf(buf, "%i\n", stp_info.tst); + ssize_t ret = -ENODATA; + + mutex_lock(&stp_work_mutex); + if (stpinfo_valid()) + ret = sprintf(buf, "%i\n", stp_info.tst); + mutex_unlock(&stp_work_mutex); + return ret; }
static DEVICE_ATTR_RO(timing_state);
From: Andrew Donnellan ajd@linux.ibm.com
commit bd59380c5ba4147dcbaad3e582b55ccfd120b764 upstream.
A number of userspace utilities depend on making calls to RTAS to retrieve information and update various things.
The existing API through which we expose RTAS to userspace exposes more RTAS functionality than we actually need, through the sys_rtas syscall, which allows root (or anyone with CAP_SYS_ADMIN) to make any RTAS call they want with arbitrary arguments.
Many RTAS calls take the address of a buffer as an argument, and it's up to the caller to specify the physical address of that buffer. We allocate a buffer (the "RMO buffer") in the Real Memory Area that RTAS can access, and then expose the physical address and size of this buffer in /proc/powerpc/rtas/rmo_buffer. Userspace is expected to read this address, poke at the buffer using /dev/mem, and pass an address in the RMO buffer to the RTAS call.
However, there's nothing stopping the caller from specifying whatever address they want in the RTAS call, and it's easy to construct a series of RTAS calls that can overwrite arbitrary bytes (even without /dev/mem access).
Additionally, there are some RTAS calls that do potentially dangerous things and for which there are no legitimate userspace use cases.
In the past, this would not have been a particularly big deal as it was assumed that root could modify all system state freely, but with Secure Boot and lockdown we need to care about this.
We can't fundamentally change the ABI at this point, however we can address this by implementing a filter that checks RTAS calls against a list of permitted calls and forces the caller to use addresses within the RMO buffer.
The list is based off the list of calls that are used by the librtas userspace library, and has been tested with a number of existing userspace RTAS utilities. For compatibility with any applications we are not aware of that require other calls, the filter can be turned off at build time.
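As a rough sketch of the address check the filter performs, here is the bounds test in isolation; the buffer base and size are placeholder constants for the sketch, not real firmware values:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RMO_BUF_BASE    0x00001000u     /* placeholder */
#define RMO_BUF_SIZE    0x00010000u     /* placeholder */

/* A (base, end) range is acceptable only if it lies entirely inside
 * the exported RMO buffer. */
static bool in_rmo_buf(uint32_t base, uint32_t end)
{
    return base >= RMO_BUF_BASE &&
           base < (RMO_BUF_BASE + RMO_BUF_SIZE) &&
           base <= end &&
           end >= RMO_BUF_BASE &&
           end < (RMO_BUF_BASE + RMO_BUF_SIZE);
}

int main(void)
{
    uint32_t base = 0x00001800, size = 0x100;

    printf("allowed: %d\n", in_rmo_buf(base, base + size - 1)); /* 1 */
    printf("allowed: %d\n", in_rmo_buf(0x0, 0xfff));            /* 0: blocked */
    return 0;
}

A call whose (base, size) pair falls even partly outside the exported buffer is rejected, which is what forces userspace to go through the documented rmo_buffer interface.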
Cc: stable@vger.kernel.org Reported-by: Daniel Axtens dja@axtens.net Signed-off-by: Andrew Donnellan ajd@linux.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200820044512.7543-1-ajd@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/Kconfig | 13 +++ arch/powerpc/kernel/rtas.c | 153 +++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 166 insertions(+)
--- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -1001,6 +1001,19 @@ config PPC_SECVAR_SYSFS read/write operations on these variables. Say Y if you have secure boot enabled and want to expose variables to userspace.
+config PPC_RTAS_FILTER + bool "Enable filtering of RTAS syscalls" + default y + depends on PPC_RTAS + help + The RTAS syscall API has security issues that could be used to + compromise system integrity. This option enforces restrictions on the + RTAS calls and arguments passed by userspace programs to mitigate + these issues. + + Say Y unless you know what you are doing and the filter is causing + problems for you. + endmenu
config ISA_DMA_API --- a/arch/powerpc/kernel/rtas.c +++ b/arch/powerpc/kernel/rtas.c @@ -992,6 +992,147 @@ struct pseries_errorlog *get_pseries_err return NULL; }
+#ifdef CONFIG_PPC_RTAS_FILTER + +/* + * The sys_rtas syscall, as originally designed, allows root to pass + * arbitrary physical addresses to RTAS calls. A number of RTAS calls + * can be abused to write to arbitrary memory and do other things that + * are potentially harmful to system integrity, and thus should only + * be used inside the kernel and not exposed to userspace. + * + * All known legitimate users of the sys_rtas syscall will only ever + * pass addresses that fall within the RMO buffer, and use a known + * subset of RTAS calls. + * + * Accordingly, we filter RTAS requests to check that the call is + * permitted, and that provided pointers fall within the RMO buffer. + * The rtas_filters list contains an entry for each permitted call, + * with the indexes of the parameters which are expected to contain + * addresses and sizes of buffers allocated inside the RMO buffer. + */ +struct rtas_filter { + const char *name; + int token; + /* Indexes into the args buffer, -1 if not used */ + int buf_idx1; + int size_idx1; + int buf_idx2; + int size_idx2; + + int fixed_size; +}; + +static struct rtas_filter rtas_filters[] __ro_after_init = { + { "ibm,activate-firmware", -1, -1, -1, -1, -1 }, + { "ibm,configure-connector", -1, 0, -1, 1, -1, 4096 }, /* Special cased */ + { "display-character", -1, -1, -1, -1, -1 }, + { "ibm,display-message", -1, 0, -1, -1, -1 }, + { "ibm,errinjct", -1, 2, -1, -1, -1, 1024 }, + { "ibm,close-errinjct", -1, -1, -1, -1, -1 }, + { "ibm,open-errinct", -1, -1, -1, -1, -1 }, + { "ibm,get-config-addr-info2", -1, -1, -1, -1, -1 }, + { "ibm,get-dynamic-sensor-state", -1, 1, -1, -1, -1 }, + { "ibm,get-indices", -1, 2, 3, -1, -1 }, + { "get-power-level", -1, -1, -1, -1, -1 }, + { "get-sensor-state", -1, -1, -1, -1, -1 }, + { "ibm,get-system-parameter", -1, 1, 2, -1, -1 }, + { "get-time-of-day", -1, -1, -1, -1, -1 }, + { "ibm,get-vpd", -1, 0, -1, 1, 2 }, + { "ibm,lpar-perftools", -1, 2, 3, -1, -1 }, + { "ibm,platform-dump", -1, 4, 5, -1, -1 }, + { "ibm,read-slot-reset-state", -1, -1, -1, -1, -1 }, + { "ibm,scan-log-dump", -1, 0, 1, -1, -1 }, + { "ibm,set-dynamic-indicator", -1, 2, -1, -1, -1 }, + { "ibm,set-eeh-option", -1, -1, -1, -1, -1 }, + { "set-indicator", -1, -1, -1, -1, -1 }, + { "set-power-level", -1, -1, -1, -1, -1 }, + { "set-time-for-power-on", -1, -1, -1, -1, -1 }, + { "ibm,set-system-parameter", -1, 1, -1, -1, -1 }, + { "set-time-of-day", -1, -1, -1, -1, -1 }, + { "ibm,suspend-me", -1, -1, -1, -1, -1 }, + { "ibm,update-nodes", -1, 0, -1, -1, -1, 4096 }, + { "ibm,update-properties", -1, 0, -1, -1, -1, 4096 }, + { "ibm,physical-attestation", -1, 0, 1, -1, -1 }, +}; + +static bool in_rmo_buf(u32 base, u32 end) +{ + return base >= rtas_rmo_buf && + base < (rtas_rmo_buf + RTAS_RMOBUF_MAX) && + base <= end && + end >= rtas_rmo_buf && + end < (rtas_rmo_buf + RTAS_RMOBUF_MAX); +} + +static bool block_rtas_call(int token, int nargs, + struct rtas_args *args) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(rtas_filters); i++) { + struct rtas_filter *f = &rtas_filters[i]; + u32 base, size, end; + + if (token != f->token) + continue; + + if (f->buf_idx1 != -1) { + base = be32_to_cpu(args->args[f->buf_idx1]); + if (f->size_idx1 != -1) + size = be32_to_cpu(args->args[f->size_idx1]); + else if (f->fixed_size) + size = f->fixed_size; + else + size = 1; + + end = base + size - 1; + if (!in_rmo_buf(base, end)) + goto err; + } + + if (f->buf_idx2 != -1) { + base = be32_to_cpu(args->args[f->buf_idx2]); + if (f->size_idx2 != -1) + size = be32_to_cpu(args->args[f->size_idx2]); + 
else if (f->fixed_size) + size = f->fixed_size; + else + size = 1; + end = base + size - 1; + + /* + * Special case for ibm,configure-connector where the + * address can be 0 + */ + if (!strcmp(f->name, "ibm,configure-connector") && + base == 0) + return false; + + if (!in_rmo_buf(base, end)) + goto err; + } + + return false; + } + +err: + pr_err_ratelimited("sys_rtas: RTAS call blocked - exploit attempt?\n"); + pr_err_ratelimited("sys_rtas: token=0x%x, nargs=%d (called by %s)\n", + token, nargs, current->comm); + return true; +} + +#else + +static bool block_rtas_call(int token, int nargs, + struct rtas_args *args) +{ + return false; +} + +#endif /* CONFIG_PPC_RTAS_FILTER */ + /* We assume to be passed big endian arguments */ SYSCALL_DEFINE1(rtas, struct rtas_args __user *, uargs) { @@ -1029,6 +1170,9 @@ SYSCALL_DEFINE1(rtas, struct rtas_args _ args.rets = &args.args[nargs]; memset(args.rets, 0, nret * sizeof(rtas_arg_t));
+ if (block_rtas_call(token, nargs, &args)) + return -EINVAL; + /* Need to handle ibm,suspend_me call specially */ if (token == ibm_suspend_me_token) {
@@ -1090,6 +1234,9 @@ void __init rtas_initialize(void) unsigned long rtas_region = RTAS_INSTANTIATE_MAX; u32 base, size, entry; int no_base, no_size, no_entry; +#ifdef CONFIG_PPC_RTAS_FILTER + int i; +#endif
/* Get RTAS dev node and fill up our "rtas" structure with infos * about it. @@ -1129,6 +1276,12 @@ void __init rtas_initialize(void) #ifdef CONFIG_RTAS_ERROR_LOGGING rtas_last_error_token = rtas_token("rtas-last-error"); #endif + +#ifdef CONFIG_PPC_RTAS_FILTER + for (i = 0; i < ARRAY_SIZE(rtas_filters); i++) { + rtas_filters[i].token = rtas_token(rtas_filters[i].name); + } +#endif }
int __init early_init_dt_scan_rtas(unsigned long node,
From: Joel Stanley joel@jms.id.au
commit a02f6d42357acf6e5de6ffc728e6e77faf3ad217 upstream.
It's not done anything for a long time. Save the percpu variable, and emit a warning to remind users to not expect it to do anything.
This uses pr_warn_once instead of pr_warn_ratelimit as testing 'ppc64_cpu --smt=off' on a 24 core / 4 SMT system showed the warning to be noisy, as the online/offline loop is slow.
Fixes: 3fa8cad82b94 ("powerpc/pseries/cpuidle: smt-snooze-delay cleanup.") Cc: stable@vger.kernel.org # v3.14 Signed-off-by: Joel Stanley joel@jms.id.au Acked-by: Gautham R. Shenoy ego@linux.vnet.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20200902000012.3440389-1-joel@jms.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/sysfs.c | 42 +++++++++++++++++------------------------- 1 file changed, 17 insertions(+), 25 deletions(-)
--- a/arch/powerpc/kernel/sysfs.c +++ b/arch/powerpc/kernel/sysfs.c @@ -32,29 +32,27 @@
static DEFINE_PER_CPU(struct cpu, cpu_devices);
-/* - * SMT snooze delay stuff, 64-bit only for now - */ - #ifdef CONFIG_PPC64
-/* Time in microseconds we delay before sleeping in the idle loop */ -static DEFINE_PER_CPU(long, smt_snooze_delay) = { 100 }; +/* + * Snooze delay has not been hooked up since 3fa8cad82b94 ("powerpc/pseries/cpuidle: + * smt-snooze-delay cleanup.") and has been broken even longer. As was foretold in + * 2014: + * + * "ppc64_util currently utilises it. Once we fix ppc64_util, propose to clean + * up the kernel code." + * + * powerpc-utils stopped using it as of 1.3.8. At some point in the future this + * code should be removed. + */
static ssize_t store_smt_snooze_delay(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { - struct cpu *cpu = container_of(dev, struct cpu, dev); - ssize_t ret; - long snooze; - - ret = sscanf(buf, "%ld", &snooze); - if (ret != 1) - return -EINVAL; - - per_cpu(smt_snooze_delay, cpu->dev.id) = snooze; + pr_warn_once("%s (%d) stored to unsupported smt_snooze_delay, which has no effect.\n", + current->comm, current->pid); return count; }
@@ -62,9 +60,9 @@ static ssize_t show_smt_snooze_delay(str struct device_attribute *attr, char *buf) { - struct cpu *cpu = container_of(dev, struct cpu, dev); - - return sprintf(buf, "%ld\n", per_cpu(smt_snooze_delay, cpu->dev.id)); + pr_warn_once("%s (%d) read from unsupported smt_snooze_delay\n", + current->comm, current->pid); + return sprintf(buf, "100\n"); }
static DEVICE_ATTR(smt_snooze_delay, 0644, show_smt_snooze_delay, @@ -72,16 +70,10 @@ static DEVICE_ATTR(smt_snooze_delay, 064
static int __init setup_smt_snooze_delay(char *str) { - unsigned int cpu; - long snooze; - if (!cpu_has_feature(CPU_FTR_SMT)) return 1;
- snooze = simple_strtol(str, NULL, 10); - for_each_possible_cpu(cpu) - per_cpu(smt_snooze_delay, cpu) = snooze; - + pr_warn("smt-snooze-delay command line option has no effect\n"); return 1; } __setup("smt-snooze-delay=", setup_smt_snooze_delay);
From: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com
commit 301d2ea6572386245c5d2d2dc85c3b5a737b85ac upstream.
Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block panic") make sure different variables tracking lmb_size are updated to be 64 bit.
This was found by code audit.
Cc: stable@vger.kernel.org Signed-off-by: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20201007114836.282468-3-aneesh.kumar@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/platforms/pseries/hotplug-memory.c | 43 ++++++++++++++++-------- 1 file changed, 29 insertions(+), 14 deletions(-)
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c +++ b/arch/powerpc/platforms/pseries/hotplug-memory.c @@ -277,7 +277,7 @@ static int dlpar_offline_lmb(struct drme return dlpar_change_lmb_state(lmb, false); }
-static int pseries_remove_memblock(unsigned long base, unsigned int memblock_size) +static int pseries_remove_memblock(unsigned long base, unsigned long memblock_size) { unsigned long block_sz, start_pfn; int sections_per_block; @@ -308,10 +308,11 @@ out:
static int pseries_remove_mem_node(struct device_node *np) { - const __be32 *regs; + const __be32 *prop; unsigned long base; - unsigned int lmb_size; + unsigned long lmb_size; int ret = -EINVAL; + int addr_cells, size_cells;
/* * Check to see if we are actually removing memory @@ -322,12 +323,19 @@ static int pseries_remove_mem_node(struc /* * Find the base address and size of the memblock */ - regs = of_get_property(np, "reg", NULL); - if (!regs) + prop = of_get_property(np, "reg", NULL); + if (!prop) return ret;
- base = be64_to_cpu(*(unsigned long *)regs); - lmb_size = be32_to_cpu(regs[3]); + addr_cells = of_n_addr_cells(np); + size_cells = of_n_size_cells(np); + + /* + * "reg" property represents (addr,size) tuple. + */ + base = of_read_number(prop, addr_cells); + prop += addr_cells; + lmb_size = of_read_number(prop, size_cells);
pseries_remove_memblock(base, lmb_size); return 0; @@ -564,7 +572,7 @@ static int dlpar_memory_remove_by_ic(u32
#else static inline int pseries_remove_memblock(unsigned long base, - unsigned int memblock_size) + unsigned long memblock_size) { return -EOPNOTSUPP; } @@ -886,10 +894,11 @@ int dlpar_memory(struct pseries_hp_error
static int pseries_add_mem_node(struct device_node *np) { - const __be32 *regs; + const __be32 *prop; unsigned long base; - unsigned int lmb_size; + unsigned long lmb_size; int ret = -EINVAL; + int addr_cells, size_cells;
/* * Check to see if we are actually adding memory @@ -900,12 +909,18 @@ static int pseries_add_mem_node(struct d /* * Find the base and size of the memblock */ - regs = of_get_property(np, "reg", NULL); - if (!regs) + prop = of_get_property(np, "reg", NULL); + if (!prop) return ret;
- base = be64_to_cpu(*(unsigned long *)regs); - lmb_size = be32_to_cpu(regs[3]); + addr_cells = of_n_addr_cells(np); + size_cells = of_n_size_cells(np); + /* + * "reg" property represents (addr,size) tuple. + */ + base = of_read_number(prop, addr_cells); + prop += addr_cells; + lmb_size = of_read_number(prop, size_cells);
/* * Update memory region to represent the memory add
From: Mahesh Salgaonkar mahesh@linux.ibm.com
commit aea948bb80b478ddc2448f7359d574387521a52d upstream.
Every error log reported by OPAL is exported to userspace through a sysfs interface and notified using kobject_uevent(). The userspace daemon (opal_errd) then reads the error log and acknowledges that it has been saved safely to disk. Once acknowledged, the kernel removes the respective sysfs file entry, causing the respective resources, including the kobject, to be released.
However it's possible the userspace daemon may already be scanning elog entries when a new sysfs elog entry is created by the kernel. The user daemon may read this new entry and ack it even before the kernel can notify userspace about it through the kobject_uevent() call. If that happens then we have a potential race between elog_ack_store->kobject_put() and kobject_uevent() which can lead to a use-after-free of a kernfs object, resulting in a kernel crash. eg:
BUG: Unable to handle kernel data access on read at 0x6b6b6b6b6b6b6bfb Faulting instruction address: 0xc0000000008ff2a0 Oops: Kernel access of bad area, sig: 11 [#1] LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA PowerNV CPU: 27 PID: 805 Comm: irq/29-opal-elo Not tainted 5.9.0-rc2-gcc-8.2.0-00214-g6f56a67bcbb5-dirty #363 ... NIP kobject_uevent_env+0xa0/0x910 LR elog_event+0x1f4/0x2d0 Call Trace: 0x5deadbeef0000122 (unreliable) elog_event+0x1f4/0x2d0 irq_thread_fn+0x4c/0xc0 irq_thread+0x1c0/0x2b0 kthread+0x1c4/0x1d0 ret_from_kernel_thread+0x5c/0x6c
This patch fixes this race by protecting the sysfs file creation/notification by holding a reference count on kobject until we safely send kobject_uevent().
The function create_elog_obj() returns the elog object, which, if used by the caller, would end up in the same use-after-free problem again. However, the return value of create_elog_obj() isn't being used today and there is no need for it either. Hence change the function to return void to make this fix complete.
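A toy refcount model of the idea (names invented, no real kobject API involved): holding an extra reference across the notification step means a concurrent acknowledgement cannot drop the last reference underneath us.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
    atomic_int ref;
};

static void obj_get(struct obj *o) { atomic_fetch_add(&o->ref, 1); }

static void obj_put(struct obj *o)
{
    if (atomic_fetch_sub(&o->ref, 1) == 1) {
        printf("last reference dropped, freeing\n");
        free(o);
    }
}

int main(void)
{
    struct obj *o = malloc(sizeof(*o));

    atomic_init(&o->ref, 1);    /* creation reference */
    obj_get(o);                 /* extra reference held for the bin file */

    /* The sysfs file would be created here; a concurrent "ack" may call
     * obj_put() at any point from now on without freeing o, because we
     * still hold our own reference across the uevent. */
    printf("sending uevent while the object is still pinned\n");

    obj_put(o);                 /* drop our reference */
    obj_put(o);                 /* the ack/bin-file reference */
    return 0;
}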
Fixes: 774fea1a38c6 ("powerpc/powernv: Read OPAL error log and export it through sysfs") Cc: stable@vger.kernel.org # v3.15+ Reported-by: Oliver O'Halloran oohall@gmail.com Signed-off-by: Mahesh Salgaonkar mahesh@linux.ibm.com Signed-off-by: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com Reviewed-by: Oliver O'Halloran oohall@gmail.com Reviewed-by: Vasant Hegde hegdevasant@linux.vnet.ibm.com [mpe: Rework the logic to use a single return, reword comments, add oops] Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20201006122051.190176-1-mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/platforms/powernv/opal-elog.c | 33 ++++++++++++++++++++++------- 1 file changed, 26 insertions(+), 7 deletions(-)
--- a/arch/powerpc/platforms/powernv/opal-elog.c +++ b/arch/powerpc/platforms/powernv/opal-elog.c @@ -179,14 +179,14 @@ static ssize_t raw_attr_read(struct file return count; }
-static struct elog_obj *create_elog_obj(uint64_t id, size_t size, uint64_t type) +static void create_elog_obj(uint64_t id, size_t size, uint64_t type) { struct elog_obj *elog; int rc;
elog = kzalloc(sizeof(*elog), GFP_KERNEL); if (!elog) - return NULL; + return;
elog->kobj.kset = elog_kset;
@@ -219,18 +219,37 @@ static struct elog_obj *create_elog_obj( rc = kobject_add(&elog->kobj, NULL, "0x%llx", id); if (rc) { kobject_put(&elog->kobj); - return NULL; + return; }
+ /* + * As soon as the sysfs file for this elog is created/activated there is + * a chance the opal_errd daemon (or any userspace) might read and + * acknowledge the elog before kobject_uevent() is called. If that + * happens then there is a potential race between + * elog_ack_store->kobject_put() and kobject_uevent() which leads to a + * use-after-free of a kernfs object resulting in a kernel crash. + * + * To avoid that, we need to take a reference on behalf of the bin file, + * so that our reference remains valid while we call kobject_uevent(). + * We then drop our reference before exiting the function, leaving the + * bin file to drop the last reference (if it hasn't already). + */ + + /* Take a reference for the bin file */ + kobject_get(&elog->kobj); rc = sysfs_create_bin_file(&elog->kobj, &elog->raw_attr); - if (rc) { + if (rc == 0) { + kobject_uevent(&elog->kobj, KOBJ_ADD); + } else { + /* Drop the reference taken for the bin file */ kobject_put(&elog->kobj); - return NULL; }
- kobject_uevent(&elog->kobj, KOBJ_ADD); + /* Drop our reference */ + kobject_put(&elog->kobj);
- return elog; + return; }
static irqreturn_t elog_event(int irq, void *data)
From: Christophe Leroy christophe.leroy@csgroup.eu
commit 2c637d2df4ee4830e9d3eb2bd5412250522ce96e upstream.
low_sleep_handler() has an hardcoded restore of segment registers that doesn't take KUAP and KUEP into account.
Use head_32's load_segment_registers() routine instead.
Fixes: a68c31fc01ef ("powerpc/32s: Implement Kernel Userspace Access Protection") Fixes: 31ed2b13c48d ("powerpc/32s: Implement Kernel Userspace Execution Prevention.") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy christophe.leroy@csgroup.eu Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/21b05f7298c1b18f73e6e5b4cd5005aafa24b6da.159982010... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/head_32.S | 2 +- arch/powerpc/platforms/powermac/sleep.S | 9 +-------- 2 files changed, 2 insertions(+), 9 deletions(-)
--- a/arch/powerpc/kernel/head_32.S +++ b/arch/powerpc/kernel/head_32.S @@ -1002,7 +1002,7 @@ BEGIN_MMU_FTR_SECTION END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) blr
-load_segment_registers: +_GLOBAL(load_segment_registers) li r0, NUM_USER_SEGMENTS /* load up user segment register values */ mtctr r0 /* for context 0 */ li r3, 0 /* Kp = 0, Ks = 0, VSID = 0 */ --- a/arch/powerpc/platforms/powermac/sleep.S +++ b/arch/powerpc/platforms/powermac/sleep.S @@ -294,14 +294,7 @@ grackle_wake_up: * we do any r1 memory access as we are not sure they * are in a sane state above the first 256Mb region */ - li r0,16 /* load up segment register values */ - mtctr r0 /* for context 0 */ - lis r3,0x2000 /* Ku = 1, VSID = 0 */ - li r4,0 -3: mtsrin r3,r4 - addi r3,r3,0x111 /* increment VSID */ - addis r4,r4,0x1000 /* address of next segment */ - bdnz 3b + bl load_segment_registers sync isync
From: Ganesh Goudar ganeshgr@linux.ibm.com
commit 8d0e2101274358d9b6b1f27232b40253ca48bab5 upstream.
Use of nmi_enter/exit in the real mode handler causes the kernel to panic and reboot on injecting an SLB multihit on a pseries machine running in hash MMU mode, because these calls try to access memory outside the RMO region in the real mode handler, where translation is disabled.
Add a check to avoid using these calls on pseries machines running in hash MMU mode.
Fixes: 116ac378bb3f ("powerpc/64s: machine check interrupt update NMI accounting") Cc: stable@vger.kernel.org # v5.8+ Signed-off-by: Ganesh Goudar ganeshgr@linux.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20201009064005.19777-2-ganeshgr@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/mce.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
--- a/arch/powerpc/kernel/mce.c +++ b/arch/powerpc/kernel/mce.c @@ -591,12 +591,11 @@ EXPORT_SYMBOL_GPL(machine_check_print_ev long notrace machine_check_early(struct pt_regs *regs) { long handled = 0; - bool nested = in_nmi(); u8 ftrace_enabled = this_cpu_get_ftrace_enabled();
this_cpu_set_ftrace_enabled(0); - - if (!nested) + /* Do not use nmi_enter/exit for pseries hpte guest */ + if (radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR)) nmi_enter();
hv_nmi_check_nonrecoverable(regs); @@ -607,7 +606,7 @@ long notrace machine_check_early(struct if (ppc_md.machine_check_early) handled = ppc_md.machine_check_early(regs);
- if (!nested) + if (radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR)) nmi_exit();
this_cpu_set_ftrace_enabled(ftrace_enabled);
From: Michael Neuling mikey@neuling.org
commit 1da4a0272c5469169f78cd76cf175ff984f52f06 upstream.
__get_user_atomic_128_aligned() stores to kaddr using stvx which is a VMX store instruction, hence kaddr must be 16 byte aligned otherwise the store won't occur as expected.
Unfortunately when we call __get_user_atomic_128_aligned() in p9_hmi_special_emu(), the buffer we pass as kaddr (ie. vbuf) isn't guaranteed to be 16B aligned. This means that the write to vbuf in __get_user_atomic_128_aligned() has the bottom bits of the address truncated. This results in other local variables being overwritten. Also vbuf will not contain the correct data which results in the userspace emulation being wrong and hence undetected user data corruption.
In the past we've been mostly lucky as vbuf has ended up aligned but this is fragile and isn't always true. CONFIG_STACKPROTECTOR in particular can change the stack arrangement enough that our luck runs out.
This issue only occurs on POWER9 Nimbus <= DD2.1 bare metal.
The fix is to align vbuf to a 16 byte boundary.
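A quick userspace illustration of why the attribute matters: without it, a 16-byte stack array has no guaranteed 16-byte alignment, which is exactly what the VMX store needs.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t plain[16];
    uint8_t aligned[16] __attribute__((aligned(16)));

    /* "plain" may land on any boundary; "aligned" is always 0 mod 16. */
    printf("plain   %% 16 = %lu\n", (unsigned long)((uintptr_t)plain % 16));
    printf("aligned %% 16 = %lu\n", (unsigned long)((uintptr_t)aligned % 16));
    return 0;
}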
Fixes: 5080332c2c89 ("powerpc/64s: Add workaround for P9 vector CI load issue") Cc: stable@vger.kernel.org # v4.15+ Signed-off-by: Michael Neuling mikey@neuling.org Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/20201013043741.743413-1-mikey@neuling.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/traps.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/powerpc/kernel/traps.c +++ b/arch/powerpc/kernel/traps.c @@ -889,7 +889,7 @@ static void p9_hmi_special_emu(struct pt { unsigned int ra, rb, t, i, sel, instr, rc; const void __user *addr; - u8 vbuf[16], *vdst; + u8 vbuf[16] __aligned(16), *vdst; unsigned long ea, msr, msr_mask; bool swap;
From: Christophe Leroy christophe.leroy@csgroup.eu
commit c118c7303ad528be8ff2aea8cd1ee15452c763f0 upstream.
We need r1 to be properly set before activating MMU, so reading task_struct->stack must be done with MMU off.
This means we need an additional register to play with MSR bits while r11 now points to the stack. For that, move r10 back to CR (As is already done for hash MMU) and use r10.
We still don't have r1 correct yet when we activate MMU. It is done in following patch.
Fixes: 028474876f47 ("powerpc/32: prepare for CONFIG_VMAP_STACK") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy christophe.leroy@csgroup.eu Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/a027d447022a006c9c4958ac734128e577a3c5c1.159948610... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/head_32.S | 6 ------ arch/powerpc/kernel/head_32.h | 31 ++++++------------------------- 2 files changed, 6 insertions(+), 31 deletions(-)
--- a/arch/powerpc/kernel/head_32.S +++ b/arch/powerpc/kernel/head_32.S @@ -274,14 +274,8 @@ __secondary_hold_acknowledge: DO_KVM 0x200 MachineCheck: EXCEPTION_PROLOG_0 -#ifdef CONFIG_VMAP_STACK - li r11, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */ - mtmsr r11 - isync -#endif #ifdef CONFIG_PPC_CHRP mfspr r11, SPRN_SPRG_THREAD - tovirt_vmstack r11, r11 lwz r11, RTAS_SP(r11) cmpwi cr1, r11, 0 bne cr1, 7f --- a/arch/powerpc/kernel/head_32.h +++ b/arch/powerpc/kernel/head_32.h @@ -39,24 +39,13 @@ .endm
.macro EXCEPTION_PROLOG_1 for_rtas=0 -#ifdef CONFIG_VMAP_STACK - .ifeq \for_rtas - li r11, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */ - mtmsr r11 - isync - .endif subi r11, r1, INT_FRAME_SIZE /* use r1 if kernel */ -#else - tophys(r11,r1) /* use tophys(r1) if kernel */ - subi r11, r11, INT_FRAME_SIZE /* alloc exc. frame */ -#endif beq 1f mfspr r11,SPRN_SPRG_THREAD - tovirt_vmstack r11, r11 lwz r11,TASK_STACK-THREAD(r11) addi r11, r11, THREAD_SIZE - INT_FRAME_SIZE - tophys_novmstack r11, r11 1: + tophys_novmstack r11, r11 #ifdef CONFIG_VMAP_STACK mtcrf 0x7f, r11 bt 32 - THREAD_ALIGN_SHIFT, stack_overflow @@ -64,12 +53,11 @@ .endm
.macro EXCEPTION_PROLOG_2 handle_dar_dsisr=0 -#if defined(CONFIG_VMAP_STACK) && defined(CONFIG_PPC_BOOK3S) -BEGIN_MMU_FTR_SECTION +#ifdef CONFIG_VMAP_STACK mtcr r10 -FTR_SECTION_ELSE - stw r10, _CCR(r11) -ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE) + li r10, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */ + mtmsr r10 + isync #else stw r10,_CCR(r11) /* save registers */ #endif @@ -77,11 +65,9 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HP stw r12,GPR12(r11) stw r9,GPR9(r11) stw r10,GPR10(r11) -#if defined(CONFIG_VMAP_STACK) && defined(CONFIG_PPC_BOOK3S) -BEGIN_MMU_FTR_SECTION +#ifdef CONFIG_VMAP_STACK mfcr r10 stw r10, _CCR(r11) -END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE) #endif mfspr r12,SPRN_SPRG_SCRATCH1 stw r12,GPR11(r11) @@ -97,11 +83,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_T stw r10, _DSISR(r11) .endif lwz r9, SRR1(r12) -#if defined(CONFIG_VMAP_STACK) && defined(CONFIG_PPC_BOOK3S) -BEGIN_MMU_FTR_SECTION andi. r10, r9, MSR_PR -END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE) -#endif lwz r12, SRR0(r12) #else mfspr r12,SPRN_SRR0 @@ -328,7 +310,6 @@ label: #ifdef CONFIG_VMAP_STACK #ifdef CONFIG_SMP mfspr r11, SPRN_SPRG_THREAD - tovirt(r11, r11) lwz r11, TASK_CPU - THREAD(r11) slwi r11, r11, 3 addis r11, r11, emergency_ctx@ha
From: Christophe Leroy christophe.leroy@csgroup.eu
commit da7bb43ab9da39bcfed0d146ce94e1f0cbae4ca0 upstream.
We need r1 to be properly set before activating MMU, otherwise any new exception taken while saving registers into the stack in exception prologs will use the user stack, which is wrong and will even lockup or crash when KUAP is selected.
Do that by switching the meaning of r11 and r1 until we have saved r1 to the stack: copy r1 into r11 and set up the new stack pointer in r1. To avoid complicating and impacting all generic and specific prolog code (and more), copy r1 back into r11 once r11 has been saved onto the stack.
We could get rid of copying r1 back and forth at the cost of rewriting everything to use r1 instead of r11 all the way when CONFIG_VMAP_STACK is set, but the effort is probably not worth it.
Fixes: 028474876f47 ("powerpc/32: prepare for CONFIG_VMAP_STACK") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy christophe.leroy@csgroup.eu Signed-off-by: Michael Ellerman mpe@ellerman.id.au Link: https://lore.kernel.org/r/8f85e8752ac5af602db7237ef53d634f4f3d3892.159948610... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/head_32.h | 43 ++++++++++++++++++++++++++++-------------- 1 file changed, 29 insertions(+), 14 deletions(-)
--- a/arch/powerpc/kernel/head_32.h +++ b/arch/powerpc/kernel/head_32.h @@ -39,15 +39,24 @@ .endm
.macro EXCEPTION_PROLOG_1 for_rtas=0 +#ifdef CONFIG_VMAP_STACK + mr r11, r1 + subi r1, r1, INT_FRAME_SIZE /* use r1 if kernel */ + beq 1f + mfspr r1,SPRN_SPRG_THREAD + lwz r1,TASK_STACK-THREAD(r1) + addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE +#else subi r11, r1, INT_FRAME_SIZE /* use r1 if kernel */ beq 1f mfspr r11,SPRN_SPRG_THREAD lwz r11,TASK_STACK-THREAD(r11) addi r11, r11, THREAD_SIZE - INT_FRAME_SIZE +#endif 1: tophys_novmstack r11, r11 #ifdef CONFIG_VMAP_STACK - mtcrf 0x7f, r11 + mtcrf 0x7f, r1 bt 32 - THREAD_ALIGN_SHIFT, stack_overflow #endif .endm @@ -62,6 +71,15 @@ stw r10,_CCR(r11) /* save registers */ #endif mfspr r10, SPRN_SPRG_SCRATCH0 +#ifdef CONFIG_VMAP_STACK + stw r11,GPR1(r1) + stw r11,0(r1) + mr r11, r1 +#else + stw r1,GPR1(r11) + stw r1,0(r11) + tovirt(r1, r11) /* set new kernel sp */ +#endif stw r12,GPR12(r11) stw r9,GPR9(r11) stw r10,GPR10(r11) @@ -89,9 +107,6 @@ mfspr r12,SPRN_SRR0 mfspr r9,SPRN_SRR1 #endif - stw r1,GPR1(r11) - stw r1,0(r11) - tovirt_novmstack r1, r11 /* set new kernel sp */ #ifdef CONFIG_40x rlwinm r9,r9,0,14,12 /* clear MSR_WE (necessary?) */ #else @@ -309,19 +324,19 @@ label: .macro vmap_stack_overflow_exception #ifdef CONFIG_VMAP_STACK #ifdef CONFIG_SMP - mfspr r11, SPRN_SPRG_THREAD - lwz r11, TASK_CPU - THREAD(r11) - slwi r11, r11, 3 - addis r11, r11, emergency_ctx@ha + mfspr r1, SPRN_SPRG_THREAD + lwz r1, TASK_CPU - THREAD(r1) + slwi r1, r1, 3 + addis r1, r1, emergency_ctx@ha #else - lis r11, emergency_ctx@ha + lis r1, emergency_ctx@ha #endif - lwz r11, emergency_ctx@l(r11) - cmpwi cr1, r11, 0 + lwz r1, emergency_ctx@l(r1) + cmpwi cr1, r1, 0 bne cr1, 1f - lis r11, init_thread_union@ha - addi r11, r11, init_thread_union@l -1: addi r11, r11, THREAD_SIZE - INT_FRAME_SIZE + lis r1, init_thread_union@ha + addi r1, r1, init_thread_union@l +1: addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE EXCEPTION_PROLOG_2 SAVE_NVGPRS(r11) addi r3, r1, STACK_FRAME_OVERHEAD
From: Naohiro Aota naohiro.aota@wdc.com
commit 4977d121bc9bc5138d4d48b85469123001859573 upstream.
When the bio's size reaches max_append_sectors, bio_add_hw_page returns 0, and __bio_iov_append_get_pages then returns -EINVAL. This is an expected result of building a bio small enough not to be split in the IO path. However, iov_iter is not advanced in this case, causing the same pages to be added to the bio again and again.
Fix the case by properly advancing the iov_iter for already processed pages.
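The essence of the fix, in a toy iterator model with invented names: when the consumer stops early, advance by the bytes actually consumed (size - left) rather than the full amount fetched.

#include <stdio.h>
#include <stddef.h>

struct iter { size_t pos, len; };

static void iter_advance(struct iter *it, size_t bytes)
{
    it->pos += bytes;
    it->len -= bytes;
}

int main(void)
{
    struct iter it = { .pos = 0, .len = 4096 };
    size_t size = 4096;     /* bytes fetched from the iterator */
    size_t left = 1024;     /* bytes that could not be added to the bio */

    iter_advance(&it, size - left);     /* only what was consumed */
    printf("iterator now at %zu, %zu bytes remaining\n", it.pos, it.len);
    return 0;
}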
Fixes: 0512a75b98f8 ("block: Introduce REQ_OP_ZONE_APPEND") Cc: stable@vger.kernel.org # 5.8+ Reviewed-by: Johannes Thumshirn johannes.thumshirn@wdc.com Signed-off-by: Naohiro Aota naohiro.aota@wdc.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- block/bio.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-)
--- a/block/bio.c +++ b/block/bio.c @@ -1046,6 +1046,7 @@ static int __bio_iov_append_get_pages(st ssize_t size, left; unsigned len, i; size_t offset; + int ret = 0;
if (WARN_ON_ONCE(!max_append_sectors)) return 0; @@ -1068,15 +1069,17 @@ static int __bio_iov_append_get_pages(st
len = min_t(size_t, PAGE_SIZE - offset, left); if (bio_add_hw_page(q, bio, page, len, offset, - max_append_sectors, &same_page) != len) - return -EINVAL; + max_append_sectors, &same_page) != len) { + ret = -EINVAL; + break; + } if (same_page) put_page(page); offset = 0; }
- iov_iter_advance(iter, size); - return 0; + iov_iter_advance(iter, size - left); + return ret; }
/**
From: Jens Axboe axboe@kernel.dk
commit c8b5e2600a2cfa1cdfbecf151afd67aee227381d upstream.
io_poll_double_wake() is called for both request types - both pure poll requests, and internal polls. This means that we should be using the right handler based on the request type. Use the one that the original caller already assigned for the waitqueue handling, that will always match the correct type.
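Conceptually the change is just "dispatch through the installed function pointer instead of hard-coding one handler"; a hypothetical stand-alone sketch (types and names invented, not io_uring's):

#include <stdio.h>

struct wait_entry {
    int (*func)(struct wait_entry *we, int mode);
};

static int poll_task_func(struct wait_entry *we, int mode)
{
    (void)we;
    printf("pure poll handler, mode=%d\n", mode);
    return 0;
}

static int apoll_task_func(struct wait_entry *we, int mode)
{
    (void)we;
    printf("internal poll handler, mode=%d\n", mode);
    return 0;
}

static void double_wake(struct wait_entry *we, int mode)
{
    we->func(we, mode);     /* always matches the request type */
}

int main(void)
{
    struct wait_entry a = { .func = poll_task_func };
    struct wait_entry b = { .func = apoll_task_func };

    double_wake(&a, 0);
    double_wake(&b, 0);
    return 0;
}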
Cc: stable@vger.kernel.org # v5.8+ Reported-by: Pavel Begunkov asml.silence@gmail.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/io_uring.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -4786,8 +4786,10 @@ static int io_poll_double_wake(struct wa /* make sure double remove sees this as being gone */ wait->private = NULL; spin_unlock(&poll->head->lock); - if (!done) - __io_async_wake(req, poll, mask, io_poll_task_func); + if (!done) { + /* use wait func handler, so it matches the rq type */ + poll->wait.func(&poll->wait, mode, sync, key); + } } refcount_dec(&req->refs); return 1;
From: Sibi Sankar sibis@codeaurora.org
commit 1894622636745237f882bfab47925afc48e122e0 upstream.
Fix the discrepancy observed between accepted input and read back value while disabling remoteproc coredump through the coredump debugfs entry.
Fixes: 3afdc59e4390 ("remoteproc: Add coredump debugfs entry") Cc: stable@vger.kernel.org Signed-off-by: Sibi Sankar sibis@codeaurora.org Link: https://lore.kernel.org/r/20200916145100.15872-1-sibis@codeaurora.org Signed-off-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/remoteproc/remoteproc_debugfs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/remoteproc/remoteproc_debugfs.c +++ b/drivers/remoteproc/remoteproc_debugfs.c @@ -94,7 +94,7 @@ static ssize_t rproc_coredump_write(stru goto out; }
- if (!strncmp(buf, "disable", count)) { + if (!strncmp(buf, "disabled", count)) { rproc->dump_conf = RPROC_COREDUMP_DISABLED; } else if (!strncmp(buf, "inline", count)) { rproc->dump_conf = RPROC_COREDUMP_INLINE;
From: Andreas Gruenbacher agruenba@redhat.com
commit 5a61ae1402f15276ee4e003e198aab816958ca69 upstream.
Commit ca399c96e96e changes gfs2_log_flush to not withdraw the filesystem while holding the log flush lock, but it fails to check if the filesystem needs to be withdrawn once the log flush lock has been released. Likewise, commit f05b86db314d depends on gfs2_log_flush to trigger delayed withdraws. Add that check and clean up the code flow somewhat.
In gfs2_put_super, add a check for delayed withdraws that have been missed to prevent these kinds of bugs in the future.
Fixes: ca399c96e96e ("gfs2: flesh out delayed withdraw for gfs2_log_flush") Fixes: f05b86db314d ("gfs2: Prepare to withdraw as soon as an IO error occurs in log write") Cc: stable@vger.kernel.org # v5.7+: 462582b99b607: gfs2: add some much needed cleanup for log flushes that fail Signed-off-by: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/gfs2/log.c | 61 ++++++++++++++++++++++++++++---------------------------- fs/gfs2/super.c | 2 + fs/gfs2/util.h | 10 +++++++++ 3 files changed, 43 insertions(+), 30 deletions(-)
--- a/fs/gfs2/log.c +++ b/fs/gfs2/log.c @@ -954,10 +954,8 @@ void gfs2_log_flush(struct gfs2_sbd *sdp goto out;
/* Log might have been flushed while we waited for the flush lock */ - if (gl && !test_bit(GLF_LFLUSH, &gl->gl_flags)) { - up_write(&sdp->sd_log_flush_lock); - return; - } + if (gl && !test_bit(GLF_LFLUSH, &gl->gl_flags)) + goto out; trace_gfs2_log_flush(sdp, 1, flags);
if (flags & GFS2_LOG_HEAD_FLUSH_SHUTDOWN) @@ -971,25 +969,25 @@ void gfs2_log_flush(struct gfs2_sbd *sdp if (unlikely (state == SFS_FROZEN)) if (gfs2_assert_withdraw_delayed(sdp, !tr->tr_num_buf_new && !tr->tr_num_databuf_new)) - goto out; + goto out_withdraw; }
if (unlikely(state == SFS_FROZEN)) if (gfs2_assert_withdraw_delayed(sdp, !sdp->sd_log_num_revoke)) - goto out; + goto out_withdraw; if (gfs2_assert_withdraw_delayed(sdp, sdp->sd_log_num_revoke == sdp->sd_log_committed_revoke)) - goto out; + goto out_withdraw;
gfs2_ordered_write(sdp); if (gfs2_withdrawn(sdp)) - goto out; + goto out_withdraw; lops_before_commit(sdp, tr); if (gfs2_withdrawn(sdp)) - goto out; + goto out_withdraw; gfs2_log_submit_bio(&sdp->sd_log_bio, REQ_OP_WRITE); if (gfs2_withdrawn(sdp)) - goto out; + goto out_withdraw;
if (sdp->sd_log_head != sdp->sd_log_flush_head) { log_flush_wait(sdp); @@ -1000,7 +998,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp log_write_header(sdp, flags); } if (gfs2_withdrawn(sdp)) - goto out; + goto out_withdraw; lops_after_commit(sdp, tr);
gfs2_log_lock(sdp); @@ -1020,7 +1018,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp if (!sdp->sd_log_idle) { empty_ail1_list(sdp); if (gfs2_withdrawn(sdp)) - goto out; + goto out_withdraw; atomic_dec(&sdp->sd_log_blks_free); /* Adjust for unreserved buffer */ trace_gfs2_log_blocks(sdp, -1); log_write_header(sdp, flags); @@ -1033,27 +1031,30 @@ void gfs2_log_flush(struct gfs2_sbd *sdp atomic_set(&sdp->sd_freeze_state, SFS_FROZEN); }
-out: - if (gfs2_withdrawn(sdp)) { - trans_drain(tr); - /** - * If the tr_list is empty, we're withdrawing during a log - * flush that targets a transaction, but the transaction was - * never queued onto any of the ail lists. Here we add it to - * ail1 just so that ail_drain() will find and free it. - */ - spin_lock(&sdp->sd_ail_lock); - if (tr && list_empty(&tr->tr_list)) - list_add(&tr->tr_list, &sdp->sd_ail1_list); - spin_unlock(&sdp->sd_ail_lock); - ail_drain(sdp); /* frees all transactions */ - tr = NULL; - } - +out_end: trace_gfs2_log_flush(sdp, 0, flags); +out: up_write(&sdp->sd_log_flush_lock); - gfs2_trans_free(sdp, tr); + if (gfs2_withdrawing(sdp)) + gfs2_withdraw(sdp); + return; + +out_withdraw: + trans_drain(tr); + /** + * If the tr_list is empty, we're withdrawing during a log + * flush that targets a transaction, but the transaction was + * never queued onto any of the ail lists. Here we add it to + * ail1 just so that ail_drain() will find and free it. + */ + spin_lock(&sdp->sd_ail_lock); + if (tr && list_empty(&tr->tr_list)) + list_add(&tr->tr_list, &sdp->sd_ail1_list); + spin_unlock(&sdp->sd_ail_lock); + ail_drain(sdp); /* frees all transactions */ + tr = NULL; + goto out_end; }
/** --- a/fs/gfs2/super.c +++ b/fs/gfs2/super.c @@ -702,6 +702,8 @@ restart: if (error) gfs2_io_error(sdp); } + WARN_ON(gfs2_withdrawing(sdp)); + /* At this point, we're through modifying the disk */
/* Release stuff */ --- a/fs/gfs2/util.h +++ b/fs/gfs2/util.h @@ -205,6 +205,16 @@ static inline bool gfs2_withdrawn(struct test_bit(SDF_WITHDRAWING, &sdp->sd_flags); }
+/** + * gfs2_withdrawing - check if a withdraw is pending + * @sdp: the superblock + */ +static inline bool gfs2_withdrawing(struct gfs2_sbd *sdp) +{ + return test_bit(SDF_WITHDRAWING, &sdp->sd_flags) && + !test_bit(SDF_WITHDRAWN, &sdp->sd_flags); +} + #define gfs2_tune_get(sdp, field) \ gfs2_tune_get_i(&(sdp)->sd_tune, &(sdp)->sd_tune.field)
From: Bob Peterson rpeterso@redhat.com
commit 2ffed5290b3bff7562d29fd06621be4705704242 upstream.
Only initialize gl_delete for iopen glocks, but more importantly, only access it for iopen glocks in flush_delete_work: flush_delete_work is called for different types of glocks including rgrp glocks, and those use gl_vm which is in a union with gl_delete. Without this fix, we'll end up clobbering gl_vm, which results in general memory corruption.
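To illustrate the clobbering, here is a stand-alone C sketch with made-up field names (not the actual gfs2 structures): two members of a union share storage, so initializing the delayed-work member on a glock type that uses the other member silently overwrites it.

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for a glock: two unrelated fields share storage. */
struct glock_like {
        union {
                struct { long start, end; } vm;      /* used by rgrp glocks  */
                struct { long timer, pending; } del; /* used by iopen glocks */
        } u;
};

int main(void)
{
        struct glock_like gl;

        gl.u.vm.start = 4096;
        gl.u.vm.end = 8191;

        /* Initializing the "delete work" member on a non-iopen glock
         * overwrites the vm fields that share the same storage. */
        memset(&gl.u.del, 0, sizeof(gl.u.del));

        printf("vm.start=%ld vm.end=%ld\n", gl.u.vm.start, gl.u.vm.end);
        return 0;
}

Built with gcc, this prints vm.start=0 vm.end=0: the vm fields were wiped simply by touching the other union member, which is the kind of corruption the patch avoids by only using gl_delete on iopen glocks.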
Fixes: a0e3cc65fa29 ("gfs2: Turn gl_delete into a delayed work") Cc: stable@vger.kernel.org # v5.8+ Signed-off-by: Bob Peterson rpeterso@redhat.com Signed-off-by: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/gfs2/glock.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-)
--- a/fs/gfs2/glock.c +++ b/fs/gfs2/glock.c @@ -1054,7 +1054,8 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, gl->gl_object = NULL; gl->gl_hold_time = GL_GLOCK_DFT_HOLD; INIT_DELAYED_WORK(&gl->gl_work, glock_work_func); - INIT_DELAYED_WORK(&gl->gl_delete, delete_work_func); + if (gl->gl_name.ln_type == LM_TYPE_IOPEN) + INIT_DELAYED_WORK(&gl->gl_delete, delete_work_func);
mapping = gfs2_glock2aspace(gl); if (mapping) { @@ -1906,9 +1907,11 @@ bool gfs2_delete_work_queued(const struc
static void flush_delete_work(struct gfs2_glock *gl) { - if (cancel_delayed_work(&gl->gl_delete)) { - queue_delayed_work(gfs2_delete_workqueue, - &gl->gl_delete, 0); + if (gl->gl_name.ln_type == LM_TYPE_IOPEN) { + if (cancel_delayed_work(&gl->gl_delete)) { + queue_delayed_work(gfs2_delete_workqueue, + &gl->gl_delete, 0); + } } gfs2_glock_queue_work(gl, 0); }
From: Benjamin Coddington bcodding@redhat.com
commit b4868b44c5628995fdd8ef2e24dda73cef963a75 upstream.
Since commit 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in CLOSE/OPEN_DOWNGRADE") the following livelock may occur if a CLOSE races with the update of the nfs_state:
Process 1 Process 2 Server ========= ========= ======== OPEN file OPEN file Reply OPEN (1) Reply OPEN (2) Update state (1) CLOSE file (1) Reply OLD_STATEID (1) CLOSE file (2) Reply CLOSE (-1) Update state (2) wait for state change OPEN file wake CLOSE file OPEN file wake CLOSE file ... ...
We can avoid this situation by not issuing an immediate retry with a bumped seqid when CLOSE/OPEN_DOWNGRADE receives NFS4ERR_OLD_STATEID. Instead, take the same approach used by OPEN and wait at least 5 seconds for outstanding stateid updates to complete if we can detect that we're out of sequence.
Note that after this change it is still possible (though unlikely) that CLOSE waits a full 5 seconds, bumps the seqid, and retries -- and that attempt races with another OPEN at the same time. In order to avoid this race (which would result in the livelock), update nfs_need_update_open_stateid() to handle the case where: - the state is NFS_OPEN_STATE, and - the stateid doesn't match the current open stateid
Finally, nfs_need_update_open_stateid() is modified to be idempotent and renamed to better suit the purpose of signaling that the stateid passed is the next stateid in sequence.
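For reference, the sequencing test can be exercised stand-alone; the helper below mirrors nfs4_stateid_is_next() from the hunk further down, operating on already byte-swapped seqids (the test values are made up):

#include <stdint.h>
#include <stdio.h>

/* Mirrors the logic of nfs4_stateid_is_next(): seqids wrap from
 * 0xffffffff back to 1, skipping 0. */
static int seqid_is_next(uint32_t cur, uint32_t candidate)
{
        return candidate == cur + 1U || (candidate == 1U && cur == 0xffffffffU);
}

int main(void)
{
        printf("%d\n", seqid_is_next(5, 6));            /* 1: in sequence  */
        printf("%d\n", seqid_is_next(5, 7));            /* 0: out of order */
        printf("%d\n", seqid_is_next(0xffffffffU, 1));  /* 1: wraps past 0 */
        return 0;
}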
Fixes: 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in CLOSE/OPEN_DOWNGRADE") Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Benjamin Coddington bcodding@redhat.com Signed-off-by: Anna Schumaker Anna.Schumaker@Netapp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfs/nfs4_fs.h | 8 +++++ fs/nfs/nfs4proc.c | 81 ++++++++++++++++++++++++++++++----------------------- fs/nfs/nfs4trace.h | 1 3 files changed, 56 insertions(+), 34 deletions(-)
--- a/fs/nfs/nfs4_fs.h +++ b/fs/nfs/nfs4_fs.h @@ -599,6 +599,14 @@ static inline bool nfs4_stateid_is_newer return (s32)(be32_to_cpu(s1->seqid) - be32_to_cpu(s2->seqid)) > 0; }
+static inline bool nfs4_stateid_is_next(const nfs4_stateid *s1, const nfs4_stateid *s2) +{ + u32 seq1 = be32_to_cpu(s1->seqid); + u32 seq2 = be32_to_cpu(s2->seqid); + + return seq2 == seq1 + 1U || (seq2 == 1U && seq1 == 0xffffffffU); +} + static inline bool nfs4_stateid_match_or_older(const nfs4_stateid *dst, const nfs4_stateid *src) { return nfs4_stateid_match_other(dst, src) && --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -1547,19 +1547,6 @@ static void nfs_state_log_update_open_st wake_up_all(&state->waitq); }
-static void nfs_state_log_out_of_order_open_stateid(struct nfs4_state *state, - const nfs4_stateid *stateid) -{ - u32 state_seqid = be32_to_cpu(state->open_stateid.seqid); - u32 stateid_seqid = be32_to_cpu(stateid->seqid); - - if (stateid_seqid == state_seqid + 1U || - (stateid_seqid == 1U && state_seqid == 0xffffffffU)) - nfs_state_log_update_open_stateid(state); - else - set_bit(NFS_STATE_CHANGE_WAIT, &state->flags); -} - static void nfs_test_and_clear_all_open_stateid(struct nfs4_state *state) { struct nfs_client *clp = state->owner->so_server->nfs_client; @@ -1585,21 +1572,19 @@ static void nfs_test_and_clear_all_open_ * i.e. The stateid seqids have to be initialised to 1, and * are then incremented on every state transition. */ -static bool nfs_need_update_open_stateid(struct nfs4_state *state, +static bool nfs_stateid_is_sequential(struct nfs4_state *state, const nfs4_stateid *stateid) { - if (test_bit(NFS_OPEN_STATE, &state->flags) == 0 || - !nfs4_stateid_match_other(stateid, &state->open_stateid)) { + if (test_bit(NFS_OPEN_STATE, &state->flags)) { + /* The common case - we're updating to a new sequence number */ + if (nfs4_stateid_match_other(stateid, &state->open_stateid) && + nfs4_stateid_is_next(&state->open_stateid, stateid)) { + return true; + } + } else { + /* This is the first OPEN in this generation */ if (stateid->seqid == cpu_to_be32(1)) - nfs_state_log_update_open_stateid(state); - else - set_bit(NFS_STATE_CHANGE_WAIT, &state->flags); - return true; - } - - if (nfs4_stateid_is_newer(stateid, &state->open_stateid)) { - nfs_state_log_out_of_order_open_stateid(state, stateid); - return true; + return true; } return false; } @@ -1673,16 +1658,16 @@ static void nfs_set_open_stateid_locked( int status = 0; for (;;) {
- if (!nfs_need_update_open_stateid(state, stateid)) - return; - if (!test_bit(NFS_STATE_CHANGE_WAIT, &state->flags)) + if (nfs_stateid_is_sequential(state, stateid)) break; + if (status) break; /* Rely on seqids for serialisation with NFSv4.0 */ if (!nfs4_has_session(NFS_SERVER(state->inode)->nfs_client)) break;
+ set_bit(NFS_STATE_CHANGE_WAIT, &state->flags); prepare_to_wait(&state->waitq, &wait, TASK_KILLABLE); /* * Ensure we process the state changes in the same order @@ -1693,6 +1678,7 @@ static void nfs_set_open_stateid_locked( spin_unlock(&state->owner->so_lock); rcu_read_unlock(); trace_nfs4_open_stateid_update_wait(state->inode, stateid, 0); + if (!signal_pending(current)) { if (schedule_timeout(5*HZ) == 0) status = -EAGAIN; @@ -3435,7 +3421,8 @@ static bool nfs4_refresh_open_old_statei __be32 seqid_open; u32 dst_seqid; bool ret; - int seq; + int seq, status = -EAGAIN; + DEFINE_WAIT(wait);
for (;;) { ret = false; @@ -3447,15 +3434,41 @@ static bool nfs4_refresh_open_old_statei continue; break; } + + write_seqlock(&state->seqlock); seqid_open = state->open_stateid.seqid; - if (read_seqretry(&state->seqlock, seq)) - continue;
dst_seqid = be32_to_cpu(dst->seqid); - if ((s32)(dst_seqid - be32_to_cpu(seqid_open)) >= 0) - dst->seqid = cpu_to_be32(dst_seqid + 1); - else + + /* Did another OPEN bump the state's seqid? try again: */ + if ((s32)(be32_to_cpu(seqid_open) - dst_seqid) > 0) { dst->seqid = seqid_open; + write_sequnlock(&state->seqlock); + ret = true; + break; + } + + /* server says we're behind but we haven't seen the update yet */ + set_bit(NFS_STATE_CHANGE_WAIT, &state->flags); + prepare_to_wait(&state->waitq, &wait, TASK_KILLABLE); + write_sequnlock(&state->seqlock); + trace_nfs4_close_stateid_update_wait(state->inode, dst, 0); + + if (signal_pending(current)) + status = -EINTR; + else + if (schedule_timeout(5*HZ) != 0) + status = 0; + + finish_wait(&state->waitq, &wait); + + if (!status) + continue; + if (status == -EINTR) + break; + + /* we slept the whole 5 seconds, we must have lost a seqid */ + dst->seqid = cpu_to_be32(dst_seqid + 1); ret = true; break; } --- a/fs/nfs/nfs4trace.h +++ b/fs/nfs/nfs4trace.h @@ -1511,6 +1511,7 @@ DEFINE_NFS4_INODE_STATEID_EVENT(nfs4_set DEFINE_NFS4_INODE_STATEID_EVENT(nfs4_delegreturn); DEFINE_NFS4_INODE_STATEID_EVENT(nfs4_open_stateid_update); DEFINE_NFS4_INODE_STATEID_EVENT(nfs4_open_stateid_update_wait); +DEFINE_NFS4_INODE_STATEID_EVENT(nfs4_close_stateid_update_wait);
DECLARE_EVENT_CLASS(nfs4_getattr_event, TP_PROTO(
From: Olga Kornievskaia kolga@netapp.com
commit 8c39076c276be0b31982e44654e2c2357473258a upstream.
RFC 7862 introduced a new flag that either client or server is allowed to set: EXCHGID4_FLAG_SUPP_FENCE_OPS.
The client needs to update its bitmask to allow for this flag value.
v2: changed minor version argument to unsigned int
Signed-off-by: Olga Kornievskaia kolga@netapp.com CC: stable@vger.kernel.org Signed-off-by: Anna Schumaker Anna.Schumaker@Netapp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfs/nfs4proc.c | 9 ++++++--- include/uapi/linux/nfs4.h | 3 +++ 2 files changed, 9 insertions(+), 3 deletions(-)
--- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -8052,9 +8052,11 @@ int nfs4_proc_secinfo(struct inode *dir, * both PNFS and NON_PNFS flags set, and not having one of NON_PNFS, PNFS, or * DS flags set. */ -static int nfs4_check_cl_exchange_flags(u32 flags) +static int nfs4_check_cl_exchange_flags(u32 flags, u32 version) { - if (flags & ~EXCHGID4_FLAG_MASK_R) + if (version >= 2 && (flags & ~EXCHGID4_2_FLAG_MASK_R)) + goto out_inval; + else if (version < 2 && (flags & ~EXCHGID4_FLAG_MASK_R)) goto out_inval; if ((flags & EXCHGID4_FLAG_USE_PNFS_MDS) && (flags & EXCHGID4_FLAG_USE_NON_PNFS)) @@ -8467,7 +8469,8 @@ static int _nfs4_proc_exchange_id(struct if (status != 0) goto out;
- status = nfs4_check_cl_exchange_flags(resp->flags); + status = nfs4_check_cl_exchange_flags(resp->flags, + clp->cl_mvops->minor_version); if (status != 0) goto out;
--- a/include/uapi/linux/nfs4.h +++ b/include/uapi/linux/nfs4.h @@ -139,6 +139,8 @@
#define EXCHGID4_FLAG_UPD_CONFIRMED_REC_A 0x40000000 #define EXCHGID4_FLAG_CONFIRMED_R 0x80000000 + +#define EXCHGID4_FLAG_SUPP_FENCE_OPS 0x00000004 /* * Since the validity of these bits depends on whether * they're set in the argument or response, have separate @@ -146,6 +148,7 @@ */ #define EXCHGID4_FLAG_MASK_A 0x40070103 #define EXCHGID4_FLAG_MASK_R 0x80070103 +#define EXCHGID4_2_FLAG_MASK_R 0x80070107
#define SEQ4_STATUS_CB_PATH_DOWN 0x00000001 #define SEQ4_STATUS_CB_GSS_CONTEXTS_EXPIRING 0x00000002
From: Chuck Lever chuck.lever@oracle.com
commit 6b3dccd48de8a4c650b01499a0b09d1e2279649e upstream.
There's no protection in nfsd_dispatch() against a NULL .pc_func helper. A malicious NFS client can trigger a crash by invoking the unused/unsupported NFSv2 ROOT or WRITECACHE procedures.
The current NFSD dispatcher does not support returning a void reply to a non-NULL procedure, so the reply to both of these is wrong, for the moment.
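The shape of the problem, reduced to a stand-alone sketch (hypothetical types and names, far simpler than the real struct svc_procedure table): if an unused slot leaves its handler pointer NULL, dispatching to it dereferences NULL, so the fix registers harmless stubs instead.

#include <stdio.h>

/* Hypothetical miniature of an RPC dispatch table: each procedure slot
 * carries a handler pointer; unused slots used to leave it NULL. */
typedef int (*proc_func)(void);

static int proc_null(void)    { return 0; }
static int proc_stub(void)    { return 0; }  /* like nfsd_proc_root/writecache */
static int proc_getattr(void) { return 0; }

struct proc_entry {
        const char *name;
        proc_func func;
};

static const struct proc_entry proc_table[] = {
        { "NULL",    proc_null },
        { "ROOT",    proc_stub },     /* obsolete: answer with a stub, not NULL */
        { "GETATTR", proc_getattr },
};

static int dispatch(unsigned int proc)
{
        if (proc >= sizeof(proc_table) / sizeof(proc_table[0]))
                return -1;
        /* Without a stub (func == NULL) this call would crash. */
        return proc_table[proc].func();
}

int main(void)
{
        printf("ROOT -> %d\n", dispatch(1));
        return 0;
}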
Cc: stable@vger.kernel.org Signed-off-by: Chuck Lever chuck.lever@oracle.com Signed-off-by: J. Bruce Fields bfields@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfsd/nfsproc.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
--- a/fs/nfsd/nfsproc.c +++ b/fs/nfsd/nfsproc.c @@ -118,6 +118,13 @@ done: return nfsd_return_attrs(nfserr, resp); }
+/* Obsolete, replaced by MNTPROC_MNT. */ +static __be32 +nfsd_proc_root(struct svc_rqst *rqstp) +{ + return nfs_ok; +} + /* * Look up a path name component * Note: the dentry in the resp->fh may be negative if the file @@ -203,6 +210,13 @@ nfsd_proc_read(struct svc_rqst *rqstp) return fh_getattr(&resp->fh, &resp->stat); }
+/* Reserved */ +static __be32 +nfsd_proc_writecache(struct svc_rqst *rqstp) +{ + return nfs_ok; +} + /* * Write data to a file * N.B. After this call resp->fh needs an fh_put @@ -617,6 +631,7 @@ static const struct svc_procedure nfsd_p .pc_xdrressize = ST+AT, }, [NFSPROC_ROOT] = { + .pc_func = nfsd_proc_root, .pc_decode = nfssvc_decode_void, .pc_encode = nfssvc_encode_void, .pc_argsize = sizeof(struct nfsd_void), @@ -654,6 +669,7 @@ static const struct svc_procedure nfsd_p .pc_xdrressize = ST+AT+1+NFSSVC_MAXBLKSIZE_V2/4, }, [NFSPROC_WRITECACHE] = { + .pc_func = nfsd_proc_writecache, .pc_decode = nfssvc_decode_void, .pc_encode = nfssvc_encode_void, .pc_argsize = sizeof(struct nfsd_void),
From: Zhihao Cheng chengzhihao1@huawei.com
commit 58f6e78a65f1fcbf732f60a7478ccc99873ff3ba upstream.
Fix some potential memory leaks in error handling branches while iterating dent entries. For example, function dbg_check_dir() forgets to free pdent if it exists.
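The leak pattern, reduced to a stand-alone sketch (hypothetical helper names; the real code iterates UBIFS directory entries): the loop keeps the previously returned entry around, so an early return on error must free that previous entry as well.

#include <stdlib.h>

/* Hypothetical sketch of the "previous entry" iteration pattern used by
 * the UBIFS readdir helpers: each step allocates the next entry and the
 * caller owns the previous one, so every exit path must free it. */
struct entry { int key; };

static struct entry *next_entry(int key)
{
        struct entry *e;

        if (key >= 3)
                return NULL;            /* stands in for an error return */
        e = malloc(sizeof(*e));
        if (e)
                e->key = key;
        return e;
}

int walk(void)
{
        struct entry *ent, *pent = NULL;
        int key = 0;

        while (1) {
                ent = next_entry(key);
                if (!ent) {
                        free(pent);     /* the fix: don't leak the previous entry */
                        return -1;
                }
                free(pent);             /* normal path: previous entry is done */
                pent = ent;
                key = ent->key + 1;
        }
}

int main(void)
{
        return walk() == -1 ? 0 : 1;
}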
Signed-off-by: Zhihao Cheng chengzhihao1@huawei.com Cc: stable@vger.kernel.org Fixes: 1e51764a3c2ac05a2 ("UBIFS: add new flash file system") Signed-off-by: Richard Weinberger richard@nod.at Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ubifs/debug.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/ubifs/debug.c +++ b/fs/ubifs/debug.c @@ -1123,6 +1123,7 @@ int dbg_check_dir(struct ubifs_info *c, err = PTR_ERR(dent); if (err == -ENOENT) break; + kfree(pdent); return err; }
From: Zhihao Cheng chengzhihao1@huawei.com
commit f2aae745b82c842221f4f233051f9ac641790959 upstream.
Fix some potential memory leaks in error handling branches while iterating xattr entries. For example, function ubifs_tnc_remove_ino() forgets to free pxent if it exists. Similar problems also exist in ubifs_purge_xattrs(), ubifs_add_orphan() and ubifs_jnl_write_inode().
Signed-off-by: Zhihao Cheng chengzhihao1@huawei.com Cc: stable@vger.kernel.org Fixes: 1e51764a3c2ac05a2 ("UBIFS: add new flash file system") Signed-off-by: Richard Weinberger richard@nod.at Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ubifs/journal.c | 2 ++ fs/ubifs/orphan.c | 2 ++ fs/ubifs/tnc.c | 3 +++ fs/ubifs/xattr.c | 2 ++ 4 files changed, 9 insertions(+)
--- a/fs/ubifs/journal.c +++ b/fs/ubifs/journal.c @@ -894,6 +894,7 @@ int ubifs_jnl_write_inode(struct ubifs_i if (err == -ENOENT) break;
+ kfree(pxent); goto out_release; }
@@ -906,6 +907,7 @@ int ubifs_jnl_write_inode(struct ubifs_i ubifs_err(c, "dead directory entry '%s', error %d", xent->name, err); ubifs_ro_mode(c, err); + kfree(pxent); kfree(xent); goto out_release; } --- a/fs/ubifs/orphan.c +++ b/fs/ubifs/orphan.c @@ -173,6 +173,7 @@ int ubifs_add_orphan(struct ubifs_info * err = PTR_ERR(xent); if (err == -ENOENT) break; + kfree(pxent); return err; }
@@ -182,6 +183,7 @@ int ubifs_add_orphan(struct ubifs_info *
xattr_orphan = orphan_add(c, xattr_inum, orphan); if (IS_ERR(xattr_orphan)) { + kfree(pxent); kfree(xent); return PTR_ERR(xattr_orphan); } --- a/fs/ubifs/tnc.c +++ b/fs/ubifs/tnc.c @@ -2885,6 +2885,7 @@ int ubifs_tnc_remove_ino(struct ubifs_in err = PTR_ERR(xent); if (err == -ENOENT) break; + kfree(pxent); return err; }
@@ -2898,6 +2899,7 @@ int ubifs_tnc_remove_ino(struct ubifs_in fname_len(&nm) = le16_to_cpu(xent->nlen); err = ubifs_tnc_remove_nm(c, &key1, &nm); if (err) { + kfree(pxent); kfree(xent); return err; } @@ -2906,6 +2908,7 @@ int ubifs_tnc_remove_ino(struct ubifs_in highest_ino_key(c, &key2, xattr_inum); err = ubifs_tnc_remove_range(c, &key1, &key2); if (err) { + kfree(pxent); kfree(xent); return err; } --- a/fs/ubifs/xattr.c +++ b/fs/ubifs/xattr.c @@ -522,6 +522,7 @@ int ubifs_purge_xattrs(struct inode *hos xent->name, err); ubifs_ro_mode(c, err); kfree(pxent); + kfree(xent); return err; }
@@ -531,6 +532,7 @@ int ubifs_purge_xattrs(struct inode *hos err = remove_xattr(c, host, xino, &nm); if (err) { kfree(pxent); + kfree(xent); iput(xino); ubifs_err(c, "cannot remove xattr, error %d", err); return err;
From: Richard Weinberger richard@nod.at
commit 78c7d49f55d8631b67c09f9bfbe8155211a9ea06 upstream.
When removing the last reference of an inode, the size of an auth node is already part of write_len. So we must not call ubifs_add_auth_dirt(). Call it only when needed.
Cc: stable@vger.kernel.org Cc: Sascha Hauer s.hauer@pengutronix.de Cc: Kristof Havasi havasiefr@gmail.com Fixes: 6a98bc4614de ("ubifs: Add authentication nodes to journal") Reported-and-tested-by: Kristof Havasi havasiefr@gmail.com Reviewed-by: Sascha Hauer s.hauer@pengutronix.de Signed-off-by: Richard Weinberger richard@nod.at Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ubifs/journal.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/fs/ubifs/journal.c +++ b/fs/ubifs/journal.c @@ -938,8 +938,6 @@ int ubifs_jnl_write_inode(struct ubifs_i inode->i_ino); release_head(c, BASEHD);
- ubifs_add_auth_dirt(c, lnum); - if (last_reference) { err = ubifs_tnc_remove_ino(c, inode->i_ino); if (err) @@ -949,6 +947,8 @@ int ubifs_jnl_write_inode(struct ubifs_i } else { union ubifs_key key;
+ ubifs_add_auth_dirt(c, lnum); + ino_key_init(c, &key, inode->i_ino); err = ubifs_tnc_add(c, &key, lnum, offs, ilen, hash); }
From: Zhihao Cheng chengzhihao1@huawei.com
commit 47f6d9ce45b03a40c34b668a9884754c58122b39 upstream.
Fix a memory leak after dumping authentication mount options in the error handling branch.
Signed-off-by: Zhihao Cheng chengzhihao1@huawei.com Cc: stable@vger.kernel.org # 4.20+ Fixes: d8a22773a12c6d7 ("ubifs: Enable authentication support") Reviewed-by: Sascha Hauer s.hauer@pengutronix.de Signed-off-by: Richard Weinberger richard@nod.at Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ubifs/super.c | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-)
--- a/fs/ubifs/super.c +++ b/fs/ubifs/super.c @@ -1141,6 +1141,18 @@ static int ubifs_parse_options(struct ub return 0; }
+/* + * ubifs_release_options - release mount parameters which have been dumped. + * @c: UBIFS file-system description object + */ +static void ubifs_release_options(struct ubifs_info *c) +{ + kfree(c->auth_key_name); + c->auth_key_name = NULL; + kfree(c->auth_hash_name); + c->auth_hash_name = NULL; +} + /** * destroy_journal - destroy journal data structures. * @c: UBIFS file-system description object @@ -1650,8 +1662,7 @@ static void ubifs_umount(struct ubifs_in ubifs_lpt_free(c, 0); ubifs_exit_authentication(c);
- kfree(c->auth_key_name); - kfree(c->auth_hash_name); + ubifs_release_options(c); kfree(c->cbuf); kfree(c->rcvrd_mst_node); kfree(c->mst_node); @@ -2219,6 +2230,7 @@ out_umount: out_unlock: mutex_unlock(&c->umount_mutex); out_close: + ubifs_release_options(c); ubi_close_volume(c->ubi); out: return err;
From: Zhihao Cheng chengzhihao1@huawei.com
commit bb674a4d4de1032837fcbf860a63939e66f0b7ad upstream.
There is no need to dump authentication options while remounting, because authentication initialization can only be done once, in the first mount process. Dumping authentication mount options in the remount process may cause a memory leak if UBIFS has already been mounted with old authentication mount options.
Signed-off-by: Zhihao Cheng chengzhihao1@huawei.com Cc: stable@vger.kernel.org # 4.20+ Fixes: d8a22773a12c6d7 ("ubifs: Enable authentication support") Reviewed-by: Sascha Hauer s.hauer@pengutronix.de Signed-off-by: Richard Weinberger richard@nod.at Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ubifs/super.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-)
--- a/fs/ubifs/super.c +++ b/fs/ubifs/super.c @@ -1110,14 +1110,20 @@ static int ubifs_parse_options(struct ub break; } case Opt_auth_key: - c->auth_key_name = kstrdup(args[0].from, GFP_KERNEL); - if (!c->auth_key_name) - return -ENOMEM; + if (!is_remount) { + c->auth_key_name = kstrdup(args[0].from, + GFP_KERNEL); + if (!c->auth_key_name) + return -ENOMEM; + } break; case Opt_auth_hash_name: - c->auth_hash_name = kstrdup(args[0].from, GFP_KERNEL); - if (!c->auth_hash_name) - return -ENOMEM; + if (!is_remount) { + c->auth_hash_name = kstrdup(args[0].from, + GFP_KERNEL); + if (!c->auth_hash_name) + return -ENOMEM; + } break; case Opt_ignore: break;
From: Zhihao Cheng chengzhihao1@huawei.com
commit e2a05cc7f8229e150243cdae40f2af9021d67a4a upstream.
Release the authentication-related resources in some error handling branches in mount_ubifs().
Signed-off-by: Zhihao Cheng chengzhihao1@huawei.com Cc: stable@vger.kernel.org # 4.20+ Fixes: d8a22773a12c6d7 ("ubifs: Enable authentication support") Reviewed-by: Sascha Hauer s.hauer@pengutronix.de Signed-off-by: Richard Weinberger richard@nod.at Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ubifs/super.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-)
--- a/fs/ubifs/super.c +++ b/fs/ubifs/super.c @@ -1331,7 +1331,7 @@ static int mount_ubifs(struct ubifs_info
err = ubifs_read_superblock(c); if (err) - goto out_free; + goto out_auth;
c->probing = 0;
@@ -1343,18 +1343,18 @@ static int mount_ubifs(struct ubifs_info ubifs_err(c, "'compressor \"%s\" is not compiled in", ubifs_compr_name(c, c->default_compr)); err = -ENOTSUPP; - goto out_free; + goto out_auth; }
err = init_constants_sb(c); if (err) - goto out_free; + goto out_auth;
sz = ALIGN(c->max_idx_node_sz, c->min_io_size) * 2; c->cbuf = kmalloc(sz, GFP_NOFS); if (!c->cbuf) { err = -ENOMEM; - goto out_free; + goto out_auth; }
err = alloc_wbufs(c); @@ -1629,6 +1629,8 @@ out_wbufs: free_wbufs(c); out_cbuf: kfree(c->cbuf); +out_auth: + ubifs_exit_authentication(c); out_free: kfree(c->write_reserve_buf); kfree(c->bu.buf);
From: Kim Phillips kim.phillips@amd.com
commit 60d804521ec4cd01217a96f33cd1bb29e295333d upstream.
Later revisions of PPRs that post-date the original Family 17h events submission patch add these events.
Specifically, they were not in this 2017 revision of the F17h PPR:
Processor Programming Reference (PPR) for AMD Family 17h Model 01h, Revision B1 Processors Rev 1.14 - April 15, 2017
But e.g., are included in this 2019 version of the PPR:
Processor Programming Reference (PPR) for AMD Family 17h Model 18h, Revision B1 Processors Rev. 3.14 - Sep 26, 2019
Fixes: 98c07a8f74f8 ("perf vendor events amd: perf PMU events for AMD Family 17h") Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537 Signed-off-by: Kim Phillips kim.phillips@amd.com Reviewed-by: Ian Rogers irogers@google.com Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Andi Kleen ak@linux.intel.com Cc: Borislav Petkov bp@suse.de Cc: Jin Yao yao.jin@linux.intel.com Cc: Jiri Olsa jolsa@redhat.com Cc: John Garry john.garry@huawei.com Cc: Jon Grimm jon.grimm@amd.com Cc: Kan Liang kan.liang@linux.intel.com Cc: Mark Rutland mark.rutland@arm.com Cc: Martin Jambor mjambor@suse.cz Cc: Martin Liška mliska@suse.cz Cc: Michael Petlan mpetlan@redhat.com Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: stable@vger.kernel.org Cc: Stephane Eranian eranian@google.com Cc: Vijay Thakkar vijaythakkar@me.com Cc: William Cohen wcohen@redhat.com Cc: Yunfeng Ye yeyunfeng@huawei.com Link: http://lore.kernel.org/lkml/20200901220944.277505-1-kim.phillips@amd.com Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- tools/perf/pmu-events/arch/x86/amdzen1/cache.json | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+)
--- a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json +++ b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json @@ -250,6 +250,24 @@ "UMask": "0x1" }, { + "EventName": "l2_pf_hit_l2", + "EventCode": "0x70", + "BriefDescription": "L2 prefetch hit in L2.", + "UMask": "0xff" + }, + { + "EventName": "l2_pf_miss_l2_hit_l3", + "EventCode": "0x71", + "BriefDescription": "L2 prefetcher hits in L3. Counts all L2 prefetches accepted by the L2 pipeline which miss the L2 cache and hit the L3.", + "UMask": "0xff" + }, + { + "EventName": "l2_pf_miss_l2_l3", + "EventCode": "0x72", + "BriefDescription": "L2 prefetcher misses in L3. All L2 prefetches accepted by the L2 pipeline which miss the L2 and the L3 caches.", + "UMask": "0xff" + }, + { "EventName": "l3_request_g1.caching_l3_cache_accesses", "EventCode": "0x01", "BriefDescription": "Caching: L3 cache accesses",
From: Jiri Olsa jolsa@kernel.org
commit 6fcd5ddc3b1467b3586972ef785d0d926ae4cdf4 upstream.
Hagen reported broken strings in python3 tracepoint scripts:
make PYTHON=python3 perf record -e sched:sched_switch -a -- sleep 5 perf script --gen-script py perf script -s ./perf-script.py
[..] sched__sched_switch 7 563231.759525792 0 swapper prev_comm=bytearray(b'swapper/7\x00\x00\x00\x00\x00\x00\x00'), prev_pid=0, prev_prio=120, prev_state=, next_comm=bytearray(b'mutex-thread-co\x00'),
The problem is in the is_printable_array function, which does not take the terminating zero byte into account and claims such a string is not printable, so the code will create a byte array instead of a string.
Committer testing:
After this fix:
sched__sched_switch 3 484522.497072626 1158680 kworker/3:0-eve prev_comm=kworker/3:0, prev_pid=1158680, prev_prio=120, prev_state=I, next_comm=swapper/3, next_pid=0, next_prio=120 Sample: {addr=0, cpu=3, datasrc=84410401, datasrc_decode=N/A|SNP N/A|TLB N/A|LCK N/A, ip=18446744071841817196, period=1, phys_addr=0, pid=1158680, tid=1158680, time=484522497072626, transaction=0, values=[(0, 0)], weight=0}
sched__sched_switch 4 484522.497085610 1225814 perf prev_comm=perf, prev_pid=1225814, prev_prio=120, prev_state=, next_comm=migration/4, next_pid=30, next_prio=0 Sample: {addr=0, cpu=4, datasrc=84410401, datasrc_decode=N/A|SNP N/A|TLB N/A|LCK N/A, ip=18446744071841817196, period=1, phys_addr=0, pid=1225814, tid=1225814, time=484522497085610, transaction=0, values=[(0, 0)], weight=0}
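The fixed check can also be exercised outside of perf; the program below is essentially the patched loop lifted into a stand-alone sketch (unsigned char casts added for portable ctype usage, sample buffer made up):

#include <ctype.h>
#include <stdio.h>

/* Stand-alone version of the fixed check: a buffer is "printable" if
 * every byte up to the first NUL is printable or whitespace; trailing
 * NUL padding must not disqualify it. */
static int is_printable_array(const char *p, unsigned int len)
{
        unsigned int i;

        if (!p || !len || p[len - 1] != 0)
                return 0;

        len--;
        for (i = 0; i < len && p[i]; i++) {
                if (!isprint((unsigned char)p[i]) && !isspace((unsigned char)p[i]))
                        return 0;
        }
        return 1;
}

int main(void)
{
        char comm[16] = "swapper/7";    /* rest of the buffer is NUL padding */

        printf("%d\n", is_printable_array(comm, sizeof(comm)));  /* prints 1 */
        return 0;
}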
Fixes: 249de6e07458 ("perf script python: Fix string vs byte array resolving") Signed-off-by: Jiri Olsa jolsa@kernel.org Tested-by: Arnaldo Carvalho de Melo acme@redhat.com Tested-by: Hagen Paul Pfeifer hagen@jauu.net Cc: Alexander Shishkin alexander.shishkin@linux.intel.com Cc: Mark Rutland mark.rutland@arm.com Cc: Michael Petlan mpetlan@redhat.com Cc: Namhyung Kim namhyung@kernel.org Cc: Peter Zijlstra peterz@infradead.org Cc: stable@vger.kernel.org Link: http://lore.kernel.org/lkml/20200928201135.3633850-1-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo acme@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- tools/perf/util/print_binary.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/tools/perf/util/print_binary.c +++ b/tools/perf/util/print_binary.c @@ -50,7 +50,7 @@ int is_printable_array(char *p, unsigned
len--;
- for (i = 0; i < len; i++) { + for (i = 0; i < len && p[i]; i++) { if (!isprint(p[i]) && !isspace(p[i])) return 0; }
From: Vineet Gupta vgupta@synopsys.com
commit 8c42a5c02bec6c7eccf08957be3c6c8fccf9790b upstream.
commit feb92d7d3813456c11dce21 ("ARC: perf: don't bail setup if pct irq missing in device-tree") introduced a silly brown-paper bag bug: the assignment and comparison in an if statement were not bracketed correctly, so the comparison was evaluated before the assignment.
| | if (has_interrupts && (irq = platform_get_irq(pdev, 0) >= 0)) { | ^^^ ^^^^
And given such a chance, the compiler will bite you hard, fully entitled to generating this piece of beauty:
| | # if (has_interrupts && (irq = platform_get_irq(pdev, 0) >= 0)) { | | bl.d @platform_get_irq <-- irq returned in r0 | | setge r2, r0, 0 <-- r2 is bool 1 or 0 if irq >= 0 true/false | brlt.d r0, 0, @.L114 | | st_s r2,[sp] <-- irq saved is bool 1 or 0, not actual return val | st 1,[r3,160] # arc_pmu.18_29->irq <-- drops bool and assumes 1 | | # return __request_percpu_irq(irq, handler, 0, | | bl.d @__request_percpu_irq; | mov_s r0,1 <-- drops even bool and assumes 1 which fails
With the snafu fixed, everything is as expected.
| bl.d @platform_get_irq <-- returns irq in r0 | | mov_s r2,r0 | brlt.d r2, 0, @.L112 | | st_s r0,[sp] <-- irq saved is actual return value above | st r0,[r13,160] #arc_pmu.18_27->irq | | bl.d @__request_percpu_irq <-- r0 unchanged so actual irq returned | add r4,r4,r12 #, tmp363, __ptr
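The precedence issue itself is easy to reproduce outside the kernel; in this stand-alone sketch fake_platform_get_irq() is a made-up stand-in that just returns 42:

#include <stdio.h>

/* >= binds tighter than =, so the unparenthesized form assigns the
 * boolean comparison result instead of the value returned by the call. */
static int fake_platform_get_irq(void)
{
        return 42;      /* pretend the platform IRQ number is 42 */
}

int main(void)
{
        int irq;

        if ((irq = fake_platform_get_irq() >= 0))   /* buggy form */
                printf("buggy: irq = %d\n", irq);   /* prints 1, not 42 */

        if ((irq = fake_platform_get_irq()) >= 0)   /* fixed form */
                printf("fixed: irq = %d\n", irq);   /* prints 42 */

        return 0;
}

The buggy form stores 1 (the result of the comparison); the fixed form stores the returned 42, which is what the driver needs for the later request_percpu_irq() call.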
Cc: stable@vger.kernel.org Signed-off-by: Vineet Gupta vgupta@synopsys.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arc/kernel/perf_event.c | 29 +++++++++++++++++++---------- 1 file changed, 19 insertions(+), 10 deletions(-)
--- a/arch/arc/kernel/perf_event.c +++ b/arch/arc/kernel/perf_event.c @@ -562,7 +562,7 @@ static int arc_pmu_device_probe(struct p { struct arc_reg_pct_build pct_bcr; struct arc_reg_cc_build cc_bcr; - int i, has_interrupts, irq; + int i, has_interrupts, irq = -1; int counter_size; /* in bits */
union cc_name { @@ -637,18 +637,27 @@ static int arc_pmu_device_probe(struct p .attr_groups = arc_pmu->attr_groups, };
- if (has_interrupts && (irq = platform_get_irq(pdev, 0) >= 0)) { + if (has_interrupts) { + irq = platform_get_irq(pdev, 0); + if (irq >= 0) { + int ret; + + arc_pmu->irq = irq; + + /* intc map function ensures irq_set_percpu_devid() called */ + ret = request_percpu_irq(irq, arc_pmu_intr, "ARC perf counters", + this_cpu_ptr(&arc_pmu_cpu)); + + if (!ret) + on_each_cpu(arc_cpu_pmu_irq_init, &irq, 1); + else + irq = -1; + }
- arc_pmu->irq = irq; - - /* intc map function ensures irq_set_percpu_devid() called */ - request_percpu_irq(irq, arc_pmu_intr, "ARC perf counters", - this_cpu_ptr(&arc_pmu_cpu)); + }
- on_each_cpu(arc_cpu_pmu_irq_init, &irq, 1); - } else { + if (irq == -1) arc_pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT; - }
/* * perf parser doesn't really like '-' symbol in events name, so let's
From: Zhihao Cheng chengzhihao1@huawei.com
commit d005f8c6588efcfbe88099b6edafc6f58c84a9c1 upstream.
A detach hang is possible when a race occurs between the detach process and the ubi background thread. The following sequence outlines the race:
ubi thread: if (list_empty(&ubi->works)...
ubi detach: set_bit(KTHREAD_SHOULD_STOP, &kthread->flags) => by kthread_stop() wake_up_process() => ubi thread is still running, so 0 is returned
ubi thread: set_current_state(TASK_INTERRUPTIBLE) schedule() => ubi thread will never be scheduled again
ubi detach: wait_for_completion() => hung task!
To fix that, we need to check kthread_should_stop() after we set the task state, so that the ubi thread either sees the stop bit and exits, or has its task state reset to runnable and is not scheduled out indefinitely.
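The same lost-wakeup shape can be shown with a plain pthreads analogue (a user-space sketch only, it does not use the kernel primitives from the patch): the worker must re-check the stop flag after it has decided to sleep, while still holding the lock, just as the ubi thread now checks kthread_should_stop() after set_current_state().

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool should_stop;

static void *worker(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock);
        /* Re-check the flag before every sleep; skipping this check is the
         * user-space equivalent of the race fixed in ubi_thread(). */
        while (!should_stop)
                pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        printf("worker: saw stop request, exiting\n");
        return NULL;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, worker, NULL);

        pthread_mutex_lock(&lock);
        should_stop = true;             /* like kthread_stop() setting the flag */
        pthread_cond_signal(&cond);     /* like wake_up_process() */
        pthread_mutex_unlock(&lock);

        pthread_join(t, NULL);
        return 0;
}

Build with -pthread; if the worker slept without re-checking the flag, a stop request issued before the wait would be lost and the join would hang, which is the user-space counterpart of the detach hang.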
Signed-off-by: Zhihao Cheng chengzhihao1@huawei.com Cc: stable@vger.kernel.org Fixes: 801c135ce73d5df1ca ("UBI: Unsorted Block Images") Reported-by: syzbot+853639d0cb16c31c7a14@syzkaller.appspotmail.com Signed-off-by: Richard Weinberger richard@nod.at Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/ubi/wl.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
--- a/drivers/mtd/ubi/wl.c +++ b/drivers/mtd/ubi/wl.c @@ -1639,6 +1639,19 @@ int ubi_thread(void *u) !ubi->thread_enabled || ubi_dbg_is_bgt_disabled(ubi)) { set_current_state(TASK_INTERRUPTIBLE); spin_unlock(&ubi->wl_lock); + + /* + * Check kthread_should_stop() after we set the task + * state to guarantee that we either see the stop bit + * and exit or the task state is reset to runnable such + * that it's not scheduled out indefinitely and detects + * the stop bit at kthread_should_stop(). + */ + if (kthread_should_stop()) { + set_current_state(TASK_RUNNING); + break; + } + schedule(); continue; }
From: Krzysztof Kozlowski krzk@kernel.org
commit 7404840d87557c4092bf0272bce5e0354c774bf9 upstream.
Fix linkage error when CONFIG_BINFMT_ELF is selected but CONFIG_COREDUMP is not:
ia64-linux-ld: arch/ia64/kernel/elfcore.o: in function `elf_core_write_extra_phdrs': elfcore.c:(.text+0x172): undefined reference to `dump_emit' ia64-linux-ld: arch/ia64/kernel/elfcore.o: in function `elf_core_write_extra_data': elfcore.c:(.text+0x2b2): undefined reference to `dump_emit'
Fixes: 1fcccbac89f5 ("elf coredump: replace ELF_CORE_EXTRA_* macros by functions") Reported-by: kernel test robot lkp@intel.com Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Signed-off-by: Andrew Morton akpm@linux-foundation.org Cc: Tony Luck tony.luck@intel.com Cc: Fenghua Yu fenghua.yu@intel.com Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20200819064146.12529-1-krzk@kernel.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/ia64/kernel/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/ia64/kernel/Makefile +++ b/arch/ia64/kernel/Makefile @@ -41,7 +41,7 @@ obj-y += esi_stub.o # must be in kern endif obj-$(CONFIG_INTEL_IOMMU) += pci-dma.o
-obj-$(CONFIG_BINFMT_ELF) += elfcore.o +obj-$(CONFIG_ELF_CORE) += elfcore.o
# fp_emulate() expects f2-f5,f16-f31 to contain the user-level state. CFLAGS_traps.o += -mfixed-range=f2-f5,f16-f31
From: Bartosz Golaszewski bgolaszewski@baylibre.com
commit d3b14296da69adb7825022f3224ac6137eb30abf upstream.
The way the driver is implemented is buggy for the (admittedly unlikely) use case where there are two RTCs with one having an interrupt configured and the second not. This is caused by the fact that we use a global rtc_class_ops struct which we modify depending on whether the irq number is present or not.
Fix it by using two const ops structs, with and without alarm operations. While at it: not being able to request a configured interrupt is an error, so don't ignore it; bail out of probe() instead.
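The pattern, reduced to a stand-alone sketch (hypothetical ops struct, not the real rtc_class_ops): keep two const operation tables and select one per device, instead of mutating a single shared table at probe time.

#include <stdio.h>

struct rtc_like_ops {
        int (*read_time)(void);
        int (*read_alarm)(void);        /* NULL when alarms are unsupported */
};

static int read_time(void)  { return 0; }
static int read_alarm(void) { return 0; }

static const struct rtc_like_ops ops_default = {
        .read_time = read_time,
};

static const struct rtc_like_ops ops_alarm = {
        .read_time  = read_time,
        .read_alarm = read_alarm,
};

static const struct rtc_like_ops *probe_one(int irq)
{
        /* Each probed device gets its own const table; probing a second
         * device without an IRQ no longer strips alarm ops from the first. */
        return irq > 0 ? &ops_alarm : &ops_default;
}

int main(void)
{
        printf("with irq:    alarm ops %s\n",
               probe_one(5)->read_alarm ? "present" : "absent");
        printf("without irq: alarm ops %s\n",
               probe_one(0)->read_alarm ? "present" : "absent");
        return 0;
}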
Fixes: ed13d89b08e3 ("rtc: Add Epson RX8010SJ RTC driver") Signed-off-by: Bartosz Golaszewski bgolaszewski@baylibre.com Signed-off-by: Alexandre Belloni alexandre.belloni@bootlin.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200914154601.32245-2-brgl@bgdev.pl Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/rtc/rtc-rx8010.c | 24 +++++++++++++++++------- 1 file changed, 17 insertions(+), 7 deletions(-)
--- a/drivers/rtc/rtc-rx8010.c +++ b/drivers/rtc/rtc-rx8010.c @@ -407,16 +407,26 @@ static int rx8010_ioctl(struct device *d } }
-static struct rtc_class_ops rx8010_rtc_ops = { +static const struct rtc_class_ops rx8010_rtc_ops_default = { .read_time = rx8010_get_time, .set_time = rx8010_set_time, .ioctl = rx8010_ioctl, };
+static const struct rtc_class_ops rx8010_rtc_ops_alarm = { + .read_time = rx8010_get_time, + .set_time = rx8010_set_time, + .ioctl = rx8010_ioctl, + .read_alarm = rx8010_read_alarm, + .set_alarm = rx8010_set_alarm, + .alarm_irq_enable = rx8010_alarm_irq_enable, +}; + static int rx8010_probe(struct i2c_client *client, const struct i2c_device_id *id) { struct i2c_adapter *adapter = client->adapter; + const struct rtc_class_ops *rtc_ops; struct rx8010_data *rx8010; int err = 0;
@@ -447,16 +457,16 @@ static int rx8010_probe(struct i2c_clien
if (err) { dev_err(&client->dev, "unable to request IRQ\n"); - client->irq = 0; - } else { - rx8010_rtc_ops.read_alarm = rx8010_read_alarm; - rx8010_rtc_ops.set_alarm = rx8010_set_alarm; - rx8010_rtc_ops.alarm_irq_enable = rx8010_alarm_irq_enable; + return err; } + + rtc_ops = &rx8010_rtc_ops_alarm; + } else { + rtc_ops = &rx8010_rtc_ops_default; }
rx8010->rtc = devm_rtc_device_register(&client->dev, client->name, - &rx8010_rtc_ops, THIS_MODULE); + rtc_ops, THIS_MODULE);
if (IS_ERR(rx8010->rtc)) { dev_err(&client->dev, "unable to register the class device\n");
From: Krzysztof Kozlowski krzk@kernel.org
commit e50e4f0b85be308a01b830c5fbdffc657e1a6dd0 upstream.
If an interrupt comes late, during the probe error path or device removal (this can be triggered with CONFIG_DEBUG_SHIRQ), the interrupt handler i2c_imx_isr() will access registers with the clock disabled. This leads to an external abort on non-linefetch on the Toradex Colibri VF50 module (with Vybrid VF5xx):
Unhandled fault: external abort on non-linefetch (0x1008) at 0x8882d003 Internal error: : 1008 [#1] ARM Modules linked in: CPU: 0 PID: 1 Comm: swapper Not tainted 5.7.0 #607 Hardware name: Freescale Vybrid VF5xx/VF6xx (Device Tree) (i2c_imx_isr) from [<8017009c>] (free_irq+0x25c/0x3b0) (free_irq) from [<805844ec>] (release_nodes+0x178/0x284) (release_nodes) from [<80580030>] (really_probe+0x10c/0x348) (really_probe) from [<80580380>] (driver_probe_device+0x60/0x170) (driver_probe_device) from [<80580630>] (device_driver_attach+0x58/0x60) (device_driver_attach) from [<805806bc>] (__driver_attach+0x84/0xc0) (__driver_attach) from [<8057e228>] (bus_for_each_dev+0x68/0xb4) (bus_for_each_dev) from [<8057f3ec>] (bus_add_driver+0x144/0x1ec) (bus_add_driver) from [<80581320>] (driver_register+0x78/0x110) (driver_register) from [<8010213c>] (do_one_initcall+0xa8/0x2f4) (do_one_initcall) from [<80c0100c>] (kernel_init_freeable+0x178/0x1dc) (kernel_init_freeable) from [<80807048>] (kernel_init+0x8/0x110) (kernel_init) from [<80100114>] (ret_from_fork+0x14/0x20)
Additionally, the i2c_imx_isr() could wake up the wait queue (imx_i2c_struct->queue) before its initialization happens.
The resource-managed framework should not be used for interrupt handling, because the resource will be released too late - after disabling clocks. The interrupt handler is not prepared for such case.
Fixes: 1c4b6c3bcf30 ("i2c: imx: implement bus recovery") Cc: stable@vger.kernel.org Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Acked-by: Oleksij Rempel o.rempel@pengutronix.de Signed-off-by: Wolfram Sang wsa@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/i2c/busses/i2c-imx.c | 24 +++++++++++++----------- 1 file changed, 13 insertions(+), 11 deletions(-)
--- a/drivers/i2c/busses/i2c-imx.c +++ b/drivers/i2c/busses/i2c-imx.c @@ -1171,14 +1171,6 @@ static int i2c_imx_probe(struct platform return ret; }
- /* Request IRQ */ - ret = devm_request_irq(&pdev->dev, irq, i2c_imx_isr, IRQF_SHARED, - pdev->name, i2c_imx); - if (ret) { - dev_err(&pdev->dev, "can't claim irq %d\n", irq); - goto clk_disable; - } - /* Init queue */ init_waitqueue_head(&i2c_imx->queue);
@@ -1197,6 +1189,14 @@ static int i2c_imx_probe(struct platform if (ret < 0) goto rpm_disable;
+ /* Request IRQ */ + ret = request_threaded_irq(irq, i2c_imx_isr, NULL, IRQF_SHARED, + pdev->name, i2c_imx); + if (ret) { + dev_err(&pdev->dev, "can't claim irq %d\n", irq); + goto rpm_disable; + } + /* Set up clock divider */ i2c_imx->bitrate = I2C_MAX_STANDARD_MODE_FREQ; ret = of_property_read_u32(pdev->dev.of_node, @@ -1239,13 +1239,12 @@ static int i2c_imx_probe(struct platform
clk_notifier_unregister: clk_notifier_unregister(i2c_imx->clk, &i2c_imx->clk_change_nb); + free_irq(irq, i2c_imx); rpm_disable: pm_runtime_put_noidle(&pdev->dev); pm_runtime_disable(&pdev->dev); pm_runtime_set_suspended(&pdev->dev); pm_runtime_dont_use_autosuspend(&pdev->dev); - -clk_disable: clk_disable_unprepare(i2c_imx->clk); return ret; } @@ -1253,7 +1252,7 @@ clk_disable: static int i2c_imx_remove(struct platform_device *pdev) { struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev); - int ret; + int irq, ret;
ret = pm_runtime_get_sync(&pdev->dev); if (ret < 0) @@ -1273,6 +1272,9 @@ static int i2c_imx_remove(struct platfor imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2SR);
clk_notifier_unregister(i2c_imx->clk, &i2c_imx->clk_change_nb); + irq = platform_get_irq(pdev, 0); + if (irq >= 0) + free_irq(irq, i2c_imx); clk_disable_unprepare(i2c_imx->clk);
pm_runtime_put_noidle(&pdev->dev);
From: Madhav Chauhan madhav.chauhan@amd.com
commit c4aa8dff6091cc9536aeb255e544b0b4ba29faf4 upstream.
A 2MB area is reserved at the top of the VM address space; don't allow GEM VA mappings to land in it.
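A simplified stand-alone version of the range check added below (sizes, addresses and the va_in_range() name are made up for illustration): a requested mapping is rejected when it would reach into the reserved area at the top of the address space.

#include <stdint.h>
#include <stdio.h>

#define RESERVED_SIZE   (2ULL << 20)            /* 2MB reserved at the top */

static int va_in_range(uint64_t va, uint64_t map_size, uint64_t vm_size)
{
        uint64_t usable = vm_size - RESERVED_SIZE;

        /* Accept only mappings that end below the reserved region. */
        return va + map_size <= usable;
}

int main(void)
{
        uint64_t vm_size = 1ULL << 40;          /* pretend 1TB of VM space */

        printf("%d\n", va_in_range(0x100000, 0x200000, vm_size));              /* 1 */
        printf("%d\n", va_in_range(vm_size - (1ULL << 20), 0x200000, vm_size)); /* 0 */
        return 0;
}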
Suggested-by: Christian König christian.koenig@amd.com Signed-off-by: Madhav Chauhan madhav.chauhan@amd.com Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c @@ -596,6 +596,7 @@ int amdgpu_gem_va_ioctl(struct drm_devic struct ww_acquire_ctx ticket; struct list_head list, duplicates; uint64_t va_flags; + uint64_t vm_size; int r = 0;
if (args->va_address < AMDGPU_VA_RESERVED_SIZE) { @@ -616,6 +617,15 @@ int amdgpu_gem_va_ioctl(struct drm_devic
args->va_address &= AMDGPU_GMC_HOLE_MASK;
+ vm_size = adev->vm_manager.max_pfn * AMDGPU_GPU_PAGE_SIZE; + vm_size -= AMDGPU_VA_RESERVED_SIZE; + if (args->va_address + args->map_size > vm_size) { + dev_dbg(&dev->pdev->dev, + "va_address 0x%llx is in top reserved area 0x%llx\n", + args->va_address + args->map_size, vm_size); + return -EINVAL; + } + if ((args->flags & ~valid_flags) && (args->flags & ~prt_flags)) { dev_dbg(&dev->pdev->dev, "invalid flags combination 0x%08X\n", args->flags);
From: David Galiffi David.Galiffi@amd.com
commit 651111be24aa4c8b62c10f6fff51d9ad82411249 upstream.
[Why] A typo in the backlight refactor introduced a wrong register offset.
[How] Change SR(BIOS_SCRATCH_2) to NBIO_SR(BIOS_SCRATCH_2).
Signed-off-by: David Galiffi David.Galiffi@amd.com Reviewed-by: Anthony Koo Anthony.Koo@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.h +++ b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.h @@ -54,7 +54,7 @@ SR(BL_PWM_CNTL2), \ SR(BL_PWM_PERIOD_CNTL), \ SR(BL_PWM_GRP1_REG_LOCK), \ - SR(BIOS_SCRATCH_2) + NBIO_SR(BIOS_SCRATCH_2)
#define DCE_PANEL_CNTL_SF(reg_name, field_name, post_fix)\ .field_name = reg_name ## __ ## field_name ## post_fix
From: Wesley Chalmers Wesley.Chalmers@amd.com
commit 37b7cb10f07c1174522faafc1d51c6591b1501d4 upstream.
[WHY] When disabling DP video, the current REG_WAIT timeout of 50ms is too low for certain cases with very high VSYNC intervals.
[HOW] Increase the timeout to 102ms, so that refresh rates as low as 10Hz can be handled properly.
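A quick stand-alone calculation of the poll budget (the 10us poll interval comes from the comment in the hunk below; the refresh rates are examples): one frame at 10Hz is 100ms, which needs roughly 10000 polls of 10us, hence the roughly doubled timeout.

#include <stdio.h>

int main(void)
{
        const unsigned int poll_us = 10;
        const unsigned int rates_hz[] = { 24, 10 };

        for (unsigned int i = 0; i < 2; i++) {
                unsigned int frame_us = 1000000 / rates_hz[i];
                unsigned int retries = frame_us / poll_us;

                printf("%2u Hz: frame %6u us -> at least %5u polls of %u us\n",
                       rates_hz[i], frame_us, retries, poll_us);
        }
        return 0;
}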
Signed-off-by: Wesley Chalmers Wesley.Chalmers@amd.com Reviewed-by: Aric Cyr Aric.Cyr@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c @@ -896,10 +896,10 @@ void enc1_stream_encoder_dp_blank( */ REG_UPDATE(DP_VID_STREAM_CNTL, DP_VID_STREAM_DIS_DEFER, 2); /* Larger delay to wait until VBLANK - use max retry of - * 10us*5000=50ms. This covers 41.7ms of minimum 24 Hz mode + + * 10us*10200=102ms. This covers 100.0ms of minimum 10 Hz mode + * a little more because we may not trust delay accuracy. */ - max_retries = DP_BLANK_MAX_RETRY * 250; + max_retries = DP_BLANK_MAX_RETRY * 501;
/* disable DP stream */ REG_UPDATE(DP_VID_STREAM_CNTL, DP_VID_STREAM_ENABLE, 0);
From: Veerabadhran G vegopala@amd.com
commit 187561dd76531945126b15c9486fec7cfa5f0415 upstream.
Synchronize the ring usage for vcn1 and jpeg1 to work around a hardware bug.
Signed-off-by: Veerabadhran Gopalakrishnan veerabadhran.gopalakrishnan@amd.com Acked-by: Christian König christian.koenig@amd.com Reviewed-by: Christian König christian.koenig@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 2 ++ drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 1 + drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c | 24 ++++++++++++++++++++++-- drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c | 28 ++++++++++++++++++++++++---- drivers/gpu/drm/amd/amdgpu/vcn_v1_0.h | 3 ++- 5 files changed, 51 insertions(+), 7 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c @@ -68,6 +68,7 @@ int amdgpu_vcn_sw_init(struct amdgpu_dev
INIT_DELAYED_WORK(&adev->vcn.idle_work, amdgpu_vcn_idle_work_handler); mutex_init(&adev->vcn.vcn_pg_lock); + mutex_init(&adev->vcn.vcn1_jpeg1_workaround); atomic_set(&adev->vcn.total_submission_cnt, 0); for (i = 0; i < adev->vcn.num_vcn_inst; i++) atomic_set(&adev->vcn.inst[i].dpg_enc_submission_cnt, 0); @@ -237,6 +238,7 @@ int amdgpu_vcn_sw_fini(struct amdgpu_dev }
release_firmware(adev->vcn.fw); + mutex_destroy(&adev->vcn.vcn1_jpeg1_workaround); mutex_destroy(&adev->vcn.vcn_pg_lock);
return 0; --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h @@ -220,6 +220,7 @@ struct amdgpu_vcn { struct amdgpu_vcn_inst inst[AMDGPU_MAX_VCN_INSTANCES]; struct amdgpu_vcn_reg internal; struct mutex vcn_pg_lock; + struct mutex vcn1_jpeg1_workaround; atomic_t total_submission_cnt;
unsigned harvest_config; --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c @@ -33,6 +33,7 @@
static void jpeg_v1_0_set_dec_ring_funcs(struct amdgpu_device *adev); static void jpeg_v1_0_set_irq_funcs(struct amdgpu_device *adev); +static void jpeg_v1_0_ring_begin_use(struct amdgpu_ring *ring);
static void jpeg_v1_0_decode_ring_patch_wreg(struct amdgpu_ring *ring, uint32_t *ptr, uint32_t reg_offset, uint32_t val) { @@ -564,8 +565,8 @@ static const struct amdgpu_ring_funcs jp .insert_start = jpeg_v1_0_decode_ring_insert_start, .insert_end = jpeg_v1_0_decode_ring_insert_end, .pad_ib = amdgpu_ring_generic_pad_ib, - .begin_use = vcn_v1_0_ring_begin_use, - .end_use = amdgpu_vcn_ring_end_use, + .begin_use = jpeg_v1_0_ring_begin_use, + .end_use = vcn_v1_0_ring_end_use, .emit_wreg = jpeg_v1_0_decode_ring_emit_wreg, .emit_reg_wait = jpeg_v1_0_decode_ring_emit_reg_wait, .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper, @@ -586,3 +587,22 @@ static void jpeg_v1_0_set_irq_funcs(stru { adev->jpeg.inst->irq.funcs = &jpeg_v1_0_irq_funcs; } + +static void jpeg_v1_0_ring_begin_use(struct amdgpu_ring *ring) +{ + struct amdgpu_device *adev = ring->adev; + bool set_clocks = !cancel_delayed_work_sync(&adev->vcn.idle_work); + int cnt = 0; + + mutex_lock(&adev->vcn.vcn1_jpeg1_workaround); + + if (amdgpu_fence_wait_empty(&adev->vcn.inst->ring_dec)) + DRM_ERROR("JPEG dec: vcn dec ring may not be empty\n"); + + for (cnt = 0; cnt < adev->vcn.num_enc_rings; cnt++) { + if (amdgpu_fence_wait_empty(&adev->vcn.inst->ring_enc[cnt])) + DRM_ERROR("JPEG dec: vcn enc ring[%d] may not be empty\n", cnt); + } + + vcn_v1_0_set_pg_for_begin_use(ring, set_clocks); +} --- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c @@ -54,6 +54,7 @@ static int vcn_v1_0_pause_dpg_mode(struc int inst_idx, struct dpg_pause_state *new_state);
static void vcn_v1_0_idle_work_handler(struct work_struct *work); +static void vcn_v1_0_ring_begin_use(struct amdgpu_ring *ring);
/** * vcn_v1_0_early_init - set function pointers @@ -1804,11 +1805,24 @@ static void vcn_v1_0_idle_work_handler(s } }
-void vcn_v1_0_ring_begin_use(struct amdgpu_ring *ring) +static void vcn_v1_0_ring_begin_use(struct amdgpu_ring *ring) { - struct amdgpu_device *adev = ring->adev; + struct amdgpu_device *adev = ring->adev; bool set_clocks = !cancel_delayed_work_sync(&adev->vcn.idle_work);
+ mutex_lock(&adev->vcn.vcn1_jpeg1_workaround); + + if (amdgpu_fence_wait_empty(&ring->adev->jpeg.inst->ring_dec)) + DRM_ERROR("VCN dec: jpeg dec ring may not be empty\n"); + + vcn_v1_0_set_pg_for_begin_use(ring, set_clocks); + +} + +void vcn_v1_0_set_pg_for_begin_use(struct amdgpu_ring *ring, bool set_clocks) +{ + struct amdgpu_device *adev = ring->adev; + if (set_clocks) { amdgpu_gfx_off_ctrl(adev, false); if (adev->pm.dpm_enabled) @@ -1844,6 +1858,12 @@ void vcn_v1_0_ring_begin_use(struct amdg } }
+void vcn_v1_0_ring_end_use(struct amdgpu_ring *ring) +{ + schedule_delayed_work(&ring->adev->vcn.idle_work, VCN_IDLE_TIMEOUT); + mutex_unlock(&ring->adev->vcn.vcn1_jpeg1_workaround); +} + static const struct amd_ip_funcs vcn_v1_0_ip_funcs = { .name = "vcn_v1_0", .early_init = vcn_v1_0_early_init, @@ -1891,7 +1911,7 @@ static const struct amdgpu_ring_funcs vc .insert_end = vcn_v1_0_dec_ring_insert_end, .pad_ib = amdgpu_ring_generic_pad_ib, .begin_use = vcn_v1_0_ring_begin_use, - .end_use = amdgpu_vcn_ring_end_use, + .end_use = vcn_v1_0_ring_end_use, .emit_wreg = vcn_v1_0_dec_ring_emit_wreg, .emit_reg_wait = vcn_v1_0_dec_ring_emit_reg_wait, .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper, @@ -1923,7 +1943,7 @@ static const struct amdgpu_ring_funcs vc .insert_end = vcn_v1_0_enc_ring_insert_end, .pad_ib = amdgpu_ring_generic_pad_ib, .begin_use = vcn_v1_0_ring_begin_use, - .end_use = amdgpu_vcn_ring_end_use, + .end_use = vcn_v1_0_ring_end_use, .emit_wreg = vcn_v1_0_enc_ring_emit_wreg, .emit_reg_wait = vcn_v1_0_enc_ring_emit_reg_wait, .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper, --- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.h +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.h @@ -24,7 +24,8 @@ #ifndef __VCN_V1_0_H__ #define __VCN_V1_0_H__
-void vcn_v1_0_ring_begin_use(struct amdgpu_ring *ring); +void vcn_v1_0_ring_end_use(struct amdgpu_ring *ring); +void vcn_v1_0_set_pg_for_begin_use(struct amdgpu_ring *ring, bool set_clocks);
extern const struct amdgpu_ip_block_version vcn_v1_0_ip_block;
From: Likun Gao Likun.Gao@amd.com
commit 0d142232d9436acab3578ee995472f58adcbf201 upstream.
Update golden setting for sienna_cichlid.
Signed-off-by: Likun Gao Likun.Gao@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org # 5.9.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c @@ -3091,6 +3091,7 @@ static const struct soc15_reg_golden gol SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_ADDR_MATCH_MASK, 0xffffffff, 0xffffffcf), SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_CM_CTRL1, 0xff8fff0f, 0x580f1008), SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_CTRL3, 0xf7ffffff, 0x10f80988), + SOC15_REG_GOLDEN_VALUE(GC, 0, mmLDS_CONFIG, 0x00000020, 0x00000020), SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_CL_ENHANCE, 0xf17fffff, 0x01200007), SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_BINNER_TIMEOUT_COUNTER, 0xffffffff, 0x00000800), SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_ENHANCE_2, 0xffffffbf, 0x00000820),
From: Evan Quan evan.quan@amd.com
commit 207ac684792560acdb9e06f9d707ebf63c84b0e0 upstream.
The current code wrongly treats all cases as job == NULL.
Signed-off-by: Evan Quan evan.quan@amd.com Reviewed-and-tested-by: Jane Jian Jane.Jian@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -4374,7 +4374,7 @@ int amdgpu_device_gpu_recover(struct amd retry: /* Rest of adevs pre asic reset from XGMI hive. */ list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) { r = amdgpu_device_pre_asic_reset(tmp_adev, - NULL, + (tmp_adev == adev) ? job : NULL, &need_full_reset); /*TODO Should we stop ?*/ if (r) {
From: Jay Cornwall jay.cornwall@amd.com
commit d56b1980d7efe9ef08469e856fc0703d0cef65e4 upstream.
A prefetch setting of 0 causes an instruction fetch stall at a cache line boundary under some conditions on Navi10. A non-zero prefetch is the preferred default in any case.
Fixes soft hang in Luxmark.
Signed-off-by: Jay Cornwall jay.cornwall@amd.com Reviewed-by: Felix Kuehling Felix.Kuehling@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c @@ -58,8 +58,9 @@ static int update_qpd_v10(struct device_ /* check if sh_mem_config register already configured */ if (qpd->sh_mem_config == 0) { qpd->sh_mem_config = - SH_MEM_ALIGNMENT_MODE_UNALIGNED << - SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT; + (SH_MEM_ALIGNMENT_MODE_UNALIGNED << + SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) | + (3 << SH_MEM_CONFIG__INITIAL_INST_PREFETCH__SHIFT); #if 0 /* TODO: * This shouldn't be an issue with Navi10. Verify.
From: Andrey Grodzovsky andrey.grodzovsky@amd.com
commit 5dff80bdce9e385af5716ed083f9e33e814484ab upstream.
On connector destruction, call drm_dp_mst_topology_mgr_destroy to release the resources allocated in drm_dp_mst_topology_mgr_init. Do it only if the MST manager was initialized before; otherwise a crash is seen on driver unload/device unplug.
Reviewed-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Signed-off-by: Andrey Grodzovsky andrey.grodzovsky@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -4882,6 +4882,13 @@ static void amdgpu_dm_connector_destroy( struct amdgpu_device *adev = connector->dev->dev_private; struct amdgpu_display_manager *dm = &adev->dm;
+ /* + * Call only if mst_mgr was iniitalized before since it's not done + * for all connector types. + */ + if (aconnector->mst_mgr.dev) + drm_dp_mst_topology_mgr_destroy(&aconnector->mst_mgr); + #if defined(CONFIG_BACKLIGHT_CLASS_DEVICE) ||\ defined(CONFIG_BACKLIGHT_CLASS_DEVICE_MODULE)
From: Likun Gao Likun.Gao@amd.com
commit 274c240c760ed4647ddae1f1b994e0dd3f71cbb1 upstream.
Add a function for sienna_cichlid to force the PBB workload mode to zero by checking whether any SE has been harvested.
Signed-off-by: Likun Gao Likun.Gao@amd.com Reviewed-by: Hawking Zhang Hawking.Zhang@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org # 5.9.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 62 +++++++++++++++++++++++++++++++++ 1 file changed, 62 insertions(+)
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c @@ -112,6 +112,22 @@ #define mmCP_HYP_ME_UCODE_DATA 0x5817 #define mmCP_HYP_ME_UCODE_DATA_BASE_IDX 1
+//CC_GC_SA_UNIT_DISABLE +#define mmCC_GC_SA_UNIT_DISABLE 0x0fe9 +#define mmCC_GC_SA_UNIT_DISABLE_BASE_IDX 0 +#define CC_GC_SA_UNIT_DISABLE__SA_DISABLE__SHIFT 0x8 +#define CC_GC_SA_UNIT_DISABLE__SA_DISABLE_MASK 0x0000FF00L +//GC_USER_SA_UNIT_DISABLE +#define mmGC_USER_SA_UNIT_DISABLE 0x0fea +#define mmGC_USER_SA_UNIT_DISABLE_BASE_IDX 0 +#define GC_USER_SA_UNIT_DISABLE__SA_DISABLE__SHIFT 0x8 +#define GC_USER_SA_UNIT_DISABLE__SA_DISABLE_MASK 0x0000FF00L +//PA_SC_ENHANCE_3 +#define mmPA_SC_ENHANCE_3 0x1085 +#define mmPA_SC_ENHANCE_3_BASE_IDX 0 +#define PA_SC_ENHANCE_3__FORCE_PBB_WORKLOAD_MODE_TO_ZERO__SHIFT 0x3 +#define PA_SC_ENHANCE_3__FORCE_PBB_WORKLOAD_MODE_TO_ZERO_MASK 0x00000008L + MODULE_FIRMWARE("amdgpu/navi10_ce.bin"); MODULE_FIRMWARE("amdgpu/navi10_pfp.bin"); MODULE_FIRMWARE("amdgpu/navi10_me.bin"); @@ -3189,6 +3205,8 @@ static int gfx_v10_0_wait_for_rlc_autolo static void gfx_v10_0_ring_emit_ce_meta(struct amdgpu_ring *ring, bool resume); static void gfx_v10_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume); static void gfx_v10_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, bool start, bool secure); +static u32 gfx_v10_3_get_disabled_sa(struct amdgpu_device *adev); +static void gfx_v10_3_program_pbb_mode(struct amdgpu_device *adev);
static void gfx10_kiq_set_resources(struct amdgpu_ring *kiq_ring, uint64_t queue_mask) { @@ -6929,6 +6947,9 @@ static int gfx_v10_0_hw_init(void *handl if (r) return r;
+ if (adev->asic_type == CHIP_SIENNA_CICHLID) + gfx_v10_3_program_pbb_mode(adev); + return r; }
@@ -8774,6 +8795,47 @@ static int gfx_v10_0_get_cu_info(struct return 0; }
+static u32 gfx_v10_3_get_disabled_sa(struct amdgpu_device *adev) +{ + uint32_t efuse_setting, vbios_setting, disabled_sa, max_sa_mask; + + efuse_setting = RREG32_SOC15(GC, 0, mmCC_GC_SA_UNIT_DISABLE); + efuse_setting &= CC_GC_SA_UNIT_DISABLE__SA_DISABLE_MASK; + efuse_setting >>= CC_GC_SA_UNIT_DISABLE__SA_DISABLE__SHIFT; + + vbios_setting = RREG32_SOC15(GC, 0, mmGC_USER_SA_UNIT_DISABLE); + vbios_setting &= GC_USER_SA_UNIT_DISABLE__SA_DISABLE_MASK; + vbios_setting >>= GC_USER_SA_UNIT_DISABLE__SA_DISABLE__SHIFT; + + max_sa_mask = amdgpu_gfx_create_bitmask(adev->gfx.config.max_sh_per_se * + adev->gfx.config.max_shader_engines); + disabled_sa = efuse_setting | vbios_setting; + disabled_sa &= max_sa_mask; + + return disabled_sa; +} + +static void gfx_v10_3_program_pbb_mode(struct amdgpu_device *adev) +{ + uint32_t max_sa_per_se, max_sa_per_se_mask, max_shader_engines; + uint32_t disabled_sa_mask, se_index, disabled_sa_per_se; + + disabled_sa_mask = gfx_v10_3_get_disabled_sa(adev); + + max_sa_per_se = adev->gfx.config.max_sh_per_se; + max_sa_per_se_mask = (1 << max_sa_per_se) - 1; + max_shader_engines = adev->gfx.config.max_shader_engines; + + for (se_index = 0; max_shader_engines > se_index; se_index++) { + disabled_sa_per_se = disabled_sa_mask >> (se_index * max_sa_per_se); + disabled_sa_per_se &= max_sa_per_se_mask; + if (disabled_sa_per_se == max_sa_per_se_mask) { + WREG32_FIELD15(GC, 0, PA_SC_ENHANCE_3, FORCE_PBB_WORKLOAD_MODE_TO_ZERO, 1); + break; + } + } +} + const struct amdgpu_ip_block_version gfx_v10_0_ip_block = { .type = AMD_IP_BLOCK_TYPE_GFX,
From: Christian König christian.koenig@amd.com
commit 55bb919be4e4973cd037a04f527ecc6686800437 upstream.
Ideally this should be a multiple of the VM block size. 2MB should at least fit for Vega/Navi.
Signed-off-by: Christian König christian.koenig@amd.com Reviewed-by: Madhav Chauhan madhav.chauhan@amd.com Reviewed-by: Felix Kuehling Felix.Kuehling@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h @@ -112,8 +112,8 @@ struct amdgpu_bo_list_entry; #define AMDGPU_MMHUB_0 1 #define AMDGPU_MMHUB_1 2
-/* hardcode that limit for now */ -#define AMDGPU_VA_RESERVED_SIZE (1ULL << 20) +/* Reserve 2MB at top/bottom of address space for kernel use */ +#define AMDGPU_VA_RESERVED_SIZE (2ULL << 20)
/* max vmids dedicated for process */ #define AMDGPU_VM_MAX_RESERVED_VMID 1
From: Takashi Iwai tiwai@suse.de
commit 8b7dc1fe1a5c1093551f6cd7dfbb941bd9081c2e upstream.
ASSERT_CRITICAL() invokes kgdb_breakpoint() whenever either CONFIG_KGDB or CONFIG_HAVE_KGDB is set. This, however, may lead to a kernel panic when no kgdb/kdb debugger is actually attached, since the kgdb_breakpoint() call issues an INT3. That is nothing but a nasty surprise for normal end users.
To avoid this pitfall, make the kgdb_breakpoint() call only when CONFIG_DEBUG_KERNEL_DC is set.
https://bugzilla.opensuse.org/show_bug.cgi?id=1177973 Cc: stable@vger.kernel.org Acked-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/display/dc/os_types.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/display/dc/os_types.h +++ b/drivers/gpu/drm/amd/display/dc/os_types.h @@ -90,7 +90,7 @@ * general debug capabilities * */ -#if defined(CONFIG_HAVE_KGDB) || defined(CONFIG_KGDB) +#if defined(CONFIG_DEBUG_KERNEL_DC) && (defined(CONFIG_HAVE_KGDB) || defined(CONFIG_KGDB)) #define ASSERT_CRITICAL(expr) do { \ if (WARN_ON(!(expr))) { \ kgdb_breakpoint(); \
From: Takashi Iwai tiwai@suse.de
commit 920bb38c518408fa2600eaefa0af9e82cf48f166 upstream.
Currently both error code paths handled in dal_gpio_open_ex() issue ASSERT_CRITICAL(), and this leads to a kernel panic unnecessarily if CONFIG_KGDB is enabled. Since both are basically non-critical errors that can be recovered from, drop those assert calls and use the safer BREAK_TO_DEBUGGER() instead, which still allows debugging.
BugLink: https://bugzilla.opensuse.org/show_bug.cgi?id=1177973 Cc: stable@vger.kernel.org Acked-by: Alex Deucher alexander.deucher@amd.com Reviewed-by: Nicholas Kazlauskas nicholas.kazlauskas@amd.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/display/dc/gpio/gpio_base.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/amd/display/dc/gpio/gpio_base.c +++ b/drivers/gpu/drm/amd/display/dc/gpio/gpio_base.c @@ -63,13 +63,13 @@ enum gpio_result dal_gpio_open_ex( enum gpio_mode mode) { if (gpio->pin) { - ASSERT_CRITICAL(false); + BREAK_TO_DEBUGGER(); return GPIO_RESULT_ALREADY_OPENED; }
// No action if allocation failed during gpio construct if (!gpio->hw_container.ddc) { - ASSERT_CRITICAL(false); + BREAK_TO_DEBUGGER(); return GPIO_RESULT_NON_SPECIFIC_ERROR; } gpio->mode = mode;
From: Matthew Wilcox (Oracle) willy@infradead.org
commit c403c3a2fbe24d4ed33e10cabad048583ebd4edf upstream.
On 32-bit systems, this shift will overflow for files larger than 4GB.
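As a minimal userspace sketch of the failure mode (illustrative only, not the ceph code; assumes 4K pages, and the sample offset is made up): vmf->pgoff is an unsigned long, which is 32 bits on a 32-bit kernel, so the shift wraps before the result is ever widened to loff_t. Casting first, as the patch does, keeps the arithmetic in 64 bits.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12

int main(void)
{
	/* uint32_t stands in for a 32-bit unsigned long page offset */
	uint32_t pgoff = 0x100001;                   /* a page just past 4GB */
	int64_t bad  = pgoff << PAGE_SHIFT;          /* shift done in 32 bits, wraps */
	int64_t good = (int64_t)pgoff << PAGE_SHIFT; /* widen first, like the fix */

	printf("bad  = %lld\n", (long long)bad);     /* prints 4096 */
	printf("good = %lld\n", (long long)good);    /* prints 4294971392 */
	return 0;
}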
Cc: stable@vger.kernel.org Fixes: 61f68816211e ("ceph: check caps in filemap_fault and page_mkwrite") Signed-off-by: Matthew Wilcox (Oracle) willy@infradead.org Reviewed-by: Jeff Layton jlayton@kernel.org Signed-off-by: Ilya Dryomov idryomov@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ceph/addr.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/ceph/addr.c +++ b/fs/ceph/addr.c @@ -1522,7 +1522,7 @@ static vm_fault_t ceph_filemap_fault(str struct ceph_inode_info *ci = ceph_inode(inode); struct ceph_file_info *fi = vma->vm_file->private_data; struct page *pinned_page = NULL; - loff_t off = vmf->pgoff << PAGE_SHIFT; + loff_t off = (loff_t)vmf->pgoff << PAGE_SHIFT; int want, got, err; sigset_t oldset; vm_fault_t ret = VM_FAULT_SIGBUS;
From: Ilya Dryomov idryomov@gmail.com
commit 28e1581c3b4ea5f98530064a103c6217bedeea73 upstream.
con->out_msg must be cleared on Policy::stateful_server (!CEPH_MSG_CONNECT_LOSSY) faults. Not doing so botches the reconnection attempt, because after writing the banner the messenger moves on to writing the data section of that message (either from where it got interrupted by the connection reset or from the beginning) instead of writing struct ceph_msg_connect. This results in a bizarre error message because the server sends CEPH_MSGR_TAG_BADPROTOVER but we think we wrote struct ceph_msg_connect:
libceph: mds0 (1)172.21.15.45:6828 socket error on write
ceph: mds0 reconnect start
libceph: mds0 (1)172.21.15.45:6829 socket closed (con state OPEN)
libceph: mds0 (1)172.21.15.45:6829 protocol version mismatch, my 32 != server's 32
libceph: mds0 (1)172.21.15.45:6829 protocol version mismatch
AFAICT this bug goes back to the dawn of the kernel client. The reason it survived for so long is that only MDS sessions are stateful and only two MDS messages have a data section: CEPH_MSG_CLIENT_RECONNECT (always, but reconnecting is rare) and CEPH_MSG_CLIENT_REQUEST (only when xattrs are involved). The connection has to get reset precisely when such a message is being sent -- in this case it was the former.
Cc: stable@vger.kernel.org Link: https://tracker.ceph.com/issues/47723 Signed-off-by: Ilya Dryomov idryomov@gmail.com Reviewed-by: Jeff Layton jlayton@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- net/ceph/messenger.c | 5 +++++ 1 file changed, 5 insertions(+)
--- a/net/ceph/messenger.c +++ b/net/ceph/messenger.c @@ -2998,6 +2998,11 @@ static void con_fault(struct ceph_connec ceph_msg_put(con->in_msg); con->in_msg = NULL; } + if (con->out_msg) { + BUG_ON(con->out_msg->con != con); + ceph_msg_put(con->out_msg); + con->out_msg = NULL; + }
/* Requeue anything that hasn't been acked */ list_splice_init(&con->out_sent, &con->out_queue);
From: Matthew Wilcox (Oracle) willy@infradead.org
commit f5f7ab168b9a60e12a4b8f2bb6fcc91321dc23c1 upstream.
On 32-bit systems, this multiplication will overflow for files larger than 4GB.
Link: http://lkml.kernel.org/r/20201004180428.14494-2-willy@infradead.org Cc: stable@vger.kernel.org Fixes: fb89b45cdfdc ("9P: introduction of a new cache=mmap model.") Signed-off-by: Matthew Wilcox (Oracle) willy@infradead.org Signed-off-by: Dominique Martinet asmadeus@codewreck.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/9p/vfs_file.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/fs/9p/vfs_file.c +++ b/fs/9p/vfs_file.c @@ -612,9 +612,9 @@ static void v9fs_mmap_vm_close(struct vm struct writeback_control wbc = { .nr_to_write = LONG_MAX, .sync_mode = WB_SYNC_ALL, - .range_start = vma->vm_pgoff * PAGE_SIZE, + .range_start = (loff_t)vma->vm_pgoff * PAGE_SIZE, /* absolute end, byte at end included */ - .range_end = vma->vm_pgoff * PAGE_SIZE + + .range_end = (loff_t)vma->vm_pgoff * PAGE_SIZE + (vma->vm_end - vma->vm_start - 1), };
From: Artur Molchanov arturmolchanov@gmail.com
commit c09f56b8f68d4d536bff518227aea323b835b2ce upstream.
Fix the return value for the sysctl sunrpc.transports key. Return an error code from the sysctl proc_handler function proc_do_xprt() instead of the number of bytes written; otherwise sysctl returns random garbage for this key.
Since v1: - Handle a negative return value from memory_read_from_buffer() as an error
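For reference, a minimal sketch of the ->proc_handler contract being relied on here (hypothetical handler, not the sunrpc code): the handler reports the produced length through *lenp and returns 0 or a negative errno, never the byte count itself.

static int example_proc_handler(struct ctl_table *table, int write,
				void *buffer, size_t *lenp, loff_t *ppos)
{
	char tmp[32];
	ssize_t len;

	if (write) {			/* nothing to accept for this read-only key */
		*lenp = 0;
		return 0;
	}

	len = snprintf(tmp, sizeof(tmp), "example\n");
	len = memory_read_from_buffer(buffer, *lenp, ppos, tmp, len);
	if (len < 0) {
		*lenp = 0;
		return len;		/* negative errno, not a length */
	}

	*lenp = len;			/* bytes produced are reported here */
	return 0;			/* success is 0 */
}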
Signed-off-by: Artur Molchanov arturmolchanov@gmail.com Cc: stable@vger.kernel.org Signed-off-by: J. Bruce Fields bfields@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- net/sunrpc/sysctl.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
--- a/net/sunrpc/sysctl.c +++ b/net/sunrpc/sysctl.c @@ -70,7 +70,13 @@ static int proc_do_xprt(struct ctl_table return 0; } len = svc_print_xprts(tmpbuf, sizeof(tmpbuf)); - return memory_read_from_buffer(buffer, *lenp, ppos, tmpbuf, len); + *lenp = memory_read_from_buffer(buffer, *lenp, ppos, tmpbuf, len); + + if (*lenp < 0) { + *lenp = 0; + return -EINVAL; + } + return 0; }
static int
From: Ansuel Smith ansuelsmth@gmail.com
commit d3d4d028afb785e52c55024d779089654f8302e7 upstream.
The QSDK U-Boot can incorrectly leave the PCIe interface in an undefined state if the bootm command is used instead of bootipq, because bootm does not deinitialize PCIe. Reset the PCIe interface before init anyway to work around this U-Boot bug.
Link: https://lore.kernel.org/r/20200901124955.137-1-ansuelsmth@gmail.com Fixes: 82a823833f4e ("PCI: qcom: Add Qualcomm PCIe controller driver") Signed-off-by: Ansuel Smith ansuelsmth@gmail.com Signed-off-by: Lorenzo Pieralisi lorenzo.pieralisi@arm.com Reviewed-by: Bjorn Andersson bjorn.andersson@linaro.org Cc: stable@vger.kernel.org # v4.19+ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/controller/dwc/pcie-qcom.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
--- a/drivers/pci/controller/dwc/pcie-qcom.c +++ b/drivers/pci/controller/dwc/pcie-qcom.c @@ -302,6 +302,9 @@ static void qcom_pcie_deinit_2_1_0(struc reset_control_assert(res->por_reset); reset_control_assert(res->ext_reset); reset_control_assert(res->phy_reset); + + writel(1, pcie->parf + PCIE20_PARF_PHY_CTRL); + regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); }
@@ -314,6 +317,16 @@ static int qcom_pcie_init_2_1_0(struct q u32 val; int ret;
+ /* reset the PCIe interface as uboot can leave it undefined state */ + reset_control_assert(res->pci_reset); + reset_control_assert(res->axi_reset); + reset_control_assert(res->ahb_reset); + reset_control_assert(res->por_reset); + reset_control_assert(res->ext_reset); + reset_control_assert(res->phy_reset); + + writel(1, pcie->parf + PCIE20_PARF_PHY_CTRL); + ret = regulator_bulk_enable(ARRAY_SIZE(res->supplies), res->supplies); if (ret < 0) { dev_err(dev, "cannot enable regulators\n");
From: Qiujun Huang hqjagain@gmail.com
commit 0a1754b2a97efa644aa6e84d1db5b17c42251483 upstream.
We don't need to check the new buffer size, and the return value had confused resize_buffer_duplicate_size().

...
	ret = ring_buffer_resize(trace_buf->buffer,
			per_cpu_ptr(size_buf->data,cpu_id)->entries, cpu_id);
	if (ret == 0)
		per_cpu_ptr(trace_buf->data, cpu_id)->entries =
			per_cpu_ptr(size_buf->data, cpu_id)->entries;
...
Link: https://lkml.kernel.org/r/20201019142242.11560-1-hqjagain@gmail.com
Cc: stable@vger.kernel.org Fixes: d60da506cbeb3 ("tracing: Add a resize function to make one buffer equivalent to another buffer") Signed-off-by: Qiujun Huang hqjagain@gmail.com Signed-off-by: Steven Rostedt (VMware) rostedt@goodmis.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/trace/ring_buffer.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/kernel/trace/ring_buffer.c +++ b/kernel/trace/ring_buffer.c @@ -1952,18 +1952,18 @@ int ring_buffer_resize(struct trace_buff { struct ring_buffer_per_cpu *cpu_buffer; unsigned long nr_pages; - int cpu, err = 0; + int cpu, err;
/* * Always succeed at resizing a non-existent buffer: */ if (!buffer) - return size; + return 0;
/* Make sure the requested buffer exists */ if (cpu_id != RING_BUFFER_ALL_CPUS && !cpumask_test_cpu(cpu_id, buffer->cpumask)) - return size; + return 0;
nr_pages = DIV_ROUND_UP(size, BUF_PAGE_SIZE);
@@ -2119,7 +2119,7 @@ int ring_buffer_resize(struct trace_buff }
mutex_unlock(&buffer->mutex); - return size; + return 0;
out_err: for_each_buffer_cpu(buffer, cpu) {
From: Mel Gorman mgorman@techsingularity.net
commit 75af76d0a34e048651a6af311781d7206b6964c7 upstream.
e6d4f08a6776 ("intel_idle: Use ACPI _CST on server systems") avoids enabling c-states that have been disabled by the platform with the exception of C1E.
Unfortunately, BIOS implementations are not always consistent in terms of how capabilities are advertised and control cannot always be handed over. If control cannot be handed over then intel_idle reports that "ACPI _CST not found or not usable" but does not clear acpi_state_table.count meaning the information is still partially used.
This patch ignores ACPI information if CST control cannot be requested from the platform. This was only observed on a number of Haswell platforms that had identical CPUs but not identical BIOS versions. While this problem may be rare overall, 24 separate test cases bisected to this specific commit across 4 separate test machines and is worth addressing. If the situation occurs, the kernel behaves as it did before commit e6d4f08a6776 and uses any c-states that are discovered.
The affected test cases were all ones that involved a small number of processes -- exec microbenchmark, pipe microbenchmark, git test suite, netperf, tbench with one client and system call microbenchmark. Each case benefits from being able to use turboboost which is prevented if the lower c-states are unavailable. This may mask real regressions specific to older hardware so it is worth addressing.
C-state status before and after the patch
5.9.0-vanilla            POLL latency:0   disabled:0 default:enabled
5.9.0-vanilla            C1   latency:2   disabled:0 default:enabled
5.9.0-vanilla            C1E  latency:10  disabled:0 default:enabled
5.9.0-vanilla            C3   latency:33  disabled:1 default:disabled
5.9.0-vanilla            C6   latency:133 disabled:1 default:disabled
5.9.0-ignore-cst-v1r1    POLL latency:0   disabled:0 default:enabled
5.9.0-ignore-cst-v1r1    C1   latency:2   disabled:0 default:enabled
5.9.0-ignore-cst-v1r1    C1E  latency:10  disabled:0 default:enabled
5.9.0-ignore-cst-v1r1    C3   latency:33  disabled:0 default:enabled
5.9.0-ignore-cst-v1r1    C6   latency:133 disabled:0 default:enabled
Patch enables C3/C6.
Netperf UDP_STREAM
netperf-udp
                                5.5.0                  5.9.0
                              vanilla        ignore-cst-v1r1
Hmean  send-64       193.41 (   0.00%)      226.54 *  17.13%*
Hmean  send-128      392.16 (   0.00%)      450.54 *  14.89%*
Hmean  send-256      769.94 (   0.00%)      881.85 *  14.53%*
Hmean  send-1024    2994.21 (   0.00%)     3468.95 *  15.85%*
Hmean  send-2048    5725.60 (   0.00%)     6628.99 *  15.78%*
Hmean  send-3312    8468.36 (   0.00%)    10288.02 *  21.49%*
Hmean  send-4096   10135.46 (   0.00%)    12387.57 *  22.22%*
Hmean  send-8192   17142.07 (   0.00%)    19748.11 *  15.20%*
Hmean  send-16384  28539.71 (   0.00%)    30084.45 *   5.41%*
Fixes: e6d4f08a6776 ("intel_idle: Use ACPI _CST on server systems") Signed-off-by: Mel Gorman mgorman@techsingularity.net Cc: 5.6+ stable@vger.kernel.org # 5.6+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/idle/intel_idle.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
--- a/drivers/idle/intel_idle.c +++ b/drivers/idle/intel_idle.c @@ -1212,14 +1212,13 @@ static bool __init intel_idle_acpi_cst_e if (!intel_idle_cst_usable()) continue;
- if (!acpi_processor_claim_cst_control()) { - acpi_state_table.count = 0; - return false; - } + if (!acpi_processor_claim_cst_control()) + break;
return true; }
+ acpi_state_table.count = 0; pr_debug("ACPI _CST not found or not usable\n"); return false; }
From: Chen Yu yu.c.chen@intel.com
commit 4e0ba5577dba686f96c1c10ef4166380667fdec7 upstream.
Currently the intel_idle driver gets the C-state information from ACPI _CST if the processor model is not recognized by it. However, the C-states in _CST start at index 1, which is different from the indexing of the intel_idle driver's internal C-state table.
While intel_idle_max_cstate_reached() was previously introduced to deal with the intel_idle driver's internal C-state table, re-using this function directly on _CST is incorrect.
Fix this by subtracting 1 from the index when checking max_cstate in the _CST case.
For example, append intel_idle.max_cstate=1 in boot command line.

Before the patch:

 grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
 POLL

After the patch:

 grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
 /sys/devices/system/cpu/cpu0/cpuidle/state0/name:POLL
 /sys/devices/system/cpu/cpu0/cpuidle/state1/name:C1_ACPI
Fixes: 18734958e9bf ("intel_idle: Use ACPI _CST for processor models without C-state tables") Reported-by: Pengfei Xu pengfei.xu@intel.com Cc: 5.6+ stable@vger.kernel.org # 5.6+ Signed-off-by: Chen Yu yu.c.chen@intel.com [ rjw: Changelog edits ] Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/idle/intel_idle.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/idle/intel_idle.c +++ b/drivers/idle/intel_idle.c @@ -1235,7 +1235,7 @@ static void __init intel_idle_init_cstat struct acpi_processor_cx *cx; struct cpuidle_state *state;
- if (intel_idle_max_cstate_reached(cstate)) + if (intel_idle_max_cstate_reached(cstate - 1)) break;
cx = &acpi_state_table.states[cstate];
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
commit db865272d9c4687520dc29f77e701a1b2669872f upstream.
Commit 33aa46f252c7 ("cpufreq: intel_pstate: Use passive mode by default without HWP") was meant to cause intel_pstate to be used in the passive mode with the schedutil governor on top of it, but it missed the case in which either "ondemand" or "conservative" was selected as the default governor in the existing kernel config, in which case the previous old governor configuration would be used, causing the default legacy governor to be used on top of intel_pstate instead of schedutil.
Address this by preventing "ondemand" and "conservative" from being configured as the default cpufreq governor in the case when schedutil is the default choice for the default governor setting.
[Note that the default cpufreq governor can still be set via the kernel command line if need be and that choice is not limited, so if anyone really wants to use one of the legacy governors by default, it can be achieved this way.]
Fixes: 33aa46f252c7 ("cpufreq: intel_pstate: Use passive mode by default without HWP") Reported-by: Julia Lawall julia.lawall@inria.fr Cc: 5.8+ stable@vger.kernel.org # 5.8+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Acked-by: Viresh Kumar viresh.kumar@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/cpufreq/Kconfig | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/cpufreq/Kconfig +++ b/drivers/cpufreq/Kconfig @@ -71,6 +71,7 @@ config CPU_FREQ_DEFAULT_GOV_USERSPACE
config CPU_FREQ_DEFAULT_GOV_ONDEMAND bool "ondemand" + depends on !(X86_INTEL_PSTATE && SMP) select CPU_FREQ_GOV_ONDEMAND select CPU_FREQ_GOV_PERFORMANCE help @@ -83,6 +84,7 @@ config CPU_FREQ_DEFAULT_GOV_ONDEMAND
config CPU_FREQ_DEFAULT_GOV_CONSERVATIVE bool "conservative" + depends on !(X86_INTEL_PSTATE && SMP) select CPU_FREQ_GOV_CONSERVATIVE select CPU_FREQ_GOV_PERFORMANCE help
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
commit 1c534352f47fd83eb08075ac2474f707e74bf7f7 upstream.
Generally, a cpufreq driver may need to update some internal upper and lower frequency boundaries on policy max and min changes, respectively, but currently this does not work if the target frequency does not change along with the policy limit.
Namely, if the target frequency does not change along with the policy min or max, the "target_freq == policy->cur" check in __cpufreq_driver_target() prevents driver callbacks from being invoked and they do not even have a chance to update the corresponding internal boundary.
This particularly affects the "powersave" and "performance" governors that always set the target frequency to one of the policy limits and it never changes when the other limit is updated.
To allow the cpufreq drivers that need to update internal frequency boundaries on policy limit changes to avoid this issue, introduce a new driver flag, CPUFREQ_NEED_UPDATE_LIMITS, which (when set) neutralizes the check mentioned above.
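As a usage sketch (hypothetical driver fragment, not part of this patch): a driver that mirrors the policy limits internally would simply advertise the new flag in its cpufreq_driver structure so its ->target_index() callback still runs when only a limit moved.

static struct cpufreq_driver example_cpufreq_driver = {
	.name		= "example-cpufreq",
	/* Ask the core to call us even when target_freq == policy->cur,
	 * so internal boundaries can follow policy min/max changes.
	 */
	.flags		= CPUFREQ_NEED_UPDATE_LIMITS,
	.init		= example_cpu_init,		/* hypothetical callbacks */
	.target_index	= example_target_index,
	.verify		= cpufreq_generic_frequency_table_verify,
	.attr		= cpufreq_generic_attr,
};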
Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Acked-by: Viresh Kumar viresh.kumar@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/cpufreq/cpufreq.c | 3 ++- include/linux/cpufreq.h | 10 +++++++++- 2 files changed, 11 insertions(+), 2 deletions(-)
--- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -2166,7 +2166,8 @@ int __cpufreq_driver_target(struct cpufr * exactly same freq is called again and so we can save on few function * calls. */ - if (target_freq == policy->cur) + if (target_freq == policy->cur && + !(cpufreq_driver->flags & CPUFREQ_NEED_UPDATE_LIMITS)) return 0;
/* Save last value to restore later on errors */ --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -293,7 +293,7 @@ __ATTR(_name, 0644, show_##_name, store_
struct cpufreq_driver { char name[CPUFREQ_NAME_LEN]; - u8 flags; + u16 flags; void *driver_data;
/* needed by all drivers */ @@ -417,6 +417,14 @@ struct cpufreq_driver { */ #define CPUFREQ_IS_COOLING_DEV BIT(7)
+/* + * Set by drivers that need to update internal upper and lower boundaries along + * with the target frequency and so the core and governors should also invoke + * the driver if the target frequency does not change, but the policy min or max + * may have changed. + */ +#define CPUFREQ_NEED_UPDATE_LIMITS BIT(8) + int cpufreq_register_driver(struct cpufreq_driver *driver_data); int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
commit e0be38ed4ab413ddd492118cf146369b86ee0ab5 upstream.
If the cpufreq policy max limit is changed when intel_pstate operates in the passive mode with HWP enabled and the "powersave" governor is used on top of it, the HWP max limit is not updated as appropriate.
Namely, in the "powersave" governor case, the target P-state is always equal to the policy min limit, so if the latter does not change, intel_cpufreq_adjust_hwp() is not invoked to update the HWP Request MSR due to the "target_pstate != old_pstate" check in intel_cpufreq_update_pstate(), so the HWP max limit is not updated as a result.
Also, if the CPUFREQ_NEED_UPDATE_LIMITS flag is not set for the driver and the target frequency does not change along with the policy max limit, the "target_freq == policy->cur" check in __cpufreq_driver_target() prevents the driver's ->target() callback from being invoked at all, so the HWP max limit is not updated.
To prevent that occurring, set the CPUFREQ_NEED_UPDATE_LIMITS flag in the intel_cpufreq driver structure if HWP is enabled and modify intel_cpufreq_update_pstate() to do the "target_pstate != old_pstate" check only in the non-HWP case and let intel_cpufreq_adjust_hwp() always run in the HWP case (it will update HWP Request only if the cached value of the register is different from the new one including the limits, so if neither the target P-state value nor the max limit changes, the register write will still be avoided).
Fixes: f6ebbcf08f37 ("cpufreq: intel_pstate: Implement passive mode with HWP enabled") Reported-by: Zhang Rui rui.zhang@intel.com Cc: 5.9+ stable@vger.kernel.org # 5.9+: 1c534352f47f cpufreq: Introduce CPUFREQ_NEED_UPDATE_LIMITS ... Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Acked-by: Viresh Kumar viresh.kumar@linaro.org Tested-by: Zhang Rui rui.zhang@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/cpufreq/intel_pstate.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-)
--- a/drivers/cpufreq/intel_pstate.c +++ b/drivers/cpufreq/intel_pstate.c @@ -2550,14 +2550,12 @@ static int intel_cpufreq_update_pstate(s int old_pstate = cpu->pstate.current_pstate;
target_pstate = intel_pstate_prepare_request(cpu, target_pstate); - if (target_pstate != old_pstate) { + if (hwp_active) { + intel_cpufreq_adjust_hwp(cpu, target_pstate, fast_switch); + cpu->pstate.current_pstate = target_pstate; + } else if (target_pstate != old_pstate) { + intel_cpufreq_adjust_perf_ctl(cpu, target_pstate, fast_switch); cpu->pstate.current_pstate = target_pstate; - if (hwp_active) - intel_cpufreq_adjust_hwp(cpu, target_pstate, - fast_switch); - else - intel_cpufreq_adjust_perf_ctl(cpu, target_pstate, - fast_switch); }
intel_cpufreq_trace(cpu, fast_switch ? INTEL_PSTATE_TRACE_FAST_SWITCH : @@ -3014,6 +3012,7 @@ static int __init intel_pstate_init(void hwp_mode_bdw = id->driver_data; intel_pstate.attr = hwp_cpufreq_attrs; intel_cpufreq.attr = hwp_cpufreq_attrs; + intel_cpufreq.flags |= CPUFREQ_NEED_UPDATE_LIMITS; if (!default_driver) default_driver = &intel_pstate;
From: Stefano Garzarella sgarzare@redhat.com
commit 5745bcfbbf89b158416075374254d3c013488f21 upstream.
If riov and wiov are both defined and they point to different objects, only riov is initialized. If wiov is not initialized by the caller, the function fails, returning -EINVAL and printing the "Readable desc 0x... after writable" error message.
This issue happens when descriptors have both readable and writable buffers (e.g. virtio-blk devices have virtio_blk_outhdr in the readable buffer and the status as the last byte of the writable buffer) and we call __vringh_iov() to get both types of buffers in two different iovecs.
Let's replace the 'else if' clause with 'if' to initialize both riov and wiov if they are not NULL.
As checkpatch pointed out, we also avoid crashing the kernel when riov and wiov are both NULL, replacing BUG() with WARN_ON() and returning -EINVAL.
Fixes: f87d0fbb5798 ("vringh: host-side implementation of virtio rings.") Cc: stable@vger.kernel.org Signed-off-by: Stefano Garzarella sgarzare@redhat.com Link: https://lore.kernel.org/r/20201008204256.162292-1-sgarzare@redhat.com Signed-off-by: Michael S. Tsirkin mst@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/vhost/vringh.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-)
--- a/drivers/vhost/vringh.c +++ b/drivers/vhost/vringh.c @@ -284,13 +284,14 @@ __vringh_iov(struct vringh *vrh, u16 i, desc_max = vrh->vring.num; up_next = -1;
+ /* You must want something! */ + if (WARN_ON(!riov && !wiov)) + return -EINVAL; + if (riov) riov->i = riov->used = 0; - else if (wiov) + if (wiov) wiov->i = wiov->used = 0; - else - /* You must want something! */ - BUG();
for (;;) { void *addr;
From: Eric Biggers ebiggers@google.com
commit cb8d53d2c97369029cc638c9274ac7be0a316c75 upstream.
ext4_unregister_sysfs() only deletes the kobject. The reference to it needs to be put separately, like ext4_put_super() does.
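For context, a minimal sketch of the generic kobject teardown rule being applied (not the ext4-specific code): removing a kobject from sysfs does not drop the reference taken when it was initialized, so both calls are needed.

/* unwind a kobject that was previously set up and added to sysfs */
kobject_del(kobj);	/* removes it from sysfs, as ext4_unregister_sysfs() does */
kobject_put(kobj);	/* drops the initial reference so ->release() can run */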
This addresses the syzbot report "memory leak in kobject_set_name_vargs (3)" (https://syzkaller.appspot.com/bug?extid=9f864abad79fae7c17e1).
Reported-by: syzbot+9f864abad79fae7c17e1@syzkaller.appspotmail.com Fixes: 72ba74508b28 ("ext4: release sysfs kobject when failing to enable quotas on mount") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers ebiggers@google.com Link: https://lore.kernel.org/r/20200922162456.93657-1-ebiggers@kernel.org Reviewed-by: Jan Kara jack@suse.cz Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/super.c | 1 + 1 file changed, 1 insertion(+)
--- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -4872,6 +4872,7 @@ cantfind_ext4:
failed_mount8: ext4_unregister_sysfs(sb); + kobject_put(&sbi->s_kobj); failed_mount7: ext4_unregister_li_request(sb); failed_mount6:
From: Dinghao Liu dinghao.liu@zju.edu.cn
commit c9e87161cc621cbdcfc472fa0b2d81c63780c8f5 upstream.
When ext4_journal_get_write_access() fails, we should terminate the execution flow and release n_group_desc, iloc.bh, dind and gdb_bh.
Cc: stable@kernel.org Signed-off-by: Dinghao Liu dinghao.liu@zju.edu.cn Reviewed-by: Andreas Dilger adilger@dilger.ca Link: https://lore.kernel.org/r/20200829025403.3139-1-dinghao.liu@zju.edu.cn Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/resize.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/fs/ext4/resize.c +++ b/fs/ext4/resize.c @@ -843,8 +843,10 @@ static int add_new_gdb(handle_t *handle,
BUFFER_TRACE(dind, "get_write_access"); err = ext4_journal_get_write_access(handle, dind); - if (unlikely(err)) + if (unlikely(err)) { ext4_std_error(sb, err); + goto errout; + }
/* ext4_reserve_inode_write() gets a reference on the iloc */ err = ext4_reserve_inode_write(handle, inode, &iloc);
From: Ritesh Harjani riteshh@linux.ibm.com
commit 0e6895ba00b7be45f3ab0d2107dda3ef1245f5b4 upstream.
After moving ext4's bmap to the iomap interface, swapon on files created using fallocate (which creates unwritten extents) is failing. This is because the iomap_bmap interface returns 0 for unwritten extents, so generic_swapfile_activate() treats them as holes and bails out with the kernel message below:
[340.915835] swapon: swapfile has holes
To fix this, implement the ->swap_activate aop in ext4 using ext4_iomap_report_ops. Since we only need to return the list of extents, ext4_iomap_report_ops should be enough.
Cc: stable@kernel.org Reported-by: Yuxuan Shui yshuiv7@gmail.com Fixes: ac58e4fb03f ("ext4: move ext4 bmap to use iomap infrastructure") Signed-off-by: Ritesh Harjani riteshh@linux.ibm.com Link: https://lore.kernel.org/r/20200904091653.1014334-1-riteshh@linux.ibm.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/inode.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
--- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3601,6 +3601,13 @@ static int ext4_set_page_dirty(struct pa return __set_page_dirty_buffers(page); }
+static int ext4_iomap_swap_activate(struct swap_info_struct *sis, + struct file *file, sector_t *span) +{ + return iomap_swapfile_activate(sis, file, span, + &ext4_iomap_report_ops); +} + static const struct address_space_operations ext4_aops = { .readpage = ext4_readpage, .readahead = ext4_readahead, @@ -3616,6 +3623,7 @@ static const struct address_space_operat .migratepage = buffer_migrate_page, .is_partially_uptodate = block_is_partially_uptodate, .error_remove_page = generic_error_remove_page, + .swap_activate = ext4_iomap_swap_activate, };
static const struct address_space_operations ext4_journalled_aops = { @@ -3632,6 +3640,7 @@ static const struct address_space_operat .direct_IO = noop_direct_IO, .is_partially_uptodate = block_is_partially_uptodate, .error_remove_page = generic_error_remove_page, + .swap_activate = ext4_iomap_swap_activate, };
static const struct address_space_operations ext4_da_aops = { @@ -3649,6 +3658,7 @@ static const struct address_space_operat .migratepage = buffer_migrate_page, .is_partially_uptodate = block_is_partially_uptodate, .error_remove_page = generic_error_remove_page, + .swap_activate = ext4_iomap_swap_activate, };
static const struct address_space_operations ext4_dax_aops = { @@ -3657,6 +3667,7 @@ static const struct address_space_operat .set_page_dirty = noop_set_page_dirty, .bmap = ext4_bmap, .invalidatepage = noop_invalidatepage, + .swap_activate = ext4_iomap_swap_activate, };
void ext4_set_aops(struct inode *inode)
From: Luo Meng luomeng12@huawei.com
commit 1322181170bb01bce3c228b82ae3d5c6b793164f upstream.
During a stability test, errors like the following show up:

ext4_lookup:1590: inode #6967: comm fsstress: iget: checksum invalid.

If inode->i_blocks is too big and the huge file flag is not set, the checksum will not be recalculated when the inode information is copied into its on-disk buffer. If another inode then marks that buffer dirty, the inconsistent inode will be flushed to disk.
Fix this problem by checking i_blocks in advance.
Cc: stable@kernel.org Signed-off-by: Luo Meng luomeng12@huawei.com Reviewed-by: Darrick J. Wong darrick.wong@oracle.com Link: https://lore.kernel.org/r/20201020013631.3796673-1-luomeng12@huawei.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/inode.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-)
--- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -4982,6 +4982,12 @@ static int ext4_do_update_inode(handle_t if (ext4_test_inode_state(inode, EXT4_STATE_NEW)) memset(raw_inode, 0, EXT4_SB(inode->i_sb)->s_inode_size);
+ err = ext4_inode_blocks_set(handle, raw_inode, ei); + if (err) { + spin_unlock(&ei->i_raw_lock); + goto out_brelse; + } + raw_inode->i_mode = cpu_to_le16(inode->i_mode); i_uid = i_uid_read(inode); i_gid = i_gid_read(inode); @@ -5015,11 +5021,6 @@ static int ext4_do_update_inode(handle_t EXT4_INODE_SET_XTIME(i_atime, inode, raw_inode); EXT4_EINODE_SET_XTIME(i_crtime, ei, raw_inode);
- err = ext4_inode_blocks_set(handle, raw_inode, ei); - if (err) { - spin_unlock(&ei->i_raw_lock); - goto out_brelse; - } raw_inode->i_dtime = cpu_to_le32(ei->i_dtime); raw_inode->i_flags = cpu_to_le32(ei->i_flags & 0xFFFFFFFF); if (likely(!test_opt2(inode->i_sb, HURD_COMPAT)))
From: Constantine Sapuntzakis costa@purestorage.com
commit acaa532687cdc3a03757defafece9c27aa667546 upstream.
The race condition could cause the persisted superblock checksum to not match the contents of the superblock, causing the superblock to be considered corrupt.
An example of the race follows. A first thread is interrupted in the middle of a checksum calculation. Then, another thread changes the superblock, calculates a new checksum, and sets it. Then, the first thread resumes and sets the checksum based on the older superblock.
To fix, serialize the superblock checksum calculation using the buffer header lock. While a spinlock is sufficient, the buffer header is already there and there is precedent for locking it (e.g. in ext4_commit_super).
Tested the patch by booting up a kernel with the patch, creating a filesystem and some files (including some orphans), and then unmounting and remounting the file system.
Cc: stable@kernel.org Signed-off-by: Constantine Sapuntzakis costa@purestorage.com Reviewed-by: Jan Kara jack@suse.cz Suggested-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20200914161014.22275-1-costa@purestorage.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/super.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
--- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -201,7 +201,18 @@ void ext4_superblock_csum_set(struct sup if (!ext4_has_metadata_csum(sb)) return;
+ /* + * Locking the superblock prevents the scenario + * where: + * 1) a first thread pauses during checksum calculation. + * 2) a second thread updates the superblock, recalculates + * the checksum, and updates s_checksum + * 3) the first thread resumes and finishes its checksum calculation + * and updates s_checksum with a potentially stale or torn value. + */ + lock_buffer(EXT4_SB(sb)->s_sbh); es->s_checksum = ext4_superblock_csum(sb, es); + unlock_buffer(EXT4_SB(sb)->s_sbh); }
ext4_fsblk_t ext4_block_bitmap(struct super_block *sb,
From: zhangyi (F) yi.zhang@huawei.com
commit d9befedaafcf3a111428baa7c45b02923eab2d87 upstream.
The metadata buffer can no longer be trusted after we read it from disk again, because it was not uptodate for some reason (e.g. it failed to be written back). Otherwise we may hit the memory corruption below in ext4_ext_split()->memset() if we read stale data from a newly allocated extent block on disk whose async writeback failed, but skip verifying it again since the verified bit has already been set on the buffer.
[ 29.774674] BUG: unable to handle kernel paging request at ffff88841949d000
...
[ 29.783317] Oops: 0002 [#2] SMP
[ 29.784219] R10: 00000000000f4240 R11: 0000000000002e28 R12: ffff88842fa1c800
[ 29.784627] CPU: 1 PID: 126 Comm: kworker/u4:3 Tainted: G D W
[ 29.785546] R13: ffffffff9cddcc20 R14: ffffffff9cddd420 R15: ffff88842fa1c2f8
[ 29.786679] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),BIOS ?-20190727_0738364
[ 29.787588] FS: 0000000000000000(0000) GS:ffff88842fa00000(0000) knlGS:0000000000000000
[ 29.789288] Workqueue: writeback wb_workfn
[ 29.790319] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 29.790321] (flush-8:0)
[ 29.790844] CR2: 0000000000000008 CR3: 00000004234f2000 CR4: 00000000000006f0
[ 29.791924] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 29.792839] RIP: 0010:__memset+0x24/0x30
[ 29.793739] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 29.794256] Code: 90 90 90 90 90 90 0f 1f 44 00 00 49 89 f9 48 89 d1 83 e2 07 48 c1 e9 033
[ 29.795161] Kernel panic - not syncing: Fatal exception in interrupt
...
[ 29.808149] Call Trace:
[ 29.808475] ext4_ext_insert_extent+0x102e/0x1be0
[ 29.809085] ext4_ext_map_blocks+0xa89/0x1bb0
[ 29.809652] ext4_map_blocks+0x290/0x8a0
[ 29.810161] ext4_writepages+0xc85/0x17c0
...
Fix this by clearing the buffer's verified bit whenever we read a metadata block from disk again.
Signed-off-by: zhangyi (F) yi.zhang@huawei.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200924073337.861472-2-yi.zhang@huawei.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/balloc.c | 1 + fs/ext4/extents.c | 1 + fs/ext4/ialloc.c | 1 + fs/ext4/inode.c | 5 ++++- fs/ext4/super.c | 1 + 5 files changed, 8 insertions(+), 1 deletion(-)
--- a/fs/ext4/balloc.c +++ b/fs/ext4/balloc.c @@ -494,6 +494,7 @@ ext4_read_block_bitmap_nowait(struct sup * submit the buffer_head for reading */ set_buffer_new(bh); + clear_buffer_verified(bh); trace_ext4_read_block_bitmap_load(sb, block_group, ignore_locked); bh->b_end_io = ext4_end_bitmap_read; get_bh(bh); --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -501,6 +501,7 @@ __read_extent_tree_block(const char *fun
if (!bh_uptodate_or_lock(bh)) { trace_ext4_ext_load_extent(inode, pblk, _RET_IP_); + clear_buffer_verified(bh); err = bh_submit_read(bh); if (err < 0) goto errout; --- a/fs/ext4/ialloc.c +++ b/fs/ext4/ialloc.c @@ -188,6 +188,7 @@ ext4_read_inode_bitmap(struct super_bloc /* * submit the buffer_head for reading */ + clear_buffer_verified(bh); trace_ext4_load_inode_bitmap(sb, block_group); bh->b_end_io = ext4_end_bitmap_read; get_bh(bh); --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -884,6 +884,7 @@ struct buffer_head *ext4_bread(handle_t return bh; if (!bh || ext4_buffer_uptodate(bh)) return bh; + clear_buffer_verified(bh); ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1, &bh); wait_on_buffer(bh); if (buffer_uptodate(bh)) @@ -909,9 +910,11 @@ int ext4_bread_batch(struct inode *inode
for (i = 0; i < bh_count; i++) /* Note that NULL bhs[i] is valid because of holes. */ - if (bhs[i] && !ext4_buffer_uptodate(bhs[i])) + if (bhs[i] && !ext4_buffer_uptodate(bhs[i])) { + clear_buffer_verified(bhs[i]); ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1, &bhs[i]); + }
if (!wait) return 0; --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -156,6 +156,7 @@ ext4_sb_bread(struct super_block *sb, se return ERR_PTR(-ENOMEM); if (ext4_buffer_uptodate(bh)) return bh; + clear_buffer_verified(bh); ll_rw_block(REQ_OP_READ, REQ_META | op_flags, 1, &bh); wait_on_buffer(bh); if (buffer_uptodate(bh))
From: Zhang Xiaoxu zhangxiaoxu5@huawei.com
commit 9704a322ea67fdb05fc66cf431fdd01c2424bbd9 upstream.
Consider a situation when a filesystem was uncleanly shutdown and the orphan list is not empty and a read-only mount is attempted. The orphan list cleanup during mount will fail with:
ext4_check_bdev_write_error:193: comm mount: Error while async write back metadata
This happens because sbi->s_bdev_wb_err is not initialized when mounting the filesystem in read only mode and so ext4_check_bdev_write_error() falsely triggers.
Initialize sbi->s_bdev_wb_err unconditionally to avoid this problem.
Fixes: bc71726c7257 ("ext4: abort the filesystem if failed to async write metadata buffer") Cc: stable@kernel.org Signed-off-by: Zhang Xiaoxu zhangxiaoxu5@huawei.com Reviewed-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/20200928020556.710971-1-zhangxiaoxu5@huawei.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/super.c | 13 ++----------- 1 file changed, 2 insertions(+), 11 deletions(-)
--- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -4826,9 +4826,8 @@ no_journal: * used to detect the metadata async write error. */ spin_lock_init(&sbi->s_bdev_wb_lock); - if (!sb_rdonly(sb)) - errseq_check_and_advance(&sb->s_bdev->bd_inode->i_mapping->wb_err, - &sbi->s_bdev_wb_err); + errseq_check_and_advance(&sb->s_bdev->bd_inode->i_mapping->wb_err, + &sbi->s_bdev_wb_err); sb->s_bdev->bd_super = sb; EXT4_SB(sb)->s_mount_state |= EXT4_ORPHAN_FS; ext4_orphan_cleanup(sb, es); @@ -5721,14 +5720,6 @@ static int ext4_remount(struct super_blo }
/* - * Update the original bdev mapping's wb_err value - * which could be used to detect the metadata async - * write error. - */ - errseq_check_and_advance(&sb->s_bdev->bd_inode->i_mapping->wb_err, - &sbi->s_bdev_wb_err); - - /* * Mounting a RDONLY partition read-write, so reread * and store the current valid flag. (It may have * been changed by e2fsck since we originally mounted
From: Ritesh Harjani riteshh@linux.ibm.com
commit d1e18b8824dd50cff255e6cecf515ea598eaf9f0 upstream.
Left shifting m_lblk by blkbits was overflowing, and hence the unwritten extent could not be converted to written. Make sure we cast it to loff_t before doing the left shift. Also, in ext4_convert_unwritten_io_end_vec(), make sure to initialize the ret variable to avoid accidentally returning an uninitialized value.
This patch fixes the issue reported in ext4 for bs < ps with dioread_nolock mount option.
Fixes: c8cc88163f40df39e50c ("ext4: Add support for blocksize < pagesize in dioread_nolock") Cc: stable@kernel.org Reported-by: Aneesh Kumar K.V aneesh.kumar@linux.ibm.com Signed-off-by: Ritesh Harjani riteshh@linux.ibm.com Reviewed-by: Jan Kara jack@suse.cz Link: https://lore.kernel.org/r/af902b5db99e8b73980c795d84ad7bb417487e76.160216886... Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/extents.c | 2 +- fs/ext4/inode.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
--- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -4770,7 +4770,7 @@ int ext4_convert_unwritten_extents(handl
int ext4_convert_unwritten_io_end_vec(handle_t *handle, ext4_io_end_t *io_end) { - int ret, err = 0; + int ret = 0, err = 0; struct ext4_io_end_vec *io_end_vec;
/* --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -2257,7 +2257,7 @@ static int mpage_process_page(struct mpa err = PTR_ERR(io_end_vec); goto out; } - io_end_vec->offset = mpd->map.m_lblk << blkbits; + io_end_vec->offset = (loff_t)mpd->map.m_lblk << blkbits; } *map_bh = true; goto out;
From: yangerkun yangerkun@huawei.com
commit d7dce9e08595e80bf8039a81794809c66fe26431 upstream.
ext4_ext_search_right() will read more extent blocks and call put_bh() after we get the information we need. However, returning ret_ex as a pointer into those blocks breaks this and may cause a use-after-free once the pagecache has been freed. Fix it by copying the extent structure if needed.
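A small userspace illustration of the pattern the fix adopts (free() standing in for put_bh(); the names are made up, this is not the ext4 code): copy the entry out while the backing buffer is still valid instead of handing back a pointer into it.

#include <stdio.h>
#include <stdlib.h>

struct extent { unsigned int lblk; unsigned long long pblk; };

/* Buggy shape: returns a pointer into a buffer released before returning. */
static struct extent *lookup_dangling(void)
{
	struct extent *buf = calloc(8, sizeof(*buf));
	struct extent *found = &buf[3];

	free(buf);		/* like put_bh() on the extent block */
	return found;		/* any later dereference is a use-after-free */
}

/* Fixed shape: copy the needed entry before the buffer goes away. */
static int lookup_copy(struct extent *ret_ex)
{
	struct extent *buf = calloc(8, sizeof(*buf));

	buf[3].lblk = 42;
	buf[3].pblk = 4242;
	if (ret_ex)
		*ret_ex = buf[3];
	free(buf);
	return 1;		/* "found", mirroring the new > 0 convention */
}

int main(void)
{
	struct extent ex;

	if (lookup_copy(&ex) > 0)
		printf("lblk %u -> pblk %llu\n", ex.lblk, ex.pblk);
	(void)lookup_dangling();	/* returned pointer must not be used */
	return 0;
}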
Signed-off-by: yangerkun yangerkun@huawei.com Link: https://lore.kernel.org/r/20201028055617.2569255-1-yangerkun@huawei.com Signed-off-by: Theodore Ts'o tytso@mit.edu Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/extents.c | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-)
--- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -1472,16 +1472,16 @@ static int ext4_ext_search_left(struct i }
/* - * search the closest allocated block to the right for *logical - * and returns it at @logical + it's physical address at @phys - * if *logical is the largest allocated block, the function - * returns 0 at @phys - * return value contains 0 (success) or error code + * Search the closest allocated block to the right for *logical + * and returns it at @logical + it's physical address at @phys. + * If not exists, return 0 and @phys is set to 0. We will return + * 1 which means we found an allocated block and ret_ex is valid. + * Or return a (< 0) error code. */ static int ext4_ext_search_right(struct inode *inode, struct ext4_ext_path *path, ext4_lblk_t *logical, ext4_fsblk_t *phys, - struct ext4_extent **ret_ex) + struct ext4_extent *ret_ex) { struct buffer_head *bh = NULL; struct ext4_extent_header *eh; @@ -1575,10 +1575,11 @@ got_index: found_extent: *logical = le32_to_cpu(ex->ee_block); *phys = ext4_ext_pblock(ex); - *ret_ex = ex; + if (ret_ex) + *ret_ex = *ex; if (bh) put_bh(bh); - return 0; + return 1; }
/* @@ -2869,8 +2870,8 @@ again: */ lblk = ex_end + 1; err = ext4_ext_search_right(inode, path, &lblk, &pblk, - &ex); - if (err) + NULL); + if (err < 0) goto out; if (pblk) { partial.pclu = EXT4_B2C(sbi, pblk); @@ -4038,7 +4039,7 @@ int ext4_ext_map_blocks(handle_t *handle struct ext4_map_blocks *map, int flags) { struct ext4_ext_path *path = NULL; - struct ext4_extent newex, *ex, *ex2; + struct ext4_extent newex, *ex, ex2; struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb); ext4_fsblk_t newblock = 0, pblk; int err = 0, depth, ret; @@ -4174,15 +4175,14 @@ int ext4_ext_map_blocks(handle_t *handle if (err) goto out; ar.lright = map->m_lblk; - ex2 = NULL; err = ext4_ext_search_right(inode, path, &ar.lright, &ar.pright, &ex2); - if (err) + if (err < 0) goto out;
/* Check if the extent after searching to the right implies a * cluster we can use. */ - if ((sbi->s_cluster_ratio > 1) && ex2 && - get_implied_cluster_alloc(inode->i_sb, map, ex2, path)) { + if ((sbi->s_cluster_ratio > 1) && err && + get_implied_cluster_alloc(inode->i_sb, map, &ex2, path)) { ar.len = allocated = map->m_len; newblock = map->m_pblk; goto got_allocated_blocks;
From: Dave Airlie airlied@redhat.com
commit fea456d82c19d201c21313864105876deabe148b upstream.
This was adding size to start, but pfn and start are in pages, so it should be using num_pages.
Not sure this fixes anything in the real world, just noticed it during refactoring.
Signed-off-by: Dave Airlie airlied@redhat.com Reviewed-by: Christian König christian.koenig@amd.com Cc: stable@vger.kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20201019222257.1684769-2-airli... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/ttm/ttm_bo.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -694,7 +694,7 @@ bool ttm_bo_eviction_valuable(struct ttm /* Don't evict this BO if it's outside of the * requested placement range */ - if (place->fpfn >= (bo->mem.start + bo->mem.size) || + if (place->fpfn >= (bo->mem.start + bo->mem.num_pages) || (place->lpfn && place->lpfn <= bo->mem.start)) return false;
From: Yangbo Lu yangbo.lu@nxp.com
commit 011fde48394b7dc8dfd6660d1013b26a00157b80 upstream.
For eMMC HS400 mode initialization, a DLL reset is a required step if the DLL was enabled and used previously, for example in the bootloader. This step has not been documented in the reference manual, but the RM will be fixed sooner or later.
This patch adds the DLL reset step and makes sure the delay chain is locked for HS400.
Signed-off-by: Yangbo Lu yangbo.lu@nxp.com Acked-by: Adrian Hunter adrian.hunter@intel.com Link: https://lore.kernel.org/r/20201020081116.20918-1-yangbo.lu@nxp.com Fixes: 54e08d9a95ca ("mmc: sdhci-of-esdhc: add hs400 mode support") Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mmc/host/sdhci-esdhc.h | 2 ++ drivers/mmc/host/sdhci-of-esdhc.c | 17 +++++++++++++++++ 2 files changed, 19 insertions(+)
--- a/drivers/mmc/host/sdhci-esdhc.h +++ b/drivers/mmc/host/sdhci-esdhc.h @@ -5,6 +5,7 @@ * Copyright (c) 2007 Freescale Semiconductor, Inc. * Copyright (c) 2009 MontaVista Software, Inc. * Copyright (c) 2010 Pengutronix e.K. + * Copyright 2020 NXP * Author: Wolfram Sang kernel@pengutronix.de */
@@ -88,6 +89,7 @@ /* DLL Config 0 Register */ #define ESDHC_DLLCFG0 0x160 #define ESDHC_DLL_ENABLE 0x80000000 +#define ESDHC_DLL_RESET 0x40000000 #define ESDHC_DLL_FREQ_SEL 0x08000000
/* DLL Config 1 Register */ --- a/drivers/mmc/host/sdhci-of-esdhc.c +++ b/drivers/mmc/host/sdhci-of-esdhc.c @@ -4,6 +4,7 @@ * * Copyright (c) 2007, 2010, 2012 Freescale Semiconductor, Inc. * Copyright (c) 2009 MontaVista Software, Inc. + * Copyright 2020 NXP * * Authors: Xiaobo Xie X.Xie@freescale.com * Anton Vorontsov avorontsov@ru.mvista.com @@ -19,6 +20,7 @@ #include <linux/clk.h> #include <linux/ktime.h> #include <linux/dma-mapping.h> +#include <linux/iopoll.h> #include <linux/mmc/host.h> #include <linux/mmc/mmc.h> #include "sdhci-pltfm.h" @@ -743,6 +745,21 @@ static void esdhc_of_set_clock(struct sd if (host->mmc->actual_clock == MMC_HS200_MAX_DTR) temp |= ESDHC_DLL_FREQ_SEL; sdhci_writel(host, temp, ESDHC_DLLCFG0); + + temp |= ESDHC_DLL_RESET; + sdhci_writel(host, temp, ESDHC_DLLCFG0); + udelay(1); + temp &= ~ESDHC_DLL_RESET; + sdhci_writel(host, temp, ESDHC_DLLCFG0); + + /* Wait max 20 ms */ + if (read_poll_timeout(sdhci_readl, temp, + temp & ESDHC_DLL_STS_SLV_LOCK, + 10, 20000, false, + host, ESDHC_DLLSTAT0)) + pr_err("%s: timeout for delay chain lock.\n", + mmc_hostname(host->mmc)); + temp = sdhci_readl(host, ESDHC_TBCTL); sdhci_writel(host, temp | ESDHC_HS400_WNDW_ADJUST, ESDHC_TBCTL);
From: Michael Walle michael@walle.cc
commit 0add6e9b88d0632a25323aaf4987dbacb0e4ae64 upstream.
On rare occations there is the following error:
mmc0: Tuning timeout, falling back to fixed sampling clock
There are SD cards which take a significantly longer time to reply to the first CMD19 command. The eSDHC takes the data timeout value into account during the tuning period. The SDHCI core doesn't explicitly set this timeout for the tuning procedure. Thus on the slow cards, there might be a spurious "Buffer Read Ready" interrupt, which in turn triggers a wrong sequence of events. In the end this will lead to an unsuccessful tuning procedure and to the above error.
To workaround this, set the timeout to the maximum value (which is the best we can do) and the SDHCI core will take care of the proper timeout handling.
Fixes: ba49cbd0936e ("mmc: sdhci-of-esdhc: add tuning support") Signed-off-by: Michael Walle michael@walle.cc Acked-by: Adrian Hunter adrian.hunter@intel.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20201022222337.19857-1-michael@walle.cc Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mmc/host/sdhci-of-esdhc.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
--- a/drivers/mmc/host/sdhci-of-esdhc.c +++ b/drivers/mmc/host/sdhci-of-esdhc.c @@ -1069,6 +1069,17 @@ static int esdhc_execute_tuning(struct m
esdhc_tuning_block_enable(host, true);
+ /* + * The eSDHC controller takes the data timeout value into account + * during tuning. If the SD card is too slow sending the response, the + * timer will expire and a "Buffer Read Ready" interrupt without data + * is triggered. This leads to tuning errors. + * + * Just set the timeout to the maximum value because the core will + * already take care of it in sdhci_send_tuning(). + */ + sdhci_writeb(host, 0xe, SDHCI_TIMEOUT_CONTROL); + hs400_tuning = host->flags & SDHCI_HS400_TUNING;
do {
From: Jisheng Zhang Jisheng.Zhang@synaptics.com
commit b3e1ea16fb39fb6e1a1cf1dbdd6738531de3dc7d upstream.
sdhci-of-dwcmshc hits an eMMC read performance regression with the below command after commit 427b6514d095 ("mmc: sdhci: Add Auto CMD Auto Select support"):
dd if=/dev/mmcblk0 of=/dev/null bs=8192 count=100000
Before the commit, the above command gives 120 MB/s.
After the commit, the above command gives 51.3 MB/s.
So it looks like sdhci-of-dwcmshc expects Version 4 Mode for Auto CMD Auto Select. Fix the performance degradation by ensuring v4_mode is true to use Auto CMD Auto Select.
Fixes: 427b6514d095 ("mmc: sdhci: Add Auto CMD Auto Select support") Signed-off-by: Jisheng Zhang Jisheng.Zhang@synaptics.com Acked-by: Adrian Hunter adrian.hunter@intel.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20201015174115.4cf2c19a@xhacker.debian Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mmc/host/sdhci.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -1384,9 +1384,11 @@ static inline void sdhci_auto_cmd_select /* * In case of Version 4.10 or later, use of 'Auto CMD Auto * Select' is recommended rather than use of 'Auto CMD12 - * Enable' or 'Auto CMD23 Enable'. + * Enable' or 'Auto CMD23 Enable'. We require Version 4 Mode + * here because some controllers (e.g sdhci-of-dwmshc) expect it. */ - if (host->version >= SDHCI_SPEC_410 && (use_cmd12 || use_cmd23)) { + if (host->version >= SDHCI_SPEC_410 && host->v4_mode && + (use_cmd12 || use_cmd23)) { *mode |= SDHCI_TRNS_AUTO_SEL;
ctrl2 = sdhci_readw(host, SDHCI_HOST_CONTROL2);
From: Thierry Reding treding@nvidia.com
commit ea90f66f2a8629dde07328df0b8314aae5e54a47 upstream.
Commit 63a613fdb16c ("memory: tegra: Add gr2d and gr3d to DRM IOMMU group") added the GPU to the DRM IOMMU group, which doesn't make any sense. This causes problems when Nouveau tries to attach to the SMMU and causes it to fall back to using the DMA API.
Remove the GPU from the DRM groups to restore the old behaviour. The GPU should always have its own IOMMU domain to make sure it can map buffers into contiguous chunks (for big page support) without getting in the way of mappings from the DRM group.
Cc: stable@vger.kernel.org Fixes: 63a613fdb16c ("memory: tegra: Add gr2d and gr3d to DRM IOMMU group") Reported-by: Matias Zuniga matias.nicolas.zc@gmail.com Signed-off-by: Thierry Reding treding@nvidia.com Reviewed-by: Dmitry Osipenko digetx@gmail.com Link: https://lore.kernel.org/r/20200901153248.1831263-1-thierry.reding@gmail.com Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/memory/tegra/tegra124.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/memory/tegra/tegra124.c +++ b/drivers/memory/tegra/tegra124.c @@ -957,7 +957,6 @@ static const struct tegra_smmu_swgroup t static const unsigned int tegra124_group_drm[] = { TEGRA_SWGROUP_DC, TEGRA_SWGROUP_DCB, - TEGRA_SWGROUP_GPU, TEGRA_SWGROUP_VIC, };
From: Alex Dewar alex.dewar90@gmail.com
commit 4da1edcf8f226d53c02c6b0e3077d581115b30d0 upstream.
In brcmstb_dpfe_download_firmware(), memory is allocated to variable fw by firmware_request_nowarn(), but never released. Fix up to release fw on all return paths.
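A minimal sketch of the cleanup shape the fix moves to (hypothetical device and firmware name, not the brcmstb code): once firmware_request_nowarn() succeeds, every exit path has to pass through release_firmware().

#include <linux/device.h>
#include <linux/firmware.h>

static int example_load_firmware(struct device *dev)
{
	const struct firmware *fw;
	int ret;

	ret = firmware_request_nowarn(&fw, "example/fw.bin", dev);
	if (ret)
		return ret;		/* nothing was allocated yet */

	if (fw->size < 16) {		/* stand-in for __verify_firmware() */
		ret = -EFAULT;
		goto release_fw;
	}

	/* ... program the firmware into the device here ... */
	ret = 0;

release_fw:
	release_firmware(fw);		/* reached on success and on every error path */
	return ret;
}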
Cc: stable@vger.kernel.org Fixes: 2f330caff577 ("memory: brcmstb: Add driver for DPFE") Signed-off-by: Alex Dewar alex.dewar90@gmail.com Acked-by: Markus Mayer mmayer@broadcom.com Acked-by: Florian Fainelli f.fainelli@gmail.com Link: https://lore.kernel.org/r/20200820172118.781324-1-alex.dewar90@gmail.com Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/memory/brcmstb_dpfe.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-)
--- a/drivers/memory/brcmstb_dpfe.c +++ b/drivers/memory/brcmstb_dpfe.c @@ -656,8 +656,10 @@ static int brcmstb_dpfe_download_firmwar return (ret == -ENOENT) ? -EPROBE_DEFER : ret;
ret = __verify_firmware(&init, fw); - if (ret) - return -EFAULT; + if (ret) { + ret = -EFAULT; + goto release_fw; + }
__disable_dcpu(priv);
@@ -676,18 +678,20 @@ static int brcmstb_dpfe_download_firmwar
ret = __write_firmware(priv->dmem, dmem, dmem_size, is_big_endian); if (ret) - return ret; + goto release_fw; ret = __write_firmware(priv->imem, imem, imem_size, is_big_endian); if (ret) - return ret; + goto release_fw;
ret = __verify_fw_checksum(&init, priv, header, init.chksum); if (ret) - return ret; + goto release_fw;
__enable_dcpu(priv);
- return 0; +release_fw: + release_firmware(fw); + return ret; }
static ssize_t generic_show(unsigned int command, u32 response[],
From: Andrei Vagin avagin@gmail.com
commit c2f7d08cccf4af2ce6992feaabb9e68e4ae0bff3 upstream.
For all commands except FUTEX_WAIT, the timeout is interpreted as an absolute value. This absolute value is inside the task's time namespace and has to be converted to the host's time.
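For reference, a small userspace sketch of the case this fixes (illustrative only; the futex word and the one-second deadline are arbitrary): FUTEX_WAIT_BITSET takes an absolute CLOCK_MONOTONIC timeout unless FUTEX_CLOCK_REALTIME is passed, and inside a time namespace that absolute value must now be translated to the host clock by the kernel.

#include <linux/futex.h>
#include <sys/syscall.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

static long futex_wait_abs(uint32_t *uaddr, uint32_t expected,
			   const struct timespec *abs_monotonic)
{
	/* absolute timeout, default CLOCK_MONOTONIC (no FUTEX_CLOCK_REALTIME) */
	return syscall(SYS_futex, uaddr, FUTEX_WAIT_BITSET, expected,
		       abs_monotonic, NULL, FUTEX_BITSET_MATCH_ANY);
}

int main(void)
{
	uint32_t futex_word = 0;
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	ts.tv_sec += 1;				/* "now + 1s" as seen in our time namespace */
	futex_wait_abs(&futex_word, 0, &ts);	/* returns -1/ETIMEDOUT after roughly 1s */
	return 0;
}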
Fixes: 5a590f35add9 ("posix-clocks: Wire up clock_gettime() with timens offsets") Reported-by: Hans van der Laan j.h.vanderlaan@student.utwente.nl Signed-off-by: Andrei Vagin avagin@gmail.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Reviewed-by: Dmitry Safonov 0x7f454c46@gmail.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20201015160020.293748-1-avagin@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/futex.c | 5 +++++ 1 file changed, 5 insertions(+)
--- a/kernel/futex.c +++ b/kernel/futex.c @@ -39,6 +39,7 @@ #include <linux/freezer.h> #include <linux/memblock.h> #include <linux/fault-inject.h> +#include <linux/time_namespace.h>
#include <asm/futex.h>
@@ -3799,6 +3800,8 @@ SYSCALL_DEFINE6(futex, u32 __user *, uad t = timespec64_to_ktime(ts); if (cmd == FUTEX_WAIT) t = ktime_add_safe(ktime_get(), t); + else if (!(op & FUTEX_CLOCK_REALTIME)) + t = timens_ktime_to_host(CLOCK_MONOTONIC, t); tp = &t; } /* @@ -3991,6 +3994,8 @@ SYSCALL_DEFINE6(futex_time32, u32 __user t = timespec64_to_ktime(ts); if (cmd == FUTEX_WAIT) t = ktime_add_safe(ktime_get(), t); + else if (!(op & FUTEX_CLOCK_REALTIME)) + t = timens_ktime_to_host(CLOCK_MONOTONIC, t); tp = &t; } if (cmd == FUTEX_REQUEUE || cmd == FUTEX_CMP_REQUEUE ||
From: Alex Deucher alexander.deucher@amd.com
commit 10105d0c9763f058f6a9a09f78397d5bf94dc94c upstream.
Stop registering the SMU i2c bus on navi1x. This leads to instability issues when userspace processes mess with the bus and also seems to cause display stability issues in some cases.
Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/1314 Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/1341 Reviewed-by: Evan Quan evan.quan@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 26 -------------------------- 1 file changed, 26 deletions(-)
--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c +++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c @@ -2463,37 +2463,11 @@ static const struct i2c_algorithm navi10 .functionality = navi10_i2c_func, };
-static int navi10_i2c_control_init(struct smu_context *smu, struct i2c_adapter *control) -{ - struct amdgpu_device *adev = to_amdgpu_device(control); - int res; - - control->owner = THIS_MODULE; - control->class = I2C_CLASS_SPD; - control->dev.parent = &adev->pdev->dev; - control->algo = &navi10_i2c_algo; - snprintf(control->name, sizeof(control->name), "AMDGPU SMU"); - - res = i2c_add_adapter(control); - if (res) - DRM_ERROR("Failed to register hw i2c, err: %d\n", res); - - return res; -} - -static void navi10_i2c_control_fini(struct smu_context *smu, struct i2c_adapter *control) -{ - i2c_del_adapter(control); -} - - static const struct pptable_funcs navi10_ppt_funcs = { .get_allowed_feature_mask = navi10_get_allowed_feature_mask, .set_default_dpm_table = navi10_set_default_dpm_table, .dpm_set_vcn_enable = navi10_dpm_set_vcn_enable, .dpm_set_jpeg_enable = navi10_dpm_set_jpeg_enable, - .i2c_init = navi10_i2c_control_init, - .i2c_fini = navi10_i2c_control_fini, .print_clk_levels = navi10_print_clk_levels, .force_clk_levels = navi10_force_clk_levels, .populate_umd_state_clk = navi10_populate_umd_state_clk,
From: Evan Quan evan.quan@amd.com
commit 83da6eea3af669ee0b1f1bc05ffd6150af984994 upstream.
Avoid the underflow seen on Polaris10 with some 3440x1440 144Hz displays, as the threshold of 190 us cuts too close to the minVBlankTime of 192 us.
Signed-off-by: Evan Quan evan.quan@amd.com Acked-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c +++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c @@ -2873,7 +2873,7 @@ static int smu7_vblank_too_short(struct if (hwmgr->is_kicker) switch_limit_us = data->is_memory_gddr5 ? 450 : 150; else - switch_limit_us = data->is_memory_gddr5 ? 190 : 150; + switch_limit_us = data->is_memory_gddr5 ? 200 : 150; break; case CHIP_VEGAM: switch_limit_us = 30;
From: Kenneth Feng kenneth.feng@amd.com
commit 392d256fa26d943fb0a019fea4be80382780d3b1 upstream.
The fclk value is missing in pp_dpm_fclk. Add it to correctly show the current value.
Signed-off-by: Kenneth Feng kenneth.feng@amd.com Reviewed-by: Likun Gao Likun.Gao@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org # 5.9.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/powerplay/sienna_cichlid_ppt.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/gpu/drm/amd/powerplay/sienna_cichlid_ppt.c +++ b/drivers/gpu/drm/amd/powerplay/sienna_cichlid_ppt.c @@ -447,6 +447,9 @@ static int sienna_cichlid_get_smu_metric case METRICS_CURR_DCEFCLK: *value = metrics->CurrClock[PPCLK_DCEFCLK]; break; + case METRICS_CURR_FCLK: + *value = metrics->CurrClock[PPCLK_FCLK]; + break; case METRICS_AVERAGE_GFXCLK: if (metrics->AverageGfxActivity <= SMU_11_0_7_GFX_BUSY_THRESHOLD) *value = metrics->AverageGfxclkFrequencyPostDs;
From: Kevin Wang kevin1.wang@amd.com
commit d48d7484d8dca1d4577fc53f1f826e68420d00eb upstream.
Without the correct feature bit mappings, the SMU sysfs node "pp_features" shows errors.
Signed-off-by: Kevin Wang kevin1.wang@amd.com Reviewed-by: Likun Gao Likun.Gao@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org # 5.9.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/powerplay/inc/smu_types.h | 1 + drivers/gpu/drm/amd/powerplay/sienna_cichlid_ppt.c | 3 +++ 2 files changed, 4 insertions(+)
--- a/drivers/gpu/drm/amd/powerplay/inc/smu_types.h +++ b/drivers/gpu/drm/amd/powerplay/inc/smu_types.h @@ -217,6 +217,7 @@ enum smu_clk_type { __SMU_DUMMY_MAP(DPM_MP0CLK), \ __SMU_DUMMY_MAP(DPM_LINK), \ __SMU_DUMMY_MAP(DPM_DCEFCLK), \ + __SMU_DUMMY_MAP(DPM_XGMI), \ __SMU_DUMMY_MAP(DS_GFXCLK), \ __SMU_DUMMY_MAP(DS_SOCCLK), \ __SMU_DUMMY_MAP(DS_LCLK), \ --- a/drivers/gpu/drm/amd/powerplay/sienna_cichlid_ppt.c +++ b/drivers/gpu/drm/amd/powerplay/sienna_cichlid_ppt.c @@ -150,14 +150,17 @@ static struct cmn2asic_mapping sienna_ci FEA_MAP(DPM_GFXCLK), FEA_MAP(DPM_GFX_GPO), FEA_MAP(DPM_UCLK), + FEA_MAP(DPM_FCLK), FEA_MAP(DPM_SOCCLK), FEA_MAP(DPM_MP0CLK), FEA_MAP(DPM_LINK), FEA_MAP(DPM_DCEFCLK), + FEA_MAP(DPM_XGMI), FEA_MAP(MEM_VDDCI_SCALING), FEA_MAP(MEM_MVDD_SCALING), FEA_MAP(DS_GFXCLK), FEA_MAP(DS_SOCCLK), + FEA_MAP(DS_FCLK), FEA_MAP(DS_LCLK), FEA_MAP(DS_DCEFCLK), FEA_MAP(DS_UCLK),
From: Andrey Grodzovsky andrey.grodzovsky@amd.com
commit f1bcddffe46b349a82445a8d9efd5f5fcb72557f upstream.
The PSP sysfs interface is not cleaned up on driver unload for sienna_cichlid.
Fixes: ce87c98db428e7 ("drm/amdgpu: Include sienna_cichlid in USBC PD FW support.") Signed-off-by: Andrey Grodzovsky andrey.grodzovsky@amd.com Reviewed-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org # 5.9.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c @@ -206,7 +206,8 @@ static int psp_sw_fini(void *handle) adev->psp.ta_fw = NULL; }
- if (adev->asic_type == CHIP_NAVI10) + if (adev->asic_type == CHIP_NAVI10 || + adev->asic_type == CHIP_SIENNA_CICHLID) psp_sysfs_fini(adev);
return 0;
From: Likun Gao Likun.Gao@amd.com
commit 687e79c0feb4243b141b1e9a20adba3c0ec66f7f upstream.
Skip disabled shader arrays (SA) to correct the cu_info and active_rbs for sienna cichlid.
Signed-off-by: Likun Gao Likun.Gao@amd.com Reviewed-by: Hawking Zhang Hawking.Zhang@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Cc: stable@vger.kernel.org # 5.9.x Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 9 +++++++++ 1 file changed, 9 insertions(+)
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c @@ -4537,12 +4537,17 @@ static void gfx_v10_0_setup_rb(struct am int i, j; u32 data; u32 active_rbs = 0; + u32 bitmap; u32 rb_bitmap_width_per_sh = adev->gfx.config.max_backends_per_se / adev->gfx.config.max_sh_per_se;
mutex_lock(&adev->grbm_idx_mutex); for (i = 0; i < adev->gfx.config.max_shader_engines; i++) { for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) { + bitmap = i * adev->gfx.config.max_sh_per_se + j; + if ((adev->asic_type == CHIP_SIENNA_CICHLID) && + ((gfx_v10_3_get_disabled_sa(adev) >> bitmap) & 1)) + continue; gfx_v10_0_select_se_sh(adev, i, j, 0xffffffff); data = gfx_v10_0_get_rb_active_bitmap(adev); active_rbs |= data << ((i * adev->gfx.config.max_sh_per_se + j) * @@ -8761,6 +8766,10 @@ static int gfx_v10_0_get_cu_info(struct mutex_lock(&adev->grbm_idx_mutex); for (i = 0; i < adev->gfx.config.max_shader_engines; i++) { for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) { + bitmap = i * adev->gfx.config.max_sh_per_se + j; + if ((adev->asic_type == CHIP_SIENNA_CICHLID) && + ((gfx_v10_3_get_disabled_sa(adev) >> bitmap) & 1)) + continue; mask = 1; ao_bitmap = 0; counter = 0;
From: Linus Torvalds torvalds@linux-foundation.org
commit 90bfdeef83f1d6c696039b6a917190dcbbad3220 upstream.
Some of the font tty ioctl's always used the current foreground VC for their operations. Don't do that then.
This fixes a data race on fg_console.
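As a hedged illustration of the calling convention the fix moves to (the wrapper name is made up; con_font_op() is the real helper changed in the diff below): the console derived from the tty the ioctl came in on is used, rather than vc_cons[fg_console].d.

#include <linux/vt_kern.h>

static int example_set_font(struct vc_data *vc, struct console_font_op *op)
{
        /*
         * The old code used vc_cons[fg_console].d here, i.e. whichever
         * console happened to be in the foreground, which both races with
         * console switching and can target the wrong VT entirely.
         */
        return con_font_op(vc, op);
}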
Side note: both Michael Ellerman and Jiri Slaby point out that all these ioctls are deprecated, and should probably have been removed long ago, and everything seems to be using the KDFONTOP ioctl instead.
In fact, Michael points out that it looks like busybox's loadfont program seems to have switched over to using KDFONTOP exactly _because_ of this bug (ahem.. 12 years ago ;-).
Reported-by: Minh Yuan yuanmingbuaa@gmail.com Acked-by: Michael Ellerman mpe@ellerman.id.au Acked-by: Jiri Slaby jirislaby@kernel.org Cc: Greg KH greg@kroah.com Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/tty/vt/vt_ioctl.c | 36 +++++++++++++++++++----------------- 1 file changed, 19 insertions(+), 17 deletions(-)
--- a/drivers/tty/vt/vt_ioctl.c +++ b/drivers/tty/vt/vt_ioctl.c @@ -485,7 +485,7 @@ static int vt_k_ioctl(struct tty_struct return 0; }
-static inline int do_fontx_ioctl(int cmd, +static inline int do_fontx_ioctl(struct vc_data *vc, int cmd, struct consolefontdesc __user *user_cfd, struct console_font_op *op) { @@ -503,15 +503,16 @@ static inline int do_fontx_ioctl(int cmd op->height = cfdarg.charheight; op->charcount = cfdarg.charcount; op->data = cfdarg.chardata; - return con_font_op(vc_cons[fg_console].d, op); - case GIO_FONTX: { + return con_font_op(vc, op); + + case GIO_FONTX: op->op = KD_FONT_OP_GET; op->flags = KD_FONT_FLAG_OLD; op->width = 8; op->height = cfdarg.charheight; op->charcount = cfdarg.charcount; op->data = cfdarg.chardata; - i = con_font_op(vc_cons[fg_console].d, op); + i = con_font_op(vc, op); if (i) return i; cfdarg.charheight = op->height; @@ -519,12 +520,11 @@ static inline int do_fontx_ioctl(int cmd if (copy_to_user(user_cfd, &cfdarg, sizeof(struct consolefontdesc))) return -EFAULT; return 0; - } } return -EINVAL; }
-static int vt_io_fontreset(struct console_font_op *op) +static int vt_io_fontreset(struct vc_data *vc, struct console_font_op *op) { int ret;
@@ -538,12 +538,12 @@ static int vt_io_fontreset(struct consol
op->op = KD_FONT_OP_SET_DEFAULT; op->data = NULL; - ret = con_font_op(vc_cons[fg_console].d, op); + ret = con_font_op(vc, op); if (ret) return ret;
console_lock(); - con_set_default_unimap(vc_cons[fg_console].d); + con_set_default_unimap(vc); console_unlock();
return 0; @@ -585,7 +585,7 @@ static int vt_io_ioctl(struct vc_data *v op.height = 0; op.charcount = 256; op.data = up; - return con_font_op(vc_cons[fg_console].d, &op); + return con_font_op(vc, &op);
case GIO_FONT: op.op = KD_FONT_OP_GET; @@ -594,7 +594,7 @@ static int vt_io_ioctl(struct vc_data *v op.height = 32; op.charcount = 256; op.data = up; - return con_font_op(vc_cons[fg_console].d, &op); + return con_font_op(vc, &op);
case PIO_CMAP: if (!perm) @@ -610,13 +610,13 @@ static int vt_io_ioctl(struct vc_data *v
fallthrough; case GIO_FONTX: - return do_fontx_ioctl(cmd, up, &op); + return do_fontx_ioctl(vc, cmd, up, &op);
case PIO_FONTRESET: if (!perm) return -EPERM;
- return vt_io_fontreset(&op); + return vt_io_fontreset(vc, &op);
case PIO_SCRNMAP: if (!perm) @@ -1067,8 +1067,9 @@ struct compat_consolefontdesc { };
static inline int -compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd, - int perm, struct console_font_op *op) +compat_fontx_ioctl(struct vc_data *vc, int cmd, + struct compat_consolefontdesc __user *user_cfd, + int perm, struct console_font_op *op) { struct compat_consolefontdesc cfdarg; int i; @@ -1086,7 +1087,8 @@ compat_fontx_ioctl(int cmd, struct compa op->height = cfdarg.charheight; op->charcount = cfdarg.charcount; op->data = compat_ptr(cfdarg.chardata); - return con_font_op(vc_cons[fg_console].d, op); + return con_font_op(vc, op); + case GIO_FONTX: op->op = KD_FONT_OP_GET; op->flags = KD_FONT_FLAG_OLD; @@ -1094,7 +1096,7 @@ compat_fontx_ioctl(int cmd, struct compa op->height = cfdarg.charheight; op->charcount = cfdarg.charcount; op->data = compat_ptr(cfdarg.chardata); - i = con_font_op(vc_cons[fg_console].d, op); + i = con_font_op(vc, op); if (i) return i; cfdarg.charheight = op->height; @@ -1184,7 +1186,7 @@ long vt_compat_ioctl(struct tty_struct * */ case PIO_FONTX: case GIO_FONTX: - return compat_fontx_ioctl(cmd, up, perm, &op); + return compat_fontx_ioctl(vc, cmd, up, perm, &op);
case KDFONTOP: return compat_kdfontop_ioctl(up, perm, &op, vc);
From: Jisheng Zhang Jisheng.Zhang@synaptics.com
commit b0fc70ce1f028e14a37c186d9f7a55e51439b83a upstream.
Berlin SoCs always contain some DW APB timers which can be used as an always-on broadcast timer.
Link: https://lore.kernel.org/r/20201009150536.214181fb@xhacker.debian Cc: stable@vger.kernel.org # v3.14+ Signed-off-by: Jisheng Zhang Jisheng.Zhang@synaptics.com Signed-off-by: Arnd Bergmann arnd@arndb.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm64/Kconfig.platforms | 1 + 1 file changed, 1 insertion(+)
--- a/arch/arm64/Kconfig.platforms +++ b/arch/arm64/Kconfig.platforms @@ -54,6 +54,7 @@ config ARCH_BCM_IPROC config ARCH_BERLIN bool "Marvell Berlin SoC Family" select DW_APB_ICTL + select DW_APB_TIMER_OF select GPIOLIB select PINCTRL help
From: Matthew Wilcox (Oracle) willy@infradead.org
commit 9480b4e75b7108ee68ecf5bc6b4bd68e8031c521 upstream.
If ->readpage returns an error, it has already unlocked the page.
Fixes: 5e929b33c393 ("CacheFiles: Handle truncate unlocking the page we're reading") Cc: stable@vger.kernel.org Signed-off-by: Matthew Wilcox (Oracle) willy@infradead.org Signed-off-by: David Howells dhowells@redhat.com Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/cachefiles/rdwr.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/fs/cachefiles/rdwr.c +++ b/fs/cachefiles/rdwr.c @@ -121,7 +121,7 @@ static int cachefiles_read_reissue(struc _debug("reissue read"); ret = bmapping->a_ops->readpage(NULL, backpage); if (ret < 0) - goto unlock_discard; + goto discard; }
/* but the page may have been read before the monitor was installed, so @@ -138,6 +138,7 @@ static int cachefiles_read_reissue(struc
unlock_discard: unlock_page(backpage); +discard: spin_lock_irq(&object->work_lock); list_del(&monitor->op_link); spin_unlock_irq(&object->work_lock);
From: Helge Deller deller@gmx.de
commit 879bc2d27904354b98ca295b6168718e045c4aa2 upstream.
When starting an HP machine with the HIL driver but without an HIL keyboard or HIL mouse attached, it may happen that data written to the HIL loop gets stuck (e.g. because the transaction queue is full). Usually one will then have to reboot the machine, because all you see is an endless output of: Transaction add failed: transaction already queued?
In the higher layers, hp_sdc_enqueue_transaction() is called to queue up a HIL packet. This function returns an error code, and this patch adds the necessary checks for that return code and disables the HIL driver if further packets can't be sent.
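A hedged, purely illustrative sketch of that shape (none of these names belong to the real driver; the real entry point is hp_sdc_enqueue_transaction()): the transmit hook now reports failure, and the periodic kicker refuses to re-arm once a failure has been latched.

#include <linux/kernel.h>
#include <linux/timer.h>

static int example_stuck;                       /* latched "loop is stuck" state */

/* Transmit hook: propagate the enqueue result instead of ignoring it. */
static int example_send(int (*enqueue)(void))
{
        int rc = enqueue();                     /* e.g. hp_sdc_enqueue_transaction() */

        if (rc)
                example_stuck = 1;              /* stop trying instead of looping */

        return rc;
}

/* Periodic kicker: bail out (and stay out) once the loop is stuck. */
static void example_kicker(struct timer_list *unused)
{
        if (example_stuck) {
                pr_warn("HIL loop seems stuck - disabling further polling\n");
                return;                         /* do not re-arm the timer */
        }

        /* ... schedule the servicing tasklet and re-arm the timer here ... */
}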
Tested on a HP 730 and a HP 715/64 machine.
Signed-off-by: Helge Deller deller@gmx.de Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/input/serio/hil_mlc.c | 21 ++++++++++++++++++--- drivers/input/serio/hp_sdc_mlc.c | 8 ++++---- include/linux/hil_mlc.h | 2 +- 3 files changed, 23 insertions(+), 8 deletions(-)
--- a/drivers/input/serio/hil_mlc.c +++ b/drivers/input/serio/hil_mlc.c @@ -74,7 +74,7 @@ EXPORT_SYMBOL(hil_mlc_unregister); static LIST_HEAD(hil_mlcs); static DEFINE_RWLOCK(hil_mlcs_lock); static struct timer_list hil_mlcs_kicker; -static int hil_mlcs_probe; +static int hil_mlcs_probe, hil_mlc_stop;
static void hil_mlcs_process(unsigned long unused); static DECLARE_TASKLET_DISABLED_OLD(hil_mlcs_tasklet, hil_mlcs_process); @@ -702,9 +702,13 @@ static int hilse_donode(hil_mlc *mlc) if (!mlc->ostarted) { mlc->ostarted = 1; mlc->opacket = pack; - mlc->out(mlc); + rc = mlc->out(mlc); nextidx = HILSEN_DOZE; write_unlock_irqrestore(&mlc->lock, flags); + if (rc) { + hil_mlc_stop = 1; + return 1; + } break; } mlc->ostarted = 0; @@ -715,8 +719,13 @@ static int hilse_donode(hil_mlc *mlc)
case HILSE_CTS: write_lock_irqsave(&mlc->lock, flags); - nextidx = mlc->cts(mlc) ? node->bad : node->good; + rc = mlc->cts(mlc); + nextidx = rc ? node->bad : node->good; write_unlock_irqrestore(&mlc->lock, flags); + if (rc) { + hil_mlc_stop = 1; + return 1; + } break;
default: @@ -780,6 +789,12 @@ static void hil_mlcs_process(unsigned lo
static void hil_mlcs_timer(struct timer_list *unused) { + if (hil_mlc_stop) { + /* could not send packet - stop immediately. */ + pr_warn(PREFIX "HIL seems stuck - Disabling HIL MLC.\n"); + return; + } + hil_mlcs_probe = 1; tasklet_schedule(&hil_mlcs_tasklet); /* Re-insert the periodic task. */ --- a/drivers/input/serio/hp_sdc_mlc.c +++ b/drivers/input/serio/hp_sdc_mlc.c @@ -210,7 +210,7 @@ static int hp_sdc_mlc_cts(hil_mlc *mlc) priv->tseq[2] = 1; priv->tseq[3] = 0; priv->tseq[4] = 0; - __hp_sdc_enqueue_transaction(&priv->trans); + return __hp_sdc_enqueue_transaction(&priv->trans); busy: return 1; done: @@ -219,7 +219,7 @@ static int hp_sdc_mlc_cts(hil_mlc *mlc) return 0; }
-static void hp_sdc_mlc_out(hil_mlc *mlc) +static int hp_sdc_mlc_out(hil_mlc *mlc) { struct hp_sdc_mlc_priv_s *priv;
@@ -234,7 +234,7 @@ static void hp_sdc_mlc_out(hil_mlc *mlc) do_data: if (priv->emtestmode) { up(&mlc->osem); - return; + return 0; } /* Shouldn't be sending commands when loop may be busy */ BUG_ON(down_trylock(&mlc->csem)); @@ -296,7 +296,7 @@ static void hp_sdc_mlc_out(hil_mlc *mlc) BUG_ON(down_trylock(&mlc->csem)); } enqueue: - hp_sdc_enqueue_transaction(&priv->trans); + return hp_sdc_enqueue_transaction(&priv->trans); }
static int __init hp_sdc_mlc_init(void) --- a/include/linux/hil_mlc.h +++ b/include/linux/hil_mlc.h @@ -103,7 +103,7 @@ struct hilse_node {
/* Methods for back-end drivers, e.g. hp_sdc_mlc */ typedef int (hil_mlc_cts) (hil_mlc *mlc); -typedef void (hil_mlc_out) (hil_mlc *mlc); +typedef int (hil_mlc_out) (hil_mlc *mlc); typedef int (hil_mlc_in) (hil_mlc *mlc, suseconds_t timeout);
struct hil_mlc_devinfo {
From: Frank Wunderlich frank-w@public-files.de
commit 36f0a5fc5284838c544218666c63ee8cfa46a9c3 upstream.
Port 6 of the mt7530 switch (= CPU port 0) on the Bananapi R2 is missing the pause option, which causes RX drops when running iperf.
Fixes: f4ff257cd160 ("arm: dts: mt7623: add support for Bananapi R2 (BPI-R2) board") Signed-off-by: Frank Wunderlich frank-w@public-files.de Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200907070517.51715-1-linux@fw-web.de Signed-off-by: Matthias Brugger matthias.bgg@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts | 1 + 1 file changed, 1 insertion(+)
--- a/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts +++ b/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts @@ -192,6 +192,7 @@ fixed-link { speed = <1000>; full-duplex; + pause; }; }; };
From: Joel Stanley joel@jms.id.au
commit c82bf6e133d30e0f9172a20807814fa28aef0f67 upstream.
A feature was added to the aspeed vuart driver to configure the vuart interrupt (sirq) polarity according to the LPC/eSPI strapping register.
Systems that depend on an active-low behaviour (sirq_polarity set to 0), such as OpenPower boxes, also use LPC, so this relationship does not hold. Jeremy confirms that the s2600st, which is strapped for eSPI, also does not have this relationship.
The property was added for a Tyan S7106 system, which is not supported in the kernel tree. Should this or other systems wish to use this feature of the driver, they should add it to the machine-specific device tree.
Fixes: c791fc76bc72 ("arm: dts: aspeed: Add vuart aspeed,sirq-polarity-sense...") Signed-off-by: Joel Stanley joel@jms.id.au Tested-by: Jeremy Kerr jk@ozlabs.org Reviewed-by: Jeremy Kerr jk@ozlabs.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200812112400.2406734-1-joel@jms.id.au Signed-off-by: Joel Stanley joel@jms.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm/boot/dts/aspeed-g5.dtsi | 1 - 1 file changed, 1 deletion(-)
--- a/arch/arm/boot/dts/aspeed-g5.dtsi +++ b/arch/arm/boot/dts/aspeed-g5.dtsi @@ -425,7 +425,6 @@ interrupts = <8>; clocks = <&syscon ASPEED_CLK_APB>; no-loopback-test; - aspeed,sirq-polarity-sense = <&syscon 0x70 25>; status = "disabled"; };
From: Krzysztof Kozlowski krzk@kernel.org
commit 2c6658c607a3af2ed7bd41dc57a3dd31537d023e upstream.
Fix a typo in the pinctrl property of the "vibrator-en" fixed regulator in the Aries family of boards. The error caused a lack of pin configuration for the GPIO used by the vibrator.
Fixes: 04568cb58a43 ("ARM: dts: s5pv210: Disable pull for vibrator enable GPIO on Aries boards") Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200907161141.31034-5-krzk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm/boot/dts/s5pv210-aries.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm/boot/dts/s5pv210-aries.dtsi +++ b/arch/arm/boot/dts/s5pv210-aries.dtsi @@ -66,7 +66,7 @@ gpio = <&gpj1 1 GPIO_ACTIVE_HIGH>;
pinctrl-names = "default"; - pinctr-0 = <&vibrator_ena>; + pinctrl-0 = <&vibrator_ena>; };
touchkey_vdd: regulator-fixed-1 {
From: Joel Stanley joel@jms.id.au
commit 98c3f0a1b3ef83f6be6b212c970bee795e1a0467 upstream.
In the 5.7 merge window the media Kconfig was restructured. For most platforms these changes set CONFIG_MEDIA_SUPPORT_FILTER=y, which keeps unwanted drivers disabled.
The exception is if a config sets EMBEDDED or EXPERT (see b0cd4fb27665). In that case the filter is set to =n, causing a bunch of DVB tuner drivers (MEDIA_TUNER_*) to be accidentally enabled. This was noticed as it blew out the build time for the Aspeed defconfigs.
Enabling the filter means the Aspeed config also needs to set CONFIG_MEDIA_PLATFORM_SUPPORT=y in order to have the CONFIG_VIDEO_ASPEED driver enabled.
Fixes: 06b93644f4d1 ("media: Kconfig: add an option to filter in/out platform drivers") Fixes: b0cd4fb27665 ("media: Kconfig: on !EMBEDDED && !EXPERT, enable driver filtering") Signed-off-by: Joel Stanley joel@jms.id.au Cc: stable@vger.kernel.org CC: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm/configs/aspeed_g4_defconfig | 3 ++- arch/arm/configs/aspeed_g5_defconfig | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-)
--- a/arch/arm/configs/aspeed_g4_defconfig +++ b/arch/arm/configs/aspeed_g4_defconfig @@ -160,7 +160,8 @@ CONFIG_SENSORS_TMP421=y CONFIG_SENSORS_W83773G=y CONFIG_WATCHDOG_SYSFS=y CONFIG_MEDIA_SUPPORT=y -CONFIG_MEDIA_CAMERA_SUPPORT=y +CONFIG_MEDIA_SUPPORT_FILTER=y +CONFIG_MEDIA_PLATFORM_SUPPORT=y CONFIG_V4L_PLATFORM_DRIVERS=y CONFIG_VIDEO_ASPEED=y CONFIG_DRM=y --- a/arch/arm/configs/aspeed_g5_defconfig +++ b/arch/arm/configs/aspeed_g5_defconfig @@ -175,7 +175,8 @@ CONFIG_SENSORS_TMP421=y CONFIG_SENSORS_W83773G=y CONFIG_WATCHDOG_SYSFS=y CONFIG_MEDIA_SUPPORT=y -CONFIG_MEDIA_CAMERA_SUPPORT=y +CONFIG_MEDIA_SUPPORT_FILTER=y +CONFIG_MEDIA_PLATFORM_SUPPORT=y CONFIG_V4L_PLATFORM_DRIVERS=y CONFIG_VIDEO_ASPEED=y CONFIG_DRM=y
From: Krzysztof Kozlowski krzk@kernel.org
commit 7be0d19c751b02db778ca95e3274d5ea7f31891c upstream.
Selecting CONFIG_SAMSUNG_PM_DEBUG (which depends on CONFIG_DEBUG_LL) without CONFIG_MMU leads to build errors:
arch/arm/plat-samsung/pm-debug.c: In function ‘s3c_pm_uart_base’: arch/arm/plat-samsung/pm-debug.c:57:2: error: implicit declaration of function ‘debug_ll_addr’ [-Werror=implicit-function-declaration]
Fixes: 99b2fc2b8b40 ("ARM: SAMSUNG: Use debug_ll_addr() to get UART base address") Reported-by: kernel test robot lkp@intel.com Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200910154150.3318-1-krzk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm/plat-samsung/Kconfig | 1 + 1 file changed, 1 insertion(+)
--- a/arch/arm/plat-samsung/Kconfig +++ b/arch/arm/plat-samsung/Kconfig @@ -241,6 +241,7 @@ config SAMSUNG_PM_DEBUG depends on PM && DEBUG_KERNEL depends on PLAT_S3C24XX || ARCH_S3C64XX || ARCH_S5PV210 depends on DEBUG_EXYNOS_UART || DEBUG_S3C24XX_UART || DEBUG_S3C2410_UART + depends on DEBUG_LL && MMU help Say Y here if you want verbose debugging from the PM Suspend and Resume code. See file:Documentation/arm/samsung-s3c24xx/suspend.rst
From: Krzysztof Kozlowski krzk@kernel.org
commit f6d7cde84f6c5551586c8b9b68d70f8e6dc9a000 upstream.
Commit f6361c6b3880 ("ARM: S3C24XX: remove separate restart code") removed usage of the watchdog reset platform code in favor of the Samsung SoC watchdog driver. However, the latter was not selected, so S3C24xx platforms lost the ability to reset.
Cc: stable@vger.kernel.org Fixes: f6361c6b3880 ("ARM: S3C24XX: remove separate restart code") Signed-off-by: Krzysztof Kozlowski krzk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm/Kconfig | 2 ++ 1 file changed, 2 insertions(+)
--- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -506,8 +506,10 @@ config ARCH_S3C24XX select HAVE_S3C2410_WATCHDOG if WATCHDOG select HAVE_S3C_RTC if RTC_CLASS select NEED_MACH_IO_H + select S3C2410_WATCHDOG select SAMSUNG_ATAGS select USE_OF + select WATCHDOG help Samsung S3C2410, S3C2412, S3C2413, S3C2416, S3C2440, S3C2442, S3C2443 and S3C2450 SoCs based systems, such as the Simtec Electronics BAST
From: Fangrui Song maskray@google.com
commit ec9d78070de986ecf581ea204fd322af4d2477ec upstream.
Commit 39d114ddc682 ("arm64: add KASAN support") added .weak directives to arch/arm64/lib/mem*.S instead of changing the existing SYM_FUNC_START_PI macros. This can lead to the assembly snippet `.weak memcpy ... .globl memcpy` which will produce a STB_WEAK memcpy with GNU as but STB_GLOBAL memcpy with LLVM's integrated assembler before LLVM 12. LLVM 12 (since https://reviews.llvm.org/D90108) will error on such an overridden symbol binding.
Use the appropriate SYM_FUNC_START_WEAK_PI instead.
Fixes: 39d114ddc682 ("arm64: add KASAN support") Reported-by: Sami Tolvanen samitolvanen@google.com Signed-off-by: Fangrui Song maskray@google.com Tested-by: Sami Tolvanen samitolvanen@google.com Tested-by: Nick Desaulniers ndesaulniers@google.com Reviewed-by: Nick Desaulniers ndesaulniers@google.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20201029181951.1866093-1-maskray@google.com Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm64/lib/memcpy.S | 3 +-- arch/arm64/lib/memmove.S | 3 +-- arch/arm64/lib/memset.S | 3 +-- 3 files changed, 3 insertions(+), 6 deletions(-)
--- a/arch/arm64/lib/memcpy.S +++ b/arch/arm64/lib/memcpy.S @@ -56,9 +56,8 @@ stp \reg1, \reg2, [\ptr], \val .endm
- .weak memcpy SYM_FUNC_START_ALIAS(__memcpy) -SYM_FUNC_START_PI(memcpy) +SYM_FUNC_START_WEAK_PI(memcpy) #include "copy_template.S" ret SYM_FUNC_END_PI(memcpy) --- a/arch/arm64/lib/memmove.S +++ b/arch/arm64/lib/memmove.S @@ -45,9 +45,8 @@ C_h .req x12 D_l .req x13 D_h .req x14
- .weak memmove SYM_FUNC_START_ALIAS(__memmove) -SYM_FUNC_START_PI(memmove) +SYM_FUNC_START_WEAK_PI(memmove) cmp dstin, src b.lo __memcpy add tmp1, src, count --- a/arch/arm64/lib/memset.S +++ b/arch/arm64/lib/memset.S @@ -42,9 +42,8 @@ dst .req x8 tmp3w .req w9 tmp3 .req x9
- .weak memset SYM_FUNC_START_ALIAS(__memset) -SYM_FUNC_START_PI(memset) +SYM_FUNC_START_WEAK_PI(memset) mov dst, dstin /* Preserve return value. */ and A_lw, val, #255 orr A_lw, A_lw, A_lw, lsl #8
From: Pali Rohár pali@kernel.org
commit b64d814257b027e29a474bcd660f6372490138c7 upstream.
Espressobin boards have 3 ethernet ports and some of them got assigned more than one MAC address. The MAC addresses are stored in the U-Boot environment.
Since commit a2c7023f7075c ("net: dsa: read mac address from DT for slave device") the kernel can use MAC addresses from DT for a particular DSA port.
Currently the Espressobin DTS file contains an alias just for ethernet0.
This patch defines additional ethernet aliases in the Espressobin DTS files, so the bootloader can fill in the correct MAC address for the DSA switch ports if more MAC addresses were specified.
The DT alias ethernet1 is used for the wan port, and the DT aliases ethernet2 and ethernet3 are used for the lan ports on both Espressobin revisions (V5 and V7).
Fixes: 5253cb8c00a6f ("arm64: dts: marvell: espressobin: add ethernet alias") Cc: stable@vger.kernel.org # a2c7023f7075c: dsa: read mac address Signed-off-by: Pali Rohár pali@kernel.org Reviewed-by: Andrew Lunn andrew@lunn.ch Reviewed-by: Andre Heider a.heider@gmail.com Signed-off-by: Gregory CLEMENT gregory.clement@bootlin.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7-emmc.dts | 10 ++++++-- arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7.dts | 10 ++++++-- arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi | 12 ++++++---- 3 files changed, 24 insertions(+), 8 deletions(-)
--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7-emmc.dts +++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7-emmc.dts @@ -20,17 +20,23 @@ compatible = "globalscale,espressobin-v7-emmc", "globalscale,espressobin-v7", "globalscale,espressobin", "marvell,armada3720", "marvell,armada3710"; + + aliases { + /* ethernet1 is wan port */ + ethernet1 = &switch0port3; + ethernet3 = &switch0port1; + }; };
&switch0 { ports { - port@1 { + switch0port1: port@1 { reg = <1>; label = "lan1"; phy-handle = <&switch0phy0>; };
- port@3 { + switch0port3: port@3 { reg = <3>; label = "wan"; phy-handle = <&switch0phy2>; --- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7.dts +++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7.dts @@ -19,17 +19,23 @@ model = "Globalscale Marvell ESPRESSOBin Board V7"; compatible = "globalscale,espressobin-v7", "globalscale,espressobin", "marvell,armada3720", "marvell,armada3710"; + + aliases { + /* ethernet1 is wan port */ + ethernet1 = &switch0port3; + ethernet3 = &switch0port1; + }; };
&switch0 { ports { - port@1 { + switch0port1: port@1 { reg = <1>; label = "lan1"; phy-handle = <&switch0phy0>; };
- port@3 { + switch0port3: port@3 { reg = <3>; label = "wan"; phy-handle = <&switch0phy2>; --- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi +++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi @@ -13,6 +13,10 @@ / { aliases { ethernet0 = ð0; + /* for dsa slave device */ + ethernet1 = &switch0port1; + ethernet2 = &switch0port2; + ethernet3 = &switch0port3; serial0 = &uart0; serial1 = &uart1; }; @@ -120,7 +124,7 @@ #address-cells = <1>; #size-cells = <0>;
- port@0 { + switch0port0: port@0 { reg = <0>; label = "cpu"; ethernet = <ð0>; @@ -131,19 +135,19 @@ }; };
- port@1 { + switch0port1: port@1 { reg = <1>; label = "wan"; phy-handle = <&switch0phy0>; };
- port@2 { + switch0port2: port@2 { reg = <2>; label = "lan0"; phy-handle = <&switch0phy1>; };
- port@3 { + switch0port3: port@3 { reg = <3>; label = "lan1"; phy-handle = <&switch0phy2>;
From: Kanchan Joshi joshi.k@samsung.com
commit 35bc10b2eafbb701064b94f283b77c54d3304842 upstream.
Parallel write, read, and zone-management operations accessing or altering the zone state and write pointer may race. Avoid this by using a new spinlock for the zoned device. This lock also avoids the issue of concurrent zone appends (on a zone) returning the same write pointer.
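As a standalone, hedged sketch of the locking scheme (hypothetical structure and helpers, not null_blk's code): one per-device spinlock serialises everything that reads or advances the per-zone write pointers.

#include <linux/blkzoned.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct example_zoned_dev {
        spinlock_t      zone_lock;      /* protects zones[] incl. write pointers */
        struct blk_zone *zones;
        unsigned int    nr_zones;
};

static void example_zoned_dev_init(struct example_zoned_dev *dev)
{
        spin_lock_init(&dev->zone_lock);
}

/* Report path: copy one zone descriptor out without racing against writers. */
static void example_report_zone(struct example_zoned_dev *dev,
                                unsigned int idx, struct blk_zone *out)
{
        spin_lock_irq(&dev->zone_lock);
        *out = dev->zones[idx];
        spin_unlock_irq(&dev->zone_lock);
}

/* Append path: each append observes and advances the write pointer atomically. */
static sector_t example_zone_append(struct example_zoned_dev *dev,
                                    unsigned int idx, sector_t nr_sectors)
{
        sector_t sector;

        spin_lock_irq(&dev->zone_lock);
        sector = dev->zones[idx].wp;
        dev->zones[idx].wp += nr_sectors;
        spin_unlock_irq(&dev->zone_lock);

        return sector;
}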
Cc: stable@vger.kernel.org Fixes: e0489ed5daeb ("null_blk: Support REQ_OP_ZONE_APPEND") Signed-off-by: Kanchan Joshi joshi.k@samsung.com Reviewed-by: Damien Le Moal damien.lemoal@wdc.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/block/null_blk.h | 1 + drivers/block/null_blk_zoned.c | 22 ++++++++++++++++++---- 2 files changed, 19 insertions(+), 4 deletions(-)
--- a/drivers/block/null_blk.h +++ b/drivers/block/null_blk.h @@ -44,6 +44,7 @@ struct nullb_device { unsigned int nr_zones; struct blk_zone *zones; sector_t zone_size_sects; + spinlock_t zone_lock;
unsigned long size; /* device size in MB */ unsigned long completion_nsec; /* time in ns to complete a request */ --- a/drivers/block/null_blk_zoned.c +++ b/drivers/block/null_blk_zoned.c @@ -45,6 +45,7 @@ int null_init_zoned_dev(struct nullb_dev if (!dev->zones) return -ENOMEM;
+ spin_lock_init(&dev->zone_lock); if (dev->zone_nr_conv >= dev->nr_zones) { dev->zone_nr_conv = dev->nr_zones - 1; pr_info("changed the number of conventional zones to %u", @@ -131,8 +132,11 @@ int null_report_zones(struct gendisk *di * So use a local copy to avoid corruption of the device zone * array. */ + spin_lock_irq(&dev->zone_lock); memcpy(&zone, &dev->zones[first_zone + i], sizeof(struct blk_zone)); + spin_unlock_irq(&dev->zone_lock); + error = cb(&zone, i, data); if (error) return error; @@ -277,18 +281,28 @@ static blk_status_t null_zone_mgmt(struc blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_opf op, sector_t sector, sector_t nr_sectors) { + blk_status_t sts; + struct nullb_device *dev = cmd->nq->dev; + + spin_lock_irq(&dev->zone_lock); switch (op) { case REQ_OP_WRITE: - return null_zone_write(cmd, sector, nr_sectors, false); + sts = null_zone_write(cmd, sector, nr_sectors, false); + break; case REQ_OP_ZONE_APPEND: - return null_zone_write(cmd, sector, nr_sectors, true); + sts = null_zone_write(cmd, sector, nr_sectors, true); + break; case REQ_OP_ZONE_RESET: case REQ_OP_ZONE_RESET_ALL: case REQ_OP_ZONE_OPEN: case REQ_OP_ZONE_CLOSE: case REQ_OP_ZONE_FINISH: - return null_zone_mgmt(cmd, op, sector); + sts = null_zone_mgmt(cmd, op, sector); + break; default: - return null_process_cmd(cmd, op, sector, nr_sectors); + sts = null_process_cmd(cmd, op, sector, nr_sectors); } + spin_unlock_irq(&dev->zone_lock); + + return sts; }
On 2020/11/04 5:52, Greg Kroah-Hartman wrote:
Greg,
I sent a followup patch fixing a bug introduced by this patch, but I forgot to mark it for stable. The patch is
commit aa1c09cb65e2 "null_blk: Fix locking in zoned mode"
Could you pick up that one too, please?
Best regards.
On Tue, Nov 03, 2020 at 11:36:35PM +0000, Damien Le Moal wrote:
It doesn't apply cleanly at all, can you provide a backport?
thanks,
greg k-h
From: Suzuki K Poulose suzuki.poulose@arm.com
commit 80624263fa289b3416f7ca309491f1b75e579477 upstream.
With LOCKDEP enabled, the CTI driver triggers the following splat due to an uninitialized lock class for dynamically allocated attribute objects.
[ 5.372901] coresight etm0: CPU0: ETM v4.0 initialized [ 5.376694] coresight etm1: CPU1: ETM v4.0 initialized [ 5.380785] coresight etm2: CPU2: ETM v4.0 initialized [ 5.385851] coresight etm3: CPU3: ETM v4.0 initialized [ 5.389808] BUG: key ffff00000564a798 has not been registered! [ 5.392456] ------------[ cut here ]------------ [ 5.398195] DEBUG_LOCKS_WARN_ON(1) [ 5.398233] WARNING: CPU: 1 PID: 32 at kernel/locking/lockdep.c:4623 lockdep_init_map_waits+0x14c/0x260 [ 5.406149] Modules linked in: [ 5.415411] CPU: 1 PID: 32 Comm: kworker/1:1 Not tainted 5.9.0-12034-gbbe85027ce80 #51 [ 5.418553] Hardware name: Qualcomm Technologies, Inc. APQ 8016 SBC (DT) [ 5.426453] Workqueue: events amba_deferred_retry_func [ 5.433299] pstate: 40000005 (nZcv daif -PAN -UAO -TCO BTYPE=--) [ 5.438252] pc : lockdep_init_map_waits+0x14c/0x260 [ 5.444410] lr : lockdep_init_map_waits+0x14c/0x260 [ 5.449007] sp : ffff800012bbb720 ...
[ 5.531561] Call trace: [ 5.536847] lockdep_init_map_waits+0x14c/0x260 [ 5.539027] __kernfs_create_file+0xa8/0x1c8 [ 5.543539] sysfs_add_file_mode_ns+0xd0/0x208 [ 5.548054] internal_create_group+0x118/0x3c8 [ 5.552307] internal_create_groups+0x58/0xb8 [ 5.556733] sysfs_create_groups+0x2c/0x38 [ 5.561160] device_add+0x2d8/0x768 [ 5.565148] device_register+0x28/0x38 [ 5.568537] coresight_register+0xf8/0x320 [ 5.572358] cti_probe+0x1b0/0x3f0
...
Fix this by initializing the attributes when they are allocated.
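A hedged, self-contained sketch of the rule being applied (illustrative attribute and function names; sysfs_attr_init() and device_create_file() are the real APIs): any attribute object allocated at run time needs sysfs_attr_init() before it is registered, so that lockdep has a key for it.

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/sysfs.h>

static ssize_t example_show(struct device *dev,
                            struct device_attribute *attr, char *buf)
{
        return scnprintf(buf, PAGE_SIZE, "example\n");
}

static int example_add_dynamic_attr(struct device *dev)
{
        struct device_attribute *da;

        da = devm_kzalloc(dev, sizeof(*da), GFP_KERNEL);
        if (!da)
                return -ENOMEM;

        /*
         * Statically declared attributes (DEVICE_ATTR() and friends) are
         * valid lockdep keys by virtue of being static objects; dynamically
         * allocated ones must register a key explicitly, which is what
         * sysfs_attr_init() does.
         */
        sysfs_attr_init(&da->attr);
        da->attr.name = "example";
        da->attr.mode = 0444;
        da->show = example_show;

        return device_create_file(dev, da);
}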
Fixes: 3c5597e39812 ("coresight: cti: Add connection information to sysfs") Reported-by: Leo Yan leo.yan@linaro.org Tested-by: Leo Yan leo.yan@linaro.org Cc: Mike Leach mike.leach@linaro.org Cc: Mathieu Poirier mathieu.poirier@linaro.org Signed-off-by: Suzuki K Poulose suzuki.poulose@arm.com Cc: stable stable@vger.kernel.org Signed-off-by: Mathieu Poirier mathieu.poirier@linaro.org Link: https://lore.kernel.org/r/20201029164559.1268531-2-mathieu.poirier@linaro.or... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org --- drivers/hwtracing/coresight/coresight-cti-sysfs.c | 7 +++++++ 1 file changed, 7 insertions(+)
--- a/drivers/hwtracing/coresight/coresight-cti-sysfs.c +++ b/drivers/hwtracing/coresight/coresight-cti-sysfs.c @@ -1065,6 +1065,13 @@ static int cti_create_con_sysfs_attr(str } eattr->var = con; con->con_attrs[attr_idx] = &eattr->attr.attr; + /* + * Initialize the dynamically allocated attribute + * to avoid LOCKDEP splat. See include/linux/sysfs.h + * for more details. + */ + sysfs_attr_init(con->con_attrs[attr_idx]); + return 0; }
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
commit d5dcce0c414fcbfe4c2037b66ac69ea5f9b3f75c upstream.
"Primary" and "secondary" describe the type of the nodes, which may define their ordering. However, if the primary node is gone, we can't maintain the ordering by definition of the linked list; the secondary node then becomes first in the list, but its meaning is still secondary (or auxiliary). The type of the node is tracked by the secondary pointer in it:
secondary pointer       Meaning
NULL or valid           primary node
ERR_PTR(-ENODEV)        secondary node
So, if for some reason we do the following sequence of calls
  set_primary_fwnode(dev, NULL);
  set_primary_fwnode(dev, primary);
the secondary node should be preserved.
This concept is supported by the description of set_primary_fwnode() along with the implementation of set_secondary_fwnode(). Hence, fix commit c15e1bdda436 to follow this as well.
Fixes: c15e1bdda436 ("device property: Fix the secondary firmware node handling in set_primary_fwnode()") Cc: Ferry Toth fntoth@gmail.com Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Reviewed-by: Heikki Krogerus heikki.krogerus@linux.intel.com Tested-by: Ferry Toth fntoth@gmail.com Cc: 5.9+ stable@vger.kernel.org # 5.9+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/base/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -4274,7 +4274,7 @@ void set_primary_fwnode(struct device *d } else { if (fwnode_is_primary(fn)) { dev->fwnode = fn->secondary; - fn->secondary = NULL; + fn->secondary = ERR_PTR(-ENODEV); } else { dev->fwnode = NULL; }
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
commit 99aed9227073fb34ce2880cbc7063e04185a65e1 upstream.
It appears that firmware nodes can be shared between devices. In such a case, when a (child) device is about to be deleted, its firmware node may be shared, and the ACPI_COMPANION_SET(..., NULL) call for it breaks the secondary link of the shared primary firmware node.
In order to prevent that, check whether the device has a parent whose firmware node is shared with its child, and avoid breaking the link in that case.
Fixes: c15e1bdda436 ("device property: Fix the secondary firmware node handling in set_primary_fwnode()") Reported-by: Ferry Toth fntoth@gmail.com Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Reviewed-by: Heikki Krogerus heikki.krogerus@linux.intel.com Tested-by: Ferry Toth fntoth@gmail.com Cc: 5.9+ stable@vger.kernel.org # 5.9+ Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/base/core.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -4260,6 +4260,7 @@ static inline bool fwnode_is_primary(str */ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode) { + struct device *parent = dev->parent; struct fwnode_handle *fn = dev->fwnode;
if (fwnode) { @@ -4274,7 +4275,8 @@ void set_primary_fwnode(struct device *d } else { if (fwnode_is_primary(fn)) { dev->fwnode = fn->secondary; - fn->secondary = ERR_PTR(-ENODEV); + if (!(parent && fn == parent->fwnode)) + fn->secondary = ERR_PTR(-ENODEV); } else { dev->fwnode = NULL; }
From: Takashi Iwai tiwai@suse.de
commit d383b3146d805a743658225c8973f5d38c6fedf4 upstream.
The newly introduced kvm_msr_ignored_check() tries to print error or debug messages via vcpu_*() macros, but those may cause Oops when NULL vcpu is passed for KVM_GET_MSRS ioctl.
Fix it by replacing the print calls with kvm_*() macros.
(Note that this will leave the vcpu argument completely unused in the function, but I didn't touch it in order to keep the fix as small as possible. A clean-up may be applied later.)
Fixes: 12bc2132b15e ("KVM: X86: Do the same ignore_msrs check for feature msrs") BugLink: https://bugzilla.suse.com/show_bug.cgi?id=1178280 Cc: stable@vger.kernel.org Signed-off-by: Takashi Iwai tiwai@suse.de Message-Id: 20201030151414.20165-1-tiwai@suse.de Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kvm/x86.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -259,13 +259,13 @@ static int kvm_msr_ignored_check(struct
if (ignore_msrs) { if (report_ignored_msrs) - vcpu_unimpl(vcpu, "ignored %s: 0x%x data 0x%llx\n", - op, msr, data); + kvm_pr_unimpl("ignored %s: 0x%x data 0x%llx\n", + op, msr, data); /* Mask the error */ return 0; } else { - vcpu_debug_ratelimited(vcpu, "unhandled %s: 0x%x data 0x%llx\n", - op, msr, data); + kvm_debug_ratelimited("unhandled %s: 0x%x data 0x%llx\n", + op, msr, data); return 1; } }
From: Marc Zyngier maz@kernel.org
commit 4a1c2c7f63c52ccb11770b5ae25920a6b79d3548 upstream.
The DBGD{CCINT,SCRext} and DBGVCR register entries in the cp14 array are missing their target register, resulting in all accesses being targetted at the guard sysreg (indexed by __INVALID_SYSREG__).
Point the emulation code at the actual register entries.
Fixes: bdfb4b389c8d ("arm64: KVM: add trap handlers for AArch32 debug registers") Signed-off-by: Marc Zyngier maz@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20201029172409.2768336-1-maz@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/kvm/sys_regs.c | 6 +++--- 2 files changed, 4 insertions(+), 3 deletions(-)
--- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -231,6 +231,7 @@ enum vcpu_sysreg { #define cp14_DBGWCR0 (DBGWCR0_EL1 * 2) #define cp14_DBGWVR0 (DBGWVR0_EL1 * 2) #define cp14_DBGDCCINT (MDCCINT_EL1 * 2) +#define cp14_DBGVCR (DBGVCR32_EL2 * 2)
#define NR_COPRO_REGS (NR_SYS_REGS * 2)
--- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1881,9 +1881,9 @@ static const struct sys_reg_desc cp14_re { Op1( 0), CRn( 0), CRm( 1), Op2( 0), trap_raz_wi }, DBG_BCR_BVR_WCR_WVR(1), /* DBGDCCINT */ - { Op1( 0), CRn( 0), CRm( 2), Op2( 0), trap_debug32 }, + { Op1( 0), CRn( 0), CRm( 2), Op2( 0), trap_debug32, NULL, cp14_DBGDCCINT }, /* DBGDSCRext */ - { Op1( 0), CRn( 0), CRm( 2), Op2( 2), trap_debug32 }, + { Op1( 0), CRn( 0), CRm( 2), Op2( 2), trap_debug32, NULL, cp14_DBGDSCRext }, DBG_BCR_BVR_WCR_WVR(2), /* DBGDTR[RT]Xint */ { Op1( 0), CRn( 0), CRm( 3), Op2( 0), trap_raz_wi }, @@ -1898,7 +1898,7 @@ static const struct sys_reg_desc cp14_re { Op1( 0), CRn( 0), CRm( 6), Op2( 2), trap_raz_wi }, DBG_BCR_BVR_WCR_WVR(6), /* DBGVCR */ - { Op1( 0), CRn( 0), CRm( 7), Op2( 0), trap_debug32 }, + { Op1( 0), CRn( 0), CRm( 7), Op2( 0), trap_debug32, NULL, cp14_DBGVCR }, DBG_BCR_BVR_WCR_WVR(7), DBG_BCR_BVR_WCR_WVR(8), DBG_BCR_BVR_WCR_WVR(9),
From: Zong Li zong.li@sifive.com
commit 4230e2deaa484b385aa01d598b2aea8e7f2660a6 upstream.
Some architectures assume that the stopped CPUs don't make function calls to traceable functions when they are in the stopped state. See also commit cb9d7fd51d9f ("watchdog: Mark watchdog touch functions as notrace").
Violating this assumption causes kernel crashes when switching tracer on RISC-V.
Mark rcu_momentary_dyntick_idle() and stop_machine_yield() notrace to prevent this.
Fixes: 4ecf0a43e729 ("processor: get rid of cpu_relax_yield") Fixes: 366237e7b083 ("stop_machine: Provide RCU quiescent state in multi_cpu_stop()") Signed-off-by: Zong Li zong.li@sifive.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Tested-by: Atish Patra atish.patra@wdc.com Tested-by: Colin Ian King colin.king@canonical.com Acked-by: Steven Rostedt (VMware) rostedt@goodmis.org Acked-by: Paul E. McKenney paulmck@kernel.org Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20201021073839.43935-1-zong.li@sifive.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/rcu/tree.c | 2 +- kernel/stop_machine.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
--- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -416,7 +416,7 @@ bool rcu_eqs_special_set(int cpu) * * The caller must have disabled interrupts and must not be idle. */ -void rcu_momentary_dyntick_idle(void) +notrace void rcu_momentary_dyntick_idle(void) { int special;
--- a/kernel/stop_machine.c +++ b/kernel/stop_machine.c @@ -178,7 +178,7 @@ static void ack_state(struct multi_stop_ set_state(msdata, msdata->state + 1); }
-void __weak stop_machine_yield(const struct cpumask *cpumask) +notrace void __weak stop_machine_yield(const struct cpumask *cpumask) { cpu_relax(); }
From: Jing Xiangfeng jingxiangfeng@huawei.com
commit 7e97e4cbf30026b49b0145c3bfe06087958382c5 upstream.
In the current code, controller_probe() fails to call ida_simple_remove() in an error path. Jump to the correct label to fix it.
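A hedged, generic sketch of the unwind pattern (hypothetical names, not the anybuss driver itself): each failure jumps to the label that releases exactly what had been acquired up to that point, so the IDA slot is always returned.

#include <linux/idr.h>
#include <linux/slab.h>

static DEFINE_IDA(example_ida);

static int example_probe(bool later_step_fails)
{
        void *buf;
        int id, err;

        id = ida_simple_get(&example_ida, 0, 0, GFP_KERNEL);
        if (id < 0)
                return id;

        buf = kzalloc(64, GFP_KERNEL);
        if (!buf) {
                err = -ENOMEM;
                goto out_ida;           /* only the IDA slot is held so far */
        }

        if (later_step_fails) {         /* stand-in for a later registration failing */
                err = -ENODEV;
                goto out_free;          /* unwinds the buffer and the IDA slot */
        }

        return 0;

out_free:
        kfree(buf);
out_ida:
        ida_simple_remove(&example_ida, id);
        return err;
}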
Fixes: 17614978ed34 ("staging: fieldbus: anybus-s: support the Arcx anybus controller") Reviewed-by: Sven Van Asbroeck TheSven73@gmail.com Signed-off-by: Jing Xiangfeng jingxiangfeng@huawei.com Cc: stable stable@vger.kernel.org Link: https://lore.kernel.org/r/20201012132404.113031-1-jingxiangfeng@huawei.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/staging/fieldbus/anybuss/arcx-anybus.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/staging/fieldbus/anybuss/arcx-anybus.c +++ b/drivers/staging/fieldbus/anybuss/arcx-anybus.c @@ -293,7 +293,7 @@ static int controller_probe(struct platf regulator = devm_regulator_register(dev, &can_power_desc, &config); if (IS_ERR(regulator)) { err = PTR_ERR(regulator); - goto out_reset; + goto out_ida; } /* make controller info visible to userspace */ cd->class_dev = kzalloc(sizeof(*cd->class_dev), GFP_KERNEL);
From: Ian Abbott abbotti@mev.co.uk
commit 647a6002cb41d358d9ac5de101a8a6dc74748a59 upstream.
The "cb_pcidas" driver supports asynchronous commands on the analog output (AO) subdevice for those boards that have an AO FIFO. The code (in `cb_pcidas_ao_check_chanlist()` and `cb_pcidas_ao_cmd()`) to validate and set up the command supports output to a single channel or to two channels simultaneously (the boards have two AO channels). However, the code in `cb_pcidas_auto_attach()` that initializes the subdevices neglects to initialize the AO subdevice's `len_chanlist` member, leaving it set to 0, but the Comedi core will "correct" it to 1 if the driver neglected to set it. This limits commands to use a single channel (either channel 0 or 1), but the limit should be two channels. Set the AO subdevice's `len_chanlist` member to be the same value as the `n_chan` member, which will be 2.
Cc: stable@vger.kernel.org Signed-off-by: Ian Abbott abbotti@mev.co.uk Link: https://lore.kernel.org/r/20201021122142.81628-1-abbotti@mev.co.uk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/staging/comedi/drivers/cb_pcidas.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/staging/comedi/drivers/cb_pcidas.c +++ b/drivers/staging/comedi/drivers/cb_pcidas.c @@ -1342,6 +1342,7 @@ static int cb_pcidas_auto_attach(struct if (dev->irq && board->has_ao_fifo) { dev->write_subdev = s; s->subdev_flags |= SDF_CMD_WRITE; + s->len_chanlist = s->n_chan; s->do_cmdtest = cb_pcidas_ao_cmdtest; s->do_cmd = cb_pcidas_ao_cmd; s->cancel = cb_pcidas_ao_cancel;
From: Alexander Sverdlin alexander.sverdlin@nokia.com
commit 179f5dc36b0a1aa31538d7d8823deb65c39847b3 upstream.
The PHYs must be registered once, in the device probe function, and not in the device open callback, because it is only possible to register them once.
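A hedged sketch of the resulting split (illustrative helpers, not the octeon driver itself; the of_mdio calls are the real API): registration happens once at probe time, while the open path only looks the PHY node up.

#include <linux/netdevice.h>
#include <linux/of.h>
#include <linux/of_mdio.h>

/* Probe time: register the fixed link exactly once per device node. */
static int example_probe_fixed_link(struct net_device *ndev,
                                    struct device_node *np)
{
        if (np && of_phy_is_fixed_link(np)) {
                int rc = of_phy_register_fixed_link(np);

                if (rc) {
                        netdev_err(ndev, "failed to register fixed link: %d\n", rc);
                        return rc;
                }
        }
        return 0;
}

/* Open time: only look the node up; do not register it again. */
static struct device_node *example_open_get_phy_node(struct device_node *np)
{
        if (of_phy_is_fixed_link(np))
                return of_node_get(np);

        return of_parse_phandle(np, "phy-handle", 0);
}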
Fixes: a25e278020bf ("staging: octeon: support fixed-link phys") Signed-off-by: Alexander Sverdlin alexander.sverdlin@nokia.com Cc: stable stable@vger.kernel.org Link: https://lore.kernel.org/r/20201016101858.11374-1-alexander.sverdlin@nokia.co... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/staging/octeon/ethernet-mdio.c | 6 ------ drivers/staging/octeon/ethernet.c | 9 +++++++++ 2 files changed, 9 insertions(+), 6 deletions(-)
--- a/drivers/staging/octeon/ethernet-mdio.c +++ b/drivers/staging/octeon/ethernet-mdio.c @@ -147,12 +147,6 @@ int cvm_oct_phy_setup_device(struct net_
phy_node = of_parse_phandle(priv->of_node, "phy-handle", 0); if (!phy_node && of_phy_is_fixed_link(priv->of_node)) { - int rc; - - rc = of_phy_register_fixed_link(priv->of_node); - if (rc) - return rc; - phy_node = of_node_get(priv->of_node); } if (!phy_node) --- a/drivers/staging/octeon/ethernet.c +++ b/drivers/staging/octeon/ethernet.c @@ -13,6 +13,7 @@ #include <linux/phy.h> #include <linux/slab.h> #include <linux/interrupt.h> +#include <linux/of_mdio.h> #include <linux/of_net.h> #include <linux/if_ether.h> #include <linux/if_vlan.h> @@ -892,6 +893,14 @@ static int cvm_oct_probe(struct platform break; }
+ if (priv->of_node && of_phy_is_fixed_link(priv->of_node)) { + if (of_phy_register_fixed_link(priv->of_node)) { + netdev_err(dev, "Failed to register fixed link for interface %d, port %d\n", + interface, priv->port); + dev->netdev_ops = NULL; + } + } + if (!dev->netdev_ops) { free_netdev(dev); } else if (register_netdev(dev) < 0) {
From: Alexander Sverdlin alexander.sverdlin@nokia.com
commit 49d28ebdf1e30d806410eefc7de0a7a1ca5d747c upstream.
Currently, in case of an alignment or FCS error, the packet is still not dropped even if it cannot be corrected. Report the error properly and drop the packet, while making the surrounding code a little more readable.
Fixes: 80ff0fd3ab64 ("Staging: Add octeon-ethernet driver files.") Signed-off-by: Alexander Sverdlin alexander.sverdlin@nokia.com Cc: stable stable@vger.kernel.org Link: https://lore.kernel.org/r/20201016145630.41852-1-alexander.sverdlin@nokia.co... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/staging/octeon/ethernet-rx.c | 34 +++++++++++++++++++--------------- 1 file changed, 19 insertions(+), 15 deletions(-)
--- a/drivers/staging/octeon/ethernet-rx.c +++ b/drivers/staging/octeon/ethernet-rx.c @@ -69,15 +69,17 @@ static inline int cvm_oct_check_rcv_erro else port = work->word1.cn38xx.ipprt;
- if ((work->word2.snoip.err_code == 10) && (work->word1.len <= 64)) { + if ((work->word2.snoip.err_code == 10) && (work->word1.len <= 64)) /* * Ignore length errors on min size packets. Some * equipment incorrectly pads packets to 64+4FCS * instead of 60+4FCS. Note these packets still get * counted as frame errors. */ - } else if (work->word2.snoip.err_code == 5 || - work->word2.snoip.err_code == 7) { + return 0; + + if (work->word2.snoip.err_code == 5 || + work->word2.snoip.err_code == 7) { /* * We received a packet with either an alignment error * or a FCS error. This may be signalling that we are @@ -108,7 +110,10 @@ static inline int cvm_oct_check_rcv_erro /* Port received 0xd5 preamble */ work->packet_ptr.s.addr += i + 1; work->word1.len -= i + 5; - } else if ((*ptr & 0xf) == 0xd) { + return 0; + } + + if ((*ptr & 0xf) == 0xd) { /* Port received 0xd preamble */ work->packet_ptr.s.addr += i; work->word1.len -= i + 4; @@ -118,21 +123,20 @@ static inline int cvm_oct_check_rcv_erro ((*(ptr + 1) & 0xf) << 4); ptr++; } - } else { - printk_ratelimited("Port %d unknown preamble, packet dropped\n", - port); - cvm_oct_free_work(work); - return 1; + return 0; } + + printk_ratelimited("Port %d unknown preamble, packet dropped\n", + port); + cvm_oct_free_work(work); + return 1; } - } else { - printk_ratelimited("Port %d receive error code %d, packet dropped\n", - port, work->word2.snoip.err_code); - cvm_oct_free_work(work); - return 1; }
-	return 0;
+	printk_ratelimited("Port %d receive error code %d, packet dropped\n",
+			   port, work->word2.snoip.err_code);
+	cvm_oct_free_work(work);
+	return 1;
 }
static void copy_segments_to_skb(struct cvmx_wqe *work, struct sk_buff *skb)
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
commit a62f68f5ca53ab61cba2f0a410d0add7a6d54a52 upstream.
Add a helper function to test the flags of the cpufreq driver in use against a given flags mask.
In particular, this will be needed to test the CPUFREQ_NEED_UPDATE_LIMITS cpufreq driver flag in the schedutil governor.
Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/cpufreq/cpufreq.c |   12 ++++++++++++
 include/linux/cpufreq.h   |    1 +
 2 files changed, 13 insertions(+)
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -1904,6 +1904,18 @@ void cpufreq_resume(void)
 }
 /**
+ * cpufreq_driver_test_flags - Test cpufreq driver's flags against given ones.
+ * @flags: Flags to test against the current cpufreq driver's flags.
+ *
+ * Assumes that the driver is there, so callers must ensure that this is the
+ * case.
+ */
+bool cpufreq_driver_test_flags(u16 flags)
+{
+	return !!(cpufreq_driver->flags & flags);
+}
+
+/**
  * cpufreq_get_current_driver - return current driver's name
  *
  * Return the name string of the currently loaded cpufreq driver
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -428,6 +428,7 @@ struct cpufreq_driver {
 int cpufreq_register_driver(struct cpufreq_driver *driver_data);
 int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
+bool cpufreq_driver_test_flags(u16 flags);
 const char *cpufreq_get_current_driver(void);
 void *cpufreq_get_driver_data(void);
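As a rough illustration of how the new helper is meant to be used, a caller that needs to know whether the active driver declares a given capability flag would do something like the sketch below. The wrapper function is invented for this example and is not part of the patch; only cpufreq_driver_test_flags() and CPUFREQ_NEED_UPDATE_LIMITS are real names.

static bool driver_wants_limit_updates(void)
{
	/*
	 * Only meaningful while a cpufreq driver is registered; the helper
	 * itself does not check for a NULL cpufreq_driver.
	 */
	return cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);
}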
From: Rafael J. Wysocki rafael.j.wysocki@intel.com
commit d1e7c2996e988866e7ceceb4641a0886885b7889 upstream.
Because sugov_update_next_freq() may skip a frequency update even if the need_freq_update flag has been set for the policy at hand, policy limits updates may not take effect as expected.
For example, if the intel_pstate driver operates in the passive mode with HWP enabled, it needs to update the HWP min and max limits when the policy min and max limits change, respectively, but that may not happen if the target frequency does not change along with the limit at hand. In particular, if the policy min is changed first, causing the target frequency to be adjusted to it, and the policy max limit is changed later to the same value, the HWP max limit will not be updated to follow it as expected, because the target frequency is still equal to the policy min limit and it will not change until that limit is updated.
To address this issue, modify get_next_freq() to let the driver callback run if the CPUFREQ_NEED_UPDATE_LIMITS cpufreq driver flag is set regardless of whether or not the new frequency to set is equal to the previous one.
Fixes: f6ebbcf08f37 ("cpufreq: intel_pstate: Implement passive mode with HWP enabled")
Reported-by: Zhang Rui rui.zhang@intel.com
Tested-by: Zhang Rui rui.zhang@intel.com
Cc: 5.9+ stable@vger.kernel.org # 5.9+: 1c534352f47f cpufreq: Introduce CPUFREQ_NEED_UPDATE_LIMITS ...
Cc: 5.9+ stable@vger.kernel.org # 5.9+: a62f68f5ca53 cpufreq: Introduce cpufreq_driver_test_flags()
Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com
Acked-by: Viresh Kumar viresh.kumar@linaro.org
Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 kernel/sched/cpufreq_schedutil.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -102,7 +102,8 @@ static bool sugov_should_update_freq(str
 static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
 				   unsigned int next_freq)
 {
-	if (sg_policy->next_freq == next_freq)
+	if (sg_policy->next_freq == next_freq &&
+	    !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS))
 		return false;
 	sg_policy->next_freq = next_freq;
@@ -175,7 +176,8 @@ static unsigned int get_next_freq(struct
freq = map_util_freq(util, freq, max);
-	if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
+	if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update &&
+	    !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS))
 		return sg_policy->next_freq;
sg_policy->need_freq_update = false;
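For completeness, a driver opts in to this behaviour by setting the flag in its cpufreq_driver structure. The structure below is a made-up sketch (intel_pstate's real definition has many more fields and callbacks), not code from this patch:

static struct cpufreq_driver example_cpufreq_driver = {
	.name	= "example",
	/* Ask schedutil to call the driver even for "no change" updates. */
	.flags	= CPUFREQ_NEED_UPDATE_LIMITS,
	/* mandatory callbacks (init, verify, target/setpolicy) omitted here */
};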
From: Dan Carpenter dan.carpenter@oracle.com
commit 7922460e33c81f41e0d2421417228b32e6fdbe94 upstream.
The copy_to/from_user() functions return the number of bytes that could not be copied, but the ioctl should return -EFAULT if they fail.
Fixes: a127c5bbb6a8 ("vhost-vdpa: fix backend feature ioctls")
Signed-off-by: Dan Carpenter dan.carpenter@oracle.com
Link: https://lore.kernel.org/r/20201023120853.GI282278@mwanda
Signed-off-by: Michael S. Tsirkin mst@redhat.com
Cc: stable@vger.kernel.org
Acked-by: Jason Wang jasowang@redhat.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/vhost/vdpa.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -428,12 +428,11 @@ static long vhost_vdpa_unlocked_ioctl(st
 	void __user *argp = (void __user *)arg;
 	u64 __user *featurep = argp;
 	u64 features;
-	long r;
+	long r = 0;
 	if (cmd == VHOST_SET_BACKEND_FEATURES) {
-		r = copy_from_user(&features, featurep, sizeof(features));
-		if (r)
-			return r;
+		if (copy_from_user(&features, featurep, sizeof(features)))
+			return -EFAULT;
 		if (features & ~VHOST_VDPA_BACKEND_FEATURES)
 			return -EOPNOTSUPP;
 		vhost_set_backend_features(&v->vdev, features);
@@ -476,7 +475,8 @@ static long vhost_vdpa_unlocked_ioctl(st
 		break;
 	case VHOST_GET_BACKEND_FEATURES:
 		features = VHOST_VDPA_BACKEND_FEATURES;
-		r = copy_to_user(featurep, &features, sizeof(features));
+		if (copy_to_user(featurep, &features, sizeof(features)))
+			r = -EFAULT;
 		break;
 	default:
 		r = vhost_dev_ioctl(&v->vdev, cmd, argp);
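The convention the fix relies on is that copy_from_user()/copy_to_user() return the number of bytes left uncopied rather than a -errno value, so the caller has to translate any nonzero result into -EFAULT itself. A minimal sketch of the resulting pattern (the function below is invented for illustration and is not taken from the driver):

static long example_get_features(u64 __user *featurep)
{
	u64 features = VHOST_VDPA_BACKEND_FEATURES;

	if (copy_to_user(featurep, &features, sizeof(features)))
		return -EFAULT;	/* some bytes could not be copied */

	return 0;
}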
From: Jing Xiangfeng jingxiangfeng@huawei.com
commit 7ba08e81cb4aec9724ab7674a5de49e7a341062c upstream.
Fix the error handling path to return the variable "err" instead of "ret".
Fixes: 94abbccdf291 ("vdpa/mlx5: Add shared memory registration code")
Signed-off-by: Jing Xiangfeng jingxiangfeng@huawei.com
Link: https://lore.kernel.org/r/20201026070637.164321-1-jingxiangfeng@huawei.com
Signed-off-by: Michael S. Tsirkin mst@redhat.com
Acked-by: Eli Cohen elic@nvidia.com
Cc: stable@vger.kernel.org
Acked-by: Jason Wang jasowang@redhat.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/vdpa/mlx5/core/mr.c |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
--- a/drivers/vdpa/mlx5/core/mr.c
+++ b/drivers/vdpa/mlx5/core/mr.c
@@ -239,7 +239,6 @@ static int map_direct_mr(struct mlx5_vdp
 	u64 paend;
 	struct scatterlist *sg;
 	struct device *dma = mvdev->mdev->device;
-	int ret;
 	for (map = vhost_iotlb_itree_first(iotlb, mr->start, mr->end - 1); map;
 	     map = vhost_iotlb_itree_next(map, start, mr->end - 1)) {
@@ -277,8 +276,8 @@ static int map_direct_mr(struct mlx5_vdp
 done:
 	mr->log_size = log_entity_size;
 	mr->nsg = nsg;
-	ret = dma_map_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
-	if (!ret)
+	err = dma_map_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
+	if (!err)
 		goto err_map;
err = create_direct_mr(mvdev, mr);
From: Zeng Tao prime.zeng@hisilicon.com
commit cb47755725da7b90fecbb2aa82ac3b24a7adb89b upstream.
UBSAN reports:
Undefined behaviour in ./include/linux/time64.h:127:27
signed integer overflow:
17179869187 * 1000000000 cannot be represented in type 'long long int'
Call Trace:
 timespec64_to_ns include/linux/time64.h:127 [inline]
 set_cpu_itimer+0x65c/0x880 kernel/time/itimer.c:180
 do_setitimer+0x8e/0x740 kernel/time/itimer.c:245
 __x64_sys_setitimer+0x14c/0x2c0 kernel/time/itimer.c:336
 do_syscall_64+0xa1/0x540 arch/x86/entry/common.c:295
Commit bd40a175769d ("y2038: itimer: change implementation to timespec64") replaced the original conversion, which handled time clamping correctly, with timespec64_to_ns(), which has no overflow protection.
Fix it in timespec64_to_ns() as this is not necessarily limited to the usage in itimers.
[ tglx: Added comment and adjusted the fixes tag ]
Fixes: 361a3bf00582 ("time64: Add time64.h header and define struct timespec64")
Signed-off-by: Zeng Tao prime.zeng@hisilicon.com
Signed-off-by: Thomas Gleixner tglx@linutronix.de
Reviewed-by: Arnd Bergmann arnd@arndb.de
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/1598952616-6416-1-git-send-email-prime.zeng@hisili...
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 include/linux/time64.h |    4 ++++
 kernel/time/itimer.c   |    4 ----
 2 files changed, 4 insertions(+), 4 deletions(-)
--- a/include/linux/time64.h
+++ b/include/linux/time64.h
@@ -124,6 +124,10 @@ static inline bool timespec64_valid_sett
  */
 static inline s64 timespec64_to_ns(const struct timespec64 *ts)
 {
+	/* Prevent multiplication overflow */
+	if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX)
+		return KTIME_MAX;
+
 	return ((s64) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
 }
--- a/kernel/time/itimer.c
+++ b/kernel/time/itimer.c
@@ -172,10 +172,6 @@ static void set_cpu_itimer(struct task_s
 	u64 oval, nval, ointerval, ninterval;
 	struct cpu_itimer *it = &tsk->signal->it[clock_id];
-	/*
-	 * Use the to_ktime conversion because that clamps the maximum
-	 * value to KTIME_MAX and avoid multiplication overflows.
-	 */
 	nval = timespec64_to_ns(&value->it_value);
 	ninterval = timespec64_to_ns(&value->it_interval);
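To see why the clamp is needed: KTIME_SEC_MAX is KTIME_MAX / NSEC_PER_SEC (roughly 9.2e9 seconds), so the tv_sec value from the UBSAN report, 17179869187, exceeds it and multiplying by NSEC_PER_SEC would overflow s64. A standalone userspace sketch of the clamped conversion; the constants are re-derived here under the assumption of a 64-bit ktime_t, and this is not kernel code:

#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC	1000000000LL
#define KTIME_MAX	INT64_MAX
#define KTIME_SEC_MAX	(KTIME_MAX / NSEC_PER_SEC)	/* ~9223372036 */

static int64_t to_ns_clamped(int64_t tv_sec, long tv_nsec)
{
	/* Clamp instead of letting tv_sec * NSEC_PER_SEC overflow. */
	if ((uint64_t)tv_sec >= KTIME_SEC_MAX)
		return KTIME_MAX;

	return tv_sec * NSEC_PER_SEC + tv_nsec;
}

int main(void)
{
	/* 17179869187 s would overflow; the clamped result is KTIME_MAX. */
	printf("%lld\n", (long long)to_ns_clamped(17179869187LL, 0));
	return 0;
}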
From: Quanyang Wang quanyang.wang@windriver.com
commit 4cd2bb12981165f865d2b8ed92b446b52310ef74 upstream.
Since sched_clock_read_begin() and sched_clock_read_retry() are called by the notrace function sched_clock(), they shouldn't be traceable either; otherwise ftrace_graph_caller() will run into an endless loop on the path below (arm, for instance):
ftrace_graph_caller()
 prepare_ftrace_return()
  function_graph_enter()
   ftrace_push_return_trace()
    trace_clock_local()
     sched_clock()
      sched_clock_read_begin/retry()
Fixes: 1b86abc1c645 ("sched_clock: Expose struct clock_read_data")
Signed-off-by: Quanyang Wang quanyang.wang@windriver.com
Signed-off-by: Thomas Gleixner tglx@linutronix.de
Acked-by: Peter Zijlstra (Intel) peterz@infradead.org
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200929082027.16787-1-quanyang.wang@windriver.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 kernel/time/sched_clock.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -68,13 +68,13 @@ static inline u64 notrace cyc_to_ns(u64
 	return (cyc * mult) >> shift;
 }
-struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
+notrace struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
 {
 	*seq = raw_read_seqcount_latch(&cd.seq);
 	return cd.read_data + (*seq & 1);
 }
-int sched_clock_read_retry(unsigned int seq)
+notrace int sched_clock_read_retry(unsigned int seq)
 {
 	return read_seqcount_retry(&cd.seq, seq);
 }
From: Stephen Boyd swboyd@chromium.org
commit 1de111b51b829bcf01d2e57971f8fd07a665fa3f upstream.
According to the SMCCC spec[1](7.5.2 Discovery) the ARM_SMCCC_ARCH_WORKAROUND_1 function id only returns 0, 1, and SMCCC_RET_NOT_SUPPORTED.
0 is "workaround required and safe to call this function" 1 is "workaround not required but safe to call this function" SMCCC_RET_NOT_SUPPORTED is "might be vulnerable or might not be, who knows, I give up!"
SMCCC_RET_NOT_SUPPORTED might as well mean "workaround required, except calling this function may not work because it isn't implemented in some cases". Wonderful. We map this SMC call to
  0 is SPECTRE_MITIGATED
  1 is SPECTRE_UNAFFECTED
  SMCCC_RET_NOT_SUPPORTED is SPECTRE_VULNERABLE
For KVM hypercalls (hvc), we've implemented this function id to return SMCCC_RET_NOT_SUPPORTED, 0, and SMCCC_RET_NOT_REQUIRED. One of those isn't supposed to be there. Per the code we call arm64_get_spectre_v2_state() to figure out what to return for this feature discovery call.
  0 is SPECTRE_MITIGATED
  SMCCC_RET_NOT_REQUIRED is SPECTRE_UNAFFECTED
  SMCCC_RET_NOT_SUPPORTED is SPECTRE_VULNERABLE
Let's clean this up so that KVM tells the guest this mapping:
  0 is SPECTRE_MITIGATED
  1 is SPECTRE_UNAFFECTED
  SMCCC_RET_NOT_SUPPORTED is SPECTRE_VULNERABLE
Note: SMCCC_RET_NOT_AFFECTED is 1 but isn't part of the SMCCC spec
Fixes: c118bbb52743 ("arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests")
Signed-off-by: Stephen Boyd swboyd@chromium.org
Acked-by: Marc Zyngier maz@kernel.org
Acked-by: Will Deacon will@kernel.org
Cc: Andre Przywara andre.przywara@arm.com
Cc: Steven Price steven.price@arm.com
Cc: Marc Zyngier maz@kernel.org
Cc: stable@vger.kernel.org
Link: https://developer.arm.com/documentation/den0028/latest [1]
Link: https://lore.kernel.org/r/20201023154751.1973872-1-swboyd@chromium.org
Signed-off-by: Will Deacon will@kernel.org
Signed-off-by: Stephen Boyd swboyd@chromium.org
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 arch/arm64/kvm/hypercalls.c |    2 +-
 include/linux/arm-smccc.h   |    2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)
--- a/arch/arm64/kvm/hypercalls.c
+++ b/arch/arm64/kvm/hypercalls.c
@@ -31,7 +31,7 @@ int kvm_hvc_call_handler(struct kvm_vcpu
 			val = SMCCC_RET_SUCCESS;
 			break;
 		case KVM_BP_HARDEN_NOT_REQUIRED:
-			val = SMCCC_RET_NOT_REQUIRED;
+			val = SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED;
 			break;
 		}
 		break;
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -86,6 +86,8 @@
 			   ARM_SMCCC_SMC_32,				\
 			   0, 0x7fff)
+#define SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED	1
+
 /* Paravirtualised time calls (defined by ARM DEN0057A) */
 #define ARM_SMCCC_HV_PV_TIME_FEATURES				\
 	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
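Put differently, after this change a guest that issues the ARM_SMCCC_ARCH_WORKAROUND_1 feature query can interpret the result exactly as described above. The helper below is only an illustrative sketch of that mapping; the function and enum names are invented and are not guest code from the kernel:

enum wa1_state { WA1_MITIGATED, WA1_UNAFFECTED, WA1_VULNERABLE };

static enum wa1_state interpret_wa1_discovery(long res)
{
	switch (res) {
	case 0:		/* workaround required and implemented */
		return WA1_MITIGATED;
	case 1:		/* SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED */
		return WA1_UNAFFECTED;
	default:	/* SMCCC_RET_NOT_SUPPORTED or anything else */
		return WA1_VULNERABLE;
	}
}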
On Wed, 4 Nov 2020 at 02:07, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.9.4 release. There are 391 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu, 05 Nov 2020 20:29:58 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.9.4-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.9.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
NOTE: A kernel warning was noticed on the arm64 nxp ls2088 device with the KASAN config enabled while booting the device. We are not considering this a regression because this is the first time the arm64 KASAN config has been enabled on the nxp ls2088 device.
[ 3.301882] dwc3 3100000.usb3: Failed to get clk 'ref': -2
[ 3.307433] ------------[ cut here ]------------
[ 3.312048] dwc3 3100000.usb3: request value same as default, ignoring
[ 3.318596] WARNING: CPU: 3 PID: 1 at /home/jenkins/ci/lsdk/master/all/packages/linux/linux/drivers/usb/dwc3/core.c:347 dwc3_core_init+0xd14/0xd70
[ 3.331716] Modules linked in:
[ 3.334766] CPU: 3 PID: 1 Comm: swapper/0 Not tainted 4.19.46 #1
[ 3.340765] Hardware name: Freescale Layerscape 2088A RDB Board (DT)
[ 3.347111] pstate: 40000005 (nZcv daif -PAN -UAO)
[ 3.351895] pc : dwc3_core_init+0xd14/0xd70
[ 3.356070] lr : dwc3_core_init+0xd14/0xd70
[ 3.360243] sp : ffff000008063b00
[ 3.363550] x29: ffff000008063b00 x28: 0000000000000007
[ 3.368855] x27: ffff000009624068 x26: ffff000009532768
[ 3.374160] x25: ffff8082cd5ea1a0 x24: ffff000008f27000
[ 3.379465] x23: ffff8082ef45a410 x22: ffff0000096ca000
[ 3.384770] x21: 0000000000000041 x20: 0000000030c11004
[ 3.390075] x19: ffff8082ef6f9080 x18: ffffffffffffffff
[ 3.395380] x17: 0000000000000000 x16: 0000000000000000
[ 3.400685] x15: ffff0000096ca708 x14: ffff0000898776ff
[ 3.405991] x13: ffff00000987770d x12: ffff0000096ca980
[ 3.411297] x11: ffff000008707260 x10: ffff0000080637d0
[ 3.416603] x9 : 0000000000000016 x8 : 6769202c746c7561
[ 3.421908] x7 : 6665642073612065 x6 : 0000000000000163
[ 3.427213] x5 : 0000000000000000 x4 : 0000000000000000
[ 3.432517] x3 : ffffffffffffffff x2 : ffff0000096e3cf8
[ 3.437823] x1 : 4d579be806451100 x0 : 0000000000000000
[ 3.443129] Call trace:
[ 3.445568]  dwc3_core_init+0xd14/0xd70
[ 3.449395]  dwc3_probe+0x8b0/0xd30
[ 3.452877]  platform_drv_probe+0x50/0xa0
[ 3.456878]  really_probe+0x1b8/0x288
[ 3.460531]  driver_probe_device+0x58/0x100
[ 3.464705]  __driver_attach+0xd4/0xd8
[ 3.468446]  bus_for_each_dev+0x74/0xc8
[ 3.472273]  driver_attach+0x20/0x28
[ 3.475840]  bus_add_driver+0x1ac/0x218
[ 3.479667]  driver_register+0x60/0x110
[ 3.483494]  __platform_driver_register+0x40/0x48
[ 3.488192]  dwc3_driver_init+0x18/0x20
[ 3.492020]  do_one_initcall+0x5c/0x178
[ 3.495848]  kernel_init_freeable+0x198/0x244
[ 3.500197]  kernel_init+0x10/0x108
[ 3.503677]  ret_from_fork+0x10/0x18
[ 3.507245] ---[ end trace 13f260065c84085c ]---
[ 3.512196] dwc3 3110000.usb3: Failed to get clk 'ref': -2
full boot log: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.9.y/build/v5.9.3-...
Summary
------------------------------------------------------------------------
kernel: 5.9.4-rc1
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-5.9.y
git commit: 53574a4c558e8da7cfde86a77fe760ba375394b8
git describe: v5.9.3-392-g53574a4c558e
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-linux-5.9.y/build/v5.9.3-...
No regressions (compared to build v5.9.3)
No fixes (compared to build v5.9.3)
Ran 38015 total tests in the following environments and test suites.
Environments
--------------
- dragonboard-410c
- hi6220-hikey
- i386
- juno-r2
- juno-r2-compat
- nxp-ls2088
- nxp-ls2088-kasan
- qemu-arm64-kasan
- qemu-x86_64-kasan
- qemu_arm
- qemu_arm64
- qemu_i386
- qemu_x86_64
- x15
- x86
- x86-kasan
Test Suites
-----------
* build
* install-android-platform-tools-r2600
* libhugetlbfs
* linux-log-parser
* ltp-cap_bounds-tests
* ltp-containers-tests
* ltp-cpuhotplug-tests
* ltp-crypto-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-ipc-tests
* ltp-mm-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-tracing-tests
* perf
* ltp-commands-tests
* ltp-controllers-tests
* ltp-dio-tests
* ltp-io-tests
* ltp-math-tests
* ltp-syscalls-tests
* network-basic-tests
* v4l2-compliance
* kselftest
* ltp-cve-tests
* ltp-open-posix-tests
* kvm-unit-tests
* kunit
* kselftest-vsyscall-mode-none
* kselftest-vsyscall-mode-native
On Wed, 4 Nov 2020 at 12:42, Naresh Kamboju naresh.kamboju@linaro.org wrote:
On Wed, 4 Nov 2020 at 02:07, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.9.4 release. There are 391 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu, 05 Nov 2020 20:29:58 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.9.4-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.9.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
NOTE: A kernel warning was noticed on the arm64 nxp ls2088 device with the KASAN config enabled while booting the device. We are not considering this a regression because this is the first time the arm64 KASAN config has been enabled on the nxp ls2088 device.
[ 3.301882] dwc3 3100000.usb3: Failed to get clk 'ref': -2
[ 3.307433] ------------[ cut here ]------------
[ 3.312048] dwc3 3100000.usb3: request value same as default, ignoring
[ 3.318596] WARNING: CPU: 3 PID: 1 at /home/jenkins/ci/lsdk/master/all/packages/linux/linux/drivers/usb/dwc3/core.c:347 dwc3_core_init+0xd14/0xd70
Please ignore this warning, as it is coming from stable rc 4.19.
- Naresh
Hi,
Naresh Kamboju naresh.kamboju@linaro.org writes:
On Wed, 4 Nov 2020 at 02:07, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.9.4 release. There are 391 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu, 05 Nov 2020 20:29:58 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.9.4-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.9.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
NOTE: A kernel warning was noticed on the arm64 nxp ls2088 device with the KASAN config enabled while booting the device. We are not considering this a regression because this is the first time the arm64 KASAN config has been enabled on the nxp ls2088 device.
[ 3.301882] dwc3 3100000.usb3: Failed to get clk 'ref': -2
[ 3.307433] ------------[ cut here ]------------
[ 3.312048] dwc3 3100000.usb3: request value same as default, ignoring
fix your DTS :-)
You're requesting to change a register value that shouldn't be changed (it should be properly set during coreConsultant instantiation). Whenever the requested value is the same as the reset value of the register we WARN to let users know that the register shouldn't be touched.
On Wed, 4 Nov 2020 at 16:28, Felipe Balbi balbi@kernel.org wrote:
Hi,
Naresh Kamboju naresh.kamboju@linaro.org writes:
On Wed, 4 Nov 2020 at 02:07, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.9.4 release. There are 391 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu, 05 Nov 2020 20:29:58 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.9.4-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.9.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
NOTE: A kernel warning was noticed on the arm64 nxp ls2088 device with the KASAN config enabled while booting the device. We are not considering this a regression because this is the first time the arm64 KASAN config has been enabled on the nxp ls2088 device.
[ 3.301882] dwc3 3100000.usb3: Failed to get clk 'ref': -2
[ 3.307433] ------------[ cut here ]------------
[ 3.312048] dwc3 3100000.usb3: request value same as default, ignoring
fix your DTS :-)
Done.
You're requesting to change a register value that shouldn't be changed (it should be properly set during coreConsultant instantiation). Whenever the requested value is the same as the reset value of the register we WARN to let users know that the register shouldn't be touched.
Thanks for looking into this. The reported issue is a false alarm. Please ignore it.
- Naresh
On Wed, Nov 04, 2020 at 12:42:46PM +0530, Naresh Kamboju wrote:
On Wed, 4 Nov 2020 at 02:07, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.9.4 release. There are 391 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu, 05 Nov 2020 20:29:58 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.9.4-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.9.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
Thanks for testing all of these and letting me know.
greg k-h
On Tue, 03 Nov 2020 21:30:51 +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.9.4 release. There are 391 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu, 05 Nov 2020 20:29:58 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.9.4-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.9.y and the diffstat can be found below.
thanks,
greg k-h
All tests passing for Tegra ...
Test results for stable-v5.9:
    15 builds: 15 pass, 0 fail
    26 boots:  26 pass, 0 fail
    61 tests:  61 pass, 0 fail
Linux version: 5.9.4-rc1-g53574a4c558e
Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000, tegra194-p2972-0000,
               tegra20-ventana, tegra210-p2371-2180, tegra210-p3450-0000,
               tegra30-cardhu-a04
Tested-by: Jon Hunter jonathanh@nvidia.com
Jon
On Wed, Nov 04, 2020 at 09:03:46AM +0000, Jon Hunter wrote:
On Tue, 03 Nov 2020 21:30:51 +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.9.4 release. There are 391 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu, 05 Nov 2020 20:29:58 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.9.4-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.9.y and the diffstat can be found below.
thanks,
greg k-h
All tests passing for Tegra ...
Test results for stable-v5.9:
    15 builds: 15 pass, 0 fail
    26 boots:  26 pass, 0 fail
    61 tests:  61 pass, 0 fail
Linux version: 5.9.4-rc1-g53574a4c558e
Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000, tegra194-p2972-0000,
               tegra20-ventana, tegra210-p2371-2180, tegra210-p3450-0000,
               tegra30-cardhu-a04
Tested-by: Jon Hunter jonathanh@nvidia.com
Great, thanks for testing and letting me know.
greg k-h
On Tue, Nov 03, 2020 at 09:30:51PM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.9.4 release. There are 391 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu, 05 Nov 2020 20:29:58 +0000. Anything received after that time might be too late.
Build results:
    total: 154 pass: 154 fail: 0
Qemu test results:
    total: 426 pass: 426 fail: 0
Tested-by: Guenter Roeck linux@roeck-us.net
Guenter
On Wed, Nov 04, 2020 at 09:51:23AM -0800, Guenter Roeck wrote:
On Tue, Nov 03, 2020 at 09:30:51PM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.9.4 release. There are 391 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu, 05 Nov 2020 20:29:58 +0000. Anything received after that time might be too late.
Build results:
    total: 154 pass: 154 fail: 0
Qemu test results:
    total: 426 pass: 426 fail: 0
Tested-by: Guenter Roeck linux@roeck-us.net
I should have fixed up the other build issues, and thanks for letting me know about all of these.
greg k-h
On Tue, 2020-11-03 at 21:30 +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.9.4 release. There are 391 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu, 05 Nov 2020 20:29:58 +0000. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.9.4-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux- stable-rc.git linux-5.9.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted 5.9.4-rc1+. No typical dmesg regressions.
Tested-by: Jeffrin Jose T jeffrin@rajagiritech.edu.in