This is the start of the stable review cycle for the 4.14.3 release. There are 193 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu Nov 30 10:05:26 UTC 2017. Anything received after that time might be too late.
The whole patch series can be found in one patch at: kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.3-rc1.gz or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y and the diffstat can be found below.
thanks,
greg k-h
------------- Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 4.14.3-rc1
Sasha Neftin sasha.neftin@intel.com e1000e: fix buffer overrun while the I219 is processing DMA transactions
Benjamin Poirier bpoirier@suse.com e1000e: Avoid receiver overrun interrupt bursts
Benjamin Poirier bpoirier@suse.com e1000e: Separate signaling for link check/link up
Benjamin Poirier bpoirier@suse.com e1000e: Fix return value test
Benjamin Poirier bpoirier@suse.com e1000e: Fix error path in link detection
Luca Coelho luciano.coelho@intel.com iwlwifi: mvm: support version 7 of the SCAN_REQ_UMAC FW command
Luca Coelho luciano.coelho@intel.com iwlwifi: fix PCI IDs and configuration mapping for 9000 series
Ihab Zhaika ihab.zhaika@intel.com iwlwifi: add new cards for 8260 series
Ihab Zhaika ihab.zhaika@intel.com iwlwifi: add new cards for 8265 series
Ihab Zhaika ihab.zhaika@intel.com iwlwifi: add new cards for a000 series
Luca Coelho luciano.coelho@intel.com iwlwifi: pcie: sort IDs for the 9000 series for easier comparisons
Oren Givon oren.givon@intel.com iwlwifi: add a new a000 device
Oren Givon oren.givon@intel.com iwlwifi: fix wrong struct for a000 device
Neil Armstrong narmstrong@baylibre.com ARM64: dts: meson-gxl: Add alternate ARM Trusted Firmware reserved memory zone
Stanimir Varbanov stanimir.varbanov@linaro.org media: venus: reimplement decoder stop command
Stanimir Varbanov stanimir.varbanov@linaro.org media: venus: venc: fix bytesused v4l2_plane field
Stanimir Varbanov stanimir.varbanov@linaro.org media: venus: fix wrong size on dma_free
Ricardo Ribalda Delgado ricardo.ribalda@gmail.com media: v4l2-ctrl: Fix flags field on Control events
Johan Hovold johan@kernel.org cx231xx-cards: fix NULL-deref on missing association descriptor
Sean Young sean@mess.org media: rc: nec decoder should not send both repeat and keycode
Sean Young sean@mess.org media: rc: check for integer overflow
Michele Baldessari michele@acksyn.org media: Don't do DMA on stack for firmware upload in the AS102 driver
Nicholas Piggin npiggin@gmail.com powerpc/64s/hash: Allow MAP_FIXED allocations to cross 128TB boundary
Nicholas Piggin npiggin@gmail.com powerpc/64s/hash: Fix fork() with 512TB process address space
Nicholas Piggin npiggin@gmail.com powerpc/64s/hash: Fix 128TB-512TB virtual address boundary case allocation
Michael Ellerman mpe@ellerman.id.au powerpc/64s/hash: Fix 512T hint detection to use >= 128T
Nicholas Piggin npiggin@gmail.com powerpc/64s/radix: Fix 128TB-512TB virtual address boundary case allocation
Michael Ellerman mpe@ellerman.id.au powerpc/64s: Fix masking of SRR1 bits on instruction fault
Naveen N. Rao naveen.n.rao@linux.vnet.ibm.com powerpc/signal: Properly handle return value from uprobe_deny_signal()
Michael Ellerman mpe@ellerman.id.au powerpc/perf/imc: Use cpu_to_node() not topology_physical_package_id()
Balbir Singh bsingharora@gmail.com powerpc/mm/radix: Fix crashes on Power9 DD1 with radix MMU and STRICT_RWX
Christophe Leroy christophe.leroy@c-s.fr powerpc: Fix boot on BOOK3S_32 with CONFIG_STRICT_KERNEL_RWX
John David Anglin dave.anglin@bell.net parisc: Fix validity check of pointer size argument in new CAS implementation
Brian King brking@linux.vnet.ibm.com ixgbe: Fix skb list corruption on Power systems
Brian King brking@linux.vnet.ibm.com fm10k: Use smp_rmb rather than read_barrier_depends
Brian King brking@linux.vnet.ibm.com i40evf: Use smp_rmb rather than read_barrier_depends
Brian King brking@linux.vnet.ibm.com ixgbevf: Use smp_rmb rather than read_barrier_depends
Brian King brking@linux.vnet.ibm.com igbvf: Use smp_rmb rather than read_barrier_depends
Brian King brking@linux.vnet.ibm.com igb: Use smp_rmb rather than read_barrier_depends
Brian King brking@linux.vnet.ibm.com i40e: Use smp_rmb rather than read_barrier_depends
Bin Meng bmeng.cn@gmail.com spi-nor: intel-spi: Fix broken software sequencing codes
Johan Hovold johan@kernel.org NFC: fix device-allocation error return
Daniel Jurgens danielj@mellanox.com IB/core: Only maintain real QPs in the security lists
Parav Pandit parav@mellanox.com IB/core: Avoid crash on pkey enforcement failed in received MADs
Bart Van Assche bart.vanassche@wdc.com IB/srp: Avoid that a cable pull can trigger a kernel crash
Michael J. Ruhl michael.j.ruhl@intel.com IB/hfi1: Fix incorrect available receive user context count
Parav Pandit parav@mellanox.com IB/cm: Fix memory corruption in handling CM request
Bart Van Assche bart.vanassche@wdc.com IB/srpt: Do not accept invalid initiator port names
Chuck Lever chuck.lever@oracle.com svcrdma: Preserve CB send buffer across retransmits
Dan Williams dan.j.williams@intel.com libnvdimm, namespace: make 'resource' attribute only readable by root
Dan Williams dan.j.williams@intel.com libnvdimm, region : make 'resource' attribute only readable by root
Dan Williams dan.j.williams@intel.com libnvdimm, namespace: fix label initialization to use valid seq numbers
Dan Williams dan.j.williams@intel.com libnvdimm, pfn: make 'resource' attribute only readable by root
Dan Williams dan.j.williams@intel.com libnvdimm, dimm: clear 'locked' status on successful DIMM enable
Johan Hovold johan@kernel.org clk: ti: dra7-atl-clock: fix child-node lookups
Trond Myklebust trond.myklebust@primarydata.com SUNRPC: Fix tracepoint storage issues with svc_recv and svc_rqst_status
Mikulas Patocka mpatocka@redhat.com dax: fix general protection fault in dax_alloc_inode
Jeff Moyer jmoyer@redhat.com dax: fix PMD faults on zero-length files
Paolo Bonzini pbonzini@redhat.com kvm: vmx: Reinstate support for CPUs without virtual NMI
Paolo Bonzini pbonzini@redhat.com KVM: SVM: obey guest PAT
Ladi Prosek lprosek@redhat.com KVM: nVMX: set IDTR and GDTR limits when loading L1 host state
Paul Mackerras paulus@ozlabs.org KVM: PPC: Book3S HV: Don't call real-mode XICS hypercall handlers if not enabled
Vasily Averin vvs@virtuozzo.com lockd: double unregister of inetaddr notifiers
Johan Hovold johan@kernel.org irqchip/gic-v3: Fix ppi-partitions lookup
Marc Zyngier marc.zyngier@arm.com genirq: Track whether the trigger type has been set
Nate Dailey nate.dailey@stratus.com raid1: prevent freeze_array/wait_all_barriers deadlock
Bart Van Assche bart.vanassche@wdc.com block: Fix a race between blk_cleanup_queue() and timeout handling
Andrey Konovalov andreyknvl@google.com p54: don't unregister leds when they are not initialized
Anup Patel anup.patel@broadcom.com mailbox: bcm-flexrm-mailbox: Fix FlexRM ring flush sequence
Xiaolei Li xiaolei.li@mediatek.com mtd: nand: mtk: fix infinite ECC decode IRQ issue
Brent Taylor motobud@gmail.com mtd: nand: Fix writing mtdoops to nand flash.
Roger Quadros rogerq@ti.com mtd: nand: omap2: Fix subpage write
Boris Brezillon boris.brezillon@free-electrons.com mtd: nand: atmel: Actually use the PM ops
Boris Brezillon boris.brezillon@free-electrons.com mtd: nand: Export nand_reset() symbol
Boris Brezillon boris.brezillon@free-electrons.com mtd: Avoid probe failures when mtd->dbg.dfs_dir is invalid
Nicholas Bellinger nab@linux-iscsi.org target: Avoid early CMD_T_PRE_EXECUTE failures during ABORT_TASK
Nicholas Bellinger nab@linux-iscsi.org target: Fix quiese during transport_write_pending_qf endless loop
Nicholas Bellinger nab@linux-iscsi.org target: Fix caw_sem leak in transport_generic_request_failure
Nicholas Bellinger nab@linux-iscsi.org target: Fix QUEUE_FULL + SCSI task attribute handling
tangwenji tang.wenji@zte.com.cn target: fix buffer offset in core_scsi3_pri_read_full_status
tangwenji tang.wenji@zte.com.cn target: fix null pointer regression in core_tmr_drain_tmr_list
Nicholas Bellinger nab@linux-iscsi.org iscsi-target: Fix non-immediate TMR reference leak
Nicholas Bellinger nab@linux-iscsi.org iscsi-target: Make TASK_REASSIGN use proper se_cmd->cmd_kref
Dick Kennedy dick.kennedy@broadcom.com scsi: lpfc: Fix oops if nvmet_fc_register_targetport fails
Dick Kennedy dick.kennedy@broadcom.com scsi: lpfc: Fix FCP hba_wqidx assignment
Dick Kennedy dick.kennedy@broadcom.com scsi: lpfc: Fix crash receiving ELS while detaching driver
Dick Kennedy dick.kennedy@broadcom.com scsi: lpfc: fix pci hot plug crash in list_add call
Dick Kennedy dick.kennedy@broadcom.com scsi: lpfc: fix pci hot plug crash in timer management routines
Damien Le Moal damien.lemoal@wdc.com scsi: sd_zbc: Fix sd_zbc_read_zoned_characteristics()
Bart Van Assche bart.vanassche@wdc.com scsi: qla2xxx: Suppress a kernel complaint in qla_init_base_qpair()
Tuomas Tynkkynen tuomas@tuxera.com net/9p: Switch to wait_event_killable()
Tuomas Tynkkynen tuomas@tuxera.com fs/9p: Compare qid.path in v9fs_test_inode
Tuomas Tynkkynen tuomas@tuxera.com 9p: Fix missing commas in mount options
Al Viro viro@zeniv.linux.org.uk fix a page leak in vhost_scsi_iov_to_sgl() error recovery
Joakim Tjernlund joakim.tjernlund@infinera.com mfd: lpc_ich: Avoton/Rangeley uses SPI_BYT method
Maxime Ripard maxime.ripard@free-electrons.com ASoC: sun8i-codec: Set the BCLK divider
Maxime Ripard maxime.ripard@free-electrons.com ASoC: sun8i-codec: Fix left and right channels inversion
Maxime Ripard maxime.ripard@free-electrons.com ASoC: sun8i-codec: Invert Master / Slave condition
Kailang Yang kailang@realtek.com ALSA: hda/realtek - Fix ALC700 family no sound issue
Takashi Iwai tiwai@suse.de ALSA: hda - Fix yet remaining issue with vmaster 0dB initialization
Takashi Iwai tiwai@suse.de ALSA: hda: Fix too short HDMI/DP chmap reporting
Kailang Yang kailang@realtek.com ALSA: hda/realtek - Fix ALC275 no sound issue
Takashi Iwai tiwai@suse.de ALSA: timer: Remove kernel warning at compat ioctl error paths
Takashi Iwai tiwai@suse.de ALSA: usb-audio: Add sanity checks in v2 clock parsers
Takashi Iwai tiwai@suse.de ALSA: usb-audio: Fix potential out-of-bound access at parsing SU
Takashi Iwai tiwai@suse.de ALSA: usb-audio: Add sanity checks to FE parser
Henrik Eriksson henrik.eriksson@axis.com ALSA: pcm: update tstamp only if audio_tstamp changed
Ross Zwisler ross.zwisler@linux.intel.com ext4: prevent data corruption with journaling + DAX
Ross Zwisler ross.zwisler@linux.intel.com ext4: prevent data corruption with inline data + DAX
Theodore Ts'o tytso@mit.edu ext4: fix interaction between i_size, fallocate, and delalloc after a crash
Rameshwar Prasad Sahu rsahu@apm.com ata: fixes kernel crash while tracing ata_eh_link_autopsy event
Miklos Szeredi mszeredi@redhat.com fsnotify: fix pinning group in fsnotify_prepare_user_wait()
Miklos Szeredi mszeredi@redhat.com fsnotify: pin both inode and vfsmount mark
Miklos Szeredi mszeredi@redhat.com fsnotify: clean up fsnotify_prepare/finish_user_wait()
Shaohua Li shli@fb.com md/bitmap: revert a patch
Loic Poulain loic.poulain@linaro.org Bluetooth: btqcomsmd: Add support for BD address setup
Artur Paszkiewicz artur.paszkiewicz@intel.com md: don't check MD_SB_CHANGE_CLEAN in md_allow_write
NeilBrown neilb@suse.com md: fix deadlock error in recent patch.
Thomas Backlund tmb@mageia.org iwlwifi: fix firmware names for 9000 and A000 series hw
Arnd Bergmann arnd@arndb.de rtlwifi: fix uninitialized rtlhal->last_suspend_sec time
Larry Finger Larry.Finger@lwfinger.net rtlwifi: rtl8192ee: Fix memory leak when loading firmware
Andrew Elble aweits@rit.edu nfsd: deal with revoked delegations appropriately
NeilBrown neilb@suse.com NFS: revalidate "." etc correctly on "open".
Anna Schumaker Anna.Schumaker@Netapp.com NFS: Avoid RCU usage in tracepoints
Chuck Lever chuck.lever@oracle.com nfs: Fix ugly referral attributes
Benjamin Coddington bcodding@redhat.com NFS: Revert "NFS: Move the flock open mode check into nfs_flock()"
Joshua Watt jpewhacker@gmail.com NFS: Fix typo in nomigration mount option
Jaegeuk Kim jaegeuk@kernel.org f2fs: expose some sectors to user in inline data or dentry case
Josef Bacik jbacik@fb.com btrfs: change how we decide to commit transactions during flushing
Arnd Bergmann arnd@arndb.de isofs: fix timestamps beyond 2027
Miklos Szeredi mszeredi@redhat.com fanotify: fix fsnotify_prepare_user_wait() failure
Greg Edwards gedwards@ddn.com fs: guard_bio_eod() needs to consider partitions
Coly Li colyli@suse.de bcache: check ca->alloc_thread initialized before wake up it
Eric Biggers ebiggers@google.com libceph: don't WARN() if user tries to add invalid key
Dan Carpenter dan.carpenter@oracle.com eCryptfs: use after free in ecryptfs_release_messaging()
Eric Biggers ebiggers@google.com fscrypt: lock mutex before checking for bounce page pool
Andreas Rohner andreas.rohner@gmx.net nilfs2: fix race condition that causes file system corruption
NeilBrown neilb@suse.com autofs: don't fail mount for transient error
Vitaly Wool vitalywool@gmail.com mm/z3fold.c: use kref to prevent page free/compact race
Stanislaw Gruszka sgruszka@redhat.com rt2x00usb: mark device removed when get ENOENT usb error
Aleksandar Markovic aleksandar.markovic@mips.com MIPS: math-emu: Fix final emulation phase for certain instructions
Mirko Parthey mirko.parthey@web.de MIPS: BCM47XX: Fix LED inversion for WRT54GSv1
Maciej W. Rozycki macro@mips.com MIPS: Fix an n32 core file generation regset support regression
Masahiro Yamada yamada.masahiro@socionext.com MIPS: dts: remove bogus bcm96358nb4ser.dtb from dtb-y entry
James Hogan jhogan@kernel.org MIPS: Fix MIPS64 FP save/restore on 32-bit kernels
James Hogan jhogan@kernel.org MIPS: Fix odd fp register warnings with MIPS64r2
Mike Snitzer snitzer@redhat.com dm: discard support requires all targets in a table support discards
Hou Tao houtao1@huawei.com dm: fix race between dm_get_from_kobject() and __dm_destroy()
John Crispin john@phrozen.org MIPS: pci: Remove KERN_WARN instance inside the mt7620 driver
Steven Rostedt (Red Hat) rostedt@goodmis.org sched/rt: Simplify the IPI based RT balancing logic
Mikulas Patocka mpatocka@redhat.com dm: allocate struct mapped_device with kvzalloc
Vivek Goyal vgoyal@redhat.com ovl: Put upperdentry if ovl_check_origin() fails
Eric Biggers ebiggers@google.com dm bufio: fix integer overflow when limiting maximum cache size
Ming Lei ming.lei@redhat.com dm mpath: remove annoying message of 'blk_get_request() returned -11'
Damien Le Moal damien.lemoal@wdc.com dm zoned: ignore last smaller runt zone
Mikulas Patocka mpatocka@redhat.com dm crypt: allow unaligned bv_offset
Joe Thornber ejt@redhat.com dm cache: fix race condition in the writeback mode overwrite_bio optimisation
Mikulas Patocka mpatocka@redhat.com dm integrity: allow unaligned bv_offset
Vijendar Mukunda Vijendar.Mukunda@amd.com ALSA: hda: Add Raven PCI ID
Vadim Lomovtsev Vadim.Lomovtsev@cavium.com PCI: Apply Cavium ThunderX ACS quirk to more Root Ports
Vadim Lomovtsev Vadim.Lomovtsev@cavium.com PCI: Set Cavium ACS capability quirk flags to assert RR/CR/SV/UF
Dexuan Cui decui@microsoft.com PCI: hv: Use effective affinity mask
Bjorn Helgaas bhelgaas@google.com PCI/ASPM: Use correct capability pointer to program LTR_L1.2_THRESHOLD
Bjorn Helgaas bhelgaas@google.com PCI/ASPM: Account for downstream device's Port Common_Mode_Restore_Time
Tobias Jordan Tobias.Jordan@elektrobit.com PM / OPP: Add missing of_node_put(np)
Josef Bacik jbacik@fb.com nbd: don't start req until after the dead connection logic
Josef Bacik jbacik@fb.com nbd: wait uninterruptible for the dead timeout
Simon Guinot simon.guinot@sequanux.org net: mvneta: fix handling of the Tx descriptor counter
Mathias Kresin dev@kresin.me MIPS: ralink: Fix typo in mt7628 pinmux function
Mathias Kresin dev@kresin.me MIPS: ralink: Fix MT7628 pinmux
Ben Hutchings ben@decadent.org.uk MIPS: cmpxchg64() and HAVE_VIRT_CPU_ACCOUNTING_GEN don't work for 32-bit SMP
Dmitry V. Levin ldv@altlinux.org uapi: fix linux/rxrpc.h userspace compilation errors
Dmitry V. Levin ldv@altlinux.org uapi: fix linux/tls.h userspace compilation error
Philip Derrin philip@cog.systems ARM: 8721/1: mm: dump: check hardware RO bit for LPAE
Philip Derrin philip@cog.systems ARM: 8722/1: mm: make STRICT_KERNEL_RWX effective for LPAE
Catalin Marinas catalin.marinas@arm.com arm64: Implement arch-specific pte_access_permitted()
Andi Kleen ak@linux.intel.com perf/x86/intel: Hide TSX events when RTM is not supported
Andy Lutomirski luto@kernel.org x86/entry/64: Add missing irqflags tracing to native_load_gs_index()
Andy Lutomirski luto@kernel.org x86/entry/64: Fix entry_SYSCALL_64_after_hwframe() IRQ tracing
Masami Hiramatsu mhiramat@kernel.org x86/decoder: Add new TEST instruction pattern
Tom Lendacky thomas.lendacky@amd.com x86/boot: Fix boot failure when SMP MP-table is based at 0
Eric Biggers ebiggers@google.com lib/mpi: call cond_resched() from mpi_powm() loop
Paul E. McKenney paulmck@linux.vnet.ibm.com sched: Make resched_cpu() unconditional
Johan Hovold johan@kernel.org serdev: fix registration of second slave
Viresh Kumar viresh.kumar@linaro.org cpufreq: schedutil: Reset cached_raw_freq when not in sync with next_freq
Lv Zheng lv.zheng@intel.com ACPI / EC: Fix regression related to triggering source of EC event handling
Ville Syrjälä ville.syrjala@linux.intel.com ACPI / PM: Fix acpi_pm_notifier_lock vs flush_workqueue() deadlock
Vasily Gorbik gor@linux.vnet.ibm.com s390/disassembler: increase show_code buffer size
Heiko Carstens heiko.carstens@de.ibm.com s390/disassembler: add missing end marker for e7 table
Heiko Carstens heiko.carstens@de.ibm.com s390/guarded storage: fix possible memory corruption
Heiko Carstens heiko.carstens@de.ibm.com s390/runtime instrumention: fix possible memory corruption
Heiko Carstens heiko.carstens@de.ibm.com s390/noexec: execute kexec datamover without DAT
Heiko Carstens heiko.carstens@de.ibm.com s390: fix transactional execution control register handling
-------------
Diffstat:
Makefile | 4 +- arch/arm/mm/dump.c | 4 +- arch/arm/mm/init.c | 4 +- arch/arm64/boot/dts/amlogic/meson-gxl.dtsi | 8 + arch/arm64/include/asm/pgtable.h | 14 + arch/mips/Kconfig | 2 +- arch/mips/bcm47xx/leds.c | 2 +- arch/mips/boot/dts/brcm/Makefile | 1 - arch/mips/include/asm/asmmacro.h | 16 +- arch/mips/include/asm/cmpxchg.h | 2 + arch/mips/kernel/ptrace.c | 17 ++ arch/mips/kernel/r4k_fpu.S | 20 +- arch/mips/math-emu/cp1emu.c | 28 +- arch/mips/pci/pci-mt7620.c | 2 +- arch/mips/ralink/mt7620.c | 4 +- arch/parisc/kernel/syscall.S | 6 +- arch/powerpc/kernel/exceptions-64s.S | 2 +- arch/powerpc/kernel/signal.c | 2 +- arch/powerpc/kvm/book3s_hv_builtin.c | 12 + arch/powerpc/lib/code-patching.c | 6 +- arch/powerpc/mm/hugetlbpage-radix.c | 26 +- arch/powerpc/mm/mmap.c | 55 ++-- arch/powerpc/mm/mmu_context_book3s64.c | 8 +- arch/powerpc/mm/pgtable-radix.c | 10 + arch/powerpc/mm/slice.c | 50 ++- arch/powerpc/perf/imc-pmu.c | 12 +- arch/s390/include/asm/switch_to.h | 2 +- arch/s390/kernel/dis.c | 5 +- arch/s390/kernel/early.c | 4 +- arch/s390/kernel/guarded_storage.c | 2 + arch/s390/kernel/machine_kexec.c | 1 + arch/s390/kernel/process.c | 1 + arch/s390/kernel/relocate_kernel.S | 3 - arch/s390/kernel/runtime_instr.c | 4 +- arch/x86/entry/entry_64.S | 14 +- arch/x86/events/intel/core.c | 35 ++- arch/x86/kernel/mpparse.c | 6 +- arch/x86/kvm/svm.c | 7 + arch/x86/kvm/vmx.c | 152 ++++++--- arch/x86/lib/x86-opcode-map.txt | 2 +- block/blk-core.c | 2 + block/blk-timeout.c | 3 - drivers/acpi/device_pm.c | 21 +- drivers/acpi/ec.c | 12 +- drivers/ata/libata-eh.c | 2 +- drivers/base/power/opp/of.c | 1 + drivers/block/nbd.c | 26 +- drivers/bluetooth/btqcomsmd.c | 34 +++ drivers/clk/ti/clk-dra7-atl.c | 3 +- drivers/dax/super.c | 3 + drivers/infiniband/core/cm.c | 11 +- drivers/infiniband/core/mad.c | 3 +- drivers/infiniband/core/security.c | 51 ++-- drivers/infiniband/hw/hfi1/chip.c | 35 ++- drivers/infiniband/hw/hfi1/hfi.h | 2 + drivers/infiniband/hw/hfi1/sysfs.c | 2 +- drivers/infiniband/hw/hfi1/vnic_main.c | 7 +- drivers/infiniband/ulp/srp/ib_srp.c | 25 +- drivers/infiniband/ulp/srpt/ib_srpt.c | 9 +- drivers/irqchip/irq-gic-v3.c | 9 +- drivers/mailbox/bcm-flexrm-mailbox.c | 22 +- drivers/md/bcache/alloc.c | 3 +- drivers/md/bitmap.c | 4 +- drivers/md/dm-bufio.c | 15 +- drivers/md/dm-cache-target.c | 86 ++++-- drivers/md/dm-core.h | 3 +- drivers/md/dm-crypt.c | 4 +- drivers/md/dm-integrity.c | 2 +- drivers/md/dm-mpath.c | 2 - drivers/md/dm-table.c | 33 +- drivers/md/dm-zoned-target.c | 13 +- drivers/md/dm.c | 18 +- drivers/md/md.c | 4 +- drivers/md/raid1.c | 24 +- drivers/media/platform/qcom/venus/core.h | 2 - drivers/media/platform/qcom/venus/helpers.c | 7 - drivers/media/platform/qcom/venus/hfi.c | 1 + drivers/media/platform/qcom/venus/hfi_venus.c | 12 +- drivers/media/platform/qcom/venus/vdec.c | 34 ++- drivers/media/platform/qcom/venus/venc.c | 7 +- drivers/media/rc/ir-lirc-codec.c | 9 +- drivers/media/rc/ir-nec-decoder.c | 29 +- drivers/media/usb/as102/as102_fw.c | 28 +- drivers/media/usb/cx231xx/cx231xx-cards.c | 2 +- drivers/media/v4l2-core/v4l2-ctrls.c | 16 +- drivers/mfd/lpc_ich.c | 1 + drivers/mtd/devices/docg3.c | 7 +- drivers/mtd/nand/atmel/nand-controller.c | 1 + drivers/mtd/nand/mtk_ecc.c | 13 +- drivers/mtd/nand/nand_base.c | 10 +- drivers/mtd/nand/nandsim.c | 13 +- drivers/mtd/nand/omap2.c | 339 ++++++++++++++------- drivers/mtd/spi-nor/intel-spi.c | 4 +- drivers/net/ethernet/intel/e1000e/defines.h | 1 + drivers/net/ethernet/intel/e1000e/mac.c | 11 +- drivers/net/ethernet/intel/e1000e/netdev.c | 
45 ++- drivers/net/ethernet/intel/e1000e/phy.c | 7 +- drivers/net/ethernet/intel/fm10k/fm10k_main.c | 2 +- drivers/net/ethernet/intel/i40e/i40e_main.c | 2 +- drivers/net/ethernet/intel/i40e/i40e_txrx.c | 2 +- drivers/net/ethernet/intel/i40evf/i40e_txrx.c | 2 +- drivers/net/ethernet/intel/igb/igb_main.c | 2 +- drivers/net/ethernet/intel/igbvf/netdev.c | 2 +- drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 2 +- drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 2 +- drivers/net/ethernet/marvell/mvneta.c | 13 +- drivers/net/wireless/intel/iwlwifi/cfg/9000.c | 73 ++++- drivers/net/wireless/intel/iwlwifi/cfg/a000.c | 10 +- drivers/net/wireless/intel/iwlwifi/fw/api/scan.h | 59 +++- drivers/net/wireless/intel/iwlwifi/fw/file.h | 1 + drivers/net/wireless/intel/iwlwifi/iwl-config.h | 5 + drivers/net/wireless/intel/iwlwifi/mvm/mvm.h | 6 + drivers/net/wireless/intel/iwlwifi/mvm/scan.c | 86 ++++-- drivers/net/wireless/intel/iwlwifi/pcie/drv.c | 183 ++++++++--- drivers/net/wireless/intersil/p54/main.c | 7 +- drivers/net/wireless/ralink/rt2x00/rt2x00usb.c | 6 +- .../net/wireless/realtek/rtlwifi/rtl8192ee/fw.c | 6 +- .../net/wireless/realtek/rtlwifi/rtl8821ae/hw.c | 1 + drivers/nvdimm/dimm.c | 1 + drivers/nvdimm/dimm_devs.c | 7 + drivers/nvdimm/label.c | 2 +- drivers/nvdimm/namespace_devs.c | 2 +- drivers/nvdimm/nd.h | 1 + drivers/nvdimm/pfn_devs.c | 8 + drivers/nvdimm/region_devs.c | 8 +- drivers/pci/host/pci-hyperv.c | 8 +- drivers/pci/pcie/aspm.c | 4 +- drivers/pci/quirks.c | 27 +- drivers/scsi/lpfc/lpfc_attr.c | 6 +- drivers/scsi/lpfc/lpfc_bsg.c | 4 +- drivers/scsi/lpfc/lpfc_els.c | 7 +- drivers/scsi/lpfc/lpfc_hbadisc.c | 5 +- drivers/scsi/lpfc/lpfc_init.c | 15 +- drivers/scsi/lpfc/lpfc_nportdisc.c | 2 +- drivers/scsi/lpfc/lpfc_nvmet.c | 15 +- drivers/scsi/lpfc/lpfc_sli.c | 34 ++- drivers/scsi/qla2xxx/qla_os.c | 2 +- drivers/scsi/sd_zbc.c | 6 +- drivers/target/iscsi/iscsi_target.c | 30 +- drivers/target/target_core_pr.c | 1 + drivers/target/target_core_tmr.c | 12 +- drivers/target/target_core_transport.c | 26 +- drivers/tty/serdev/core.c | 19 +- drivers/vhost/scsi.c | 5 +- fs/9p/vfs_inode.c | 3 + fs/9p/vfs_inode_dotl.c | 3 + fs/autofs4/waitq.c | 15 +- fs/btrfs/extent-tree.c | 42 ++- fs/buffer.c | 10 +- fs/crypto/crypto.c | 7 +- fs/dax.c | 6 +- fs/ecryptfs/messaging.c | 7 +- fs/ext4/extents.c | 6 +- fs/ext4/inline.c | 10 - fs/ext4/inode.c | 5 - fs/ext4/ioctl.c | 16 +- fs/ext4/super.c | 5 + fs/f2fs/file.c | 6 + fs/isofs/isofs.h | 2 +- fs/isofs/rock.h | 2 +- fs/isofs/util.c | 2 +- fs/lockd/svc.c | 20 +- fs/nfs/dir.c | 4 +- fs/nfs/file.c | 18 +- fs/nfs/nfs4proc.c | 32 +- fs/nfs/nfs4trace.h | 24 +- fs/nfs/super.c | 2 +- fs/nfsd/nfs4state.c | 25 +- fs/nilfs2/segment.c | 6 +- fs/notify/fanotify/fanotify.c | 33 +- fs/notify/fsnotify.c | 10 +- fs/notify/mark.c | 107 ++++--- fs/overlayfs/namei.c | 2 +- include/linux/genhd.h | 1 + include/linux/irq.h | 11 +- include/net/tls.h | 4 + include/sound/control.h | 4 +- include/target/target_core_base.h | 1 + include/trace/events/sunrpc.h | 17 +- include/uapi/linux/rxrpc.h | 10 +- include/uapi/linux/tls.h | 4 - kernel/irq/manage.c | 13 +- kernel/sched/core.c | 3 +- kernel/sched/cpufreq_schedutil.c | 6 +- kernel/sched/rt.c | 316 +++++++------------ kernel/sched/sched.h | 24 +- kernel/sched/topology.c | 6 + lib/mpi/mpi-pow.c | 2 + mm/z3fold.c | 10 +- net/9p/client.c | 5 +- net/9p/trans_fd.c | 6 +- net/9p/trans_virtio.c | 13 +- net/9p/trans_xen.c | 4 +- net/ceph/crypto.c | 4 +- net/nfc/core.c | 2 +- net/sunrpc/xprtrdma/svc_rdma_backchannel.c | 6 +- sound/core/pcm_lib.c | 
6 +- sound/core/timer_compat.c | 12 +- sound/core/vmaster.c | 6 +- sound/hda/hdmi_chmap.c | 2 +- sound/pci/hda/hda_codec.c | 10 +- sound/pci/hda/hda_intel.c | 3 + sound/pci/hda/patch_realtek.c | 5 +- sound/soc/sunxi/sun8i-codec.c | 61 +++- sound/usb/clock.c | 9 +- sound/usb/mixer.c | 15 +- 206 files changed, 2165 insertions(+), 1243 deletions(-)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heiko Carstens heiko.carstens@de.ibm.com
commit d0e810eeb3d326978f248b8f0233a2f30f58c72d upstream.
Rebooting into a new kernel with kexec fails (the system dies) when tried on a machine that has no-execute support. The reason is that the so-called datamover code gets executed with DAT on (the MMU is active) while the page that contains the datamover is marked as non-executable. Therefore, when branching into the datamover, an unexpected program check happens and the machine is dead afterwards.
This can be simply avoided by disabling DAT, which also disables any no-execute checks, just before the datamover gets executed.
In fact the first thing done by the datamover is to disable DAT. The code in the datamover that disables DAT can be removed as well.
Thanks to Michael Holzheu and Gerald Schaefer for tracking this down.
Reviewed-by: Michael Holzheu holzheu@linux.vnet.ibm.com Reviewed-by: Philipp Rudo prudo@linux.vnet.ibm.com Cc: Gerald Schaefer gerald.schaefer@de.ibm.com Cc: Martin Schwidefsky schwidefsky@de.ibm.com Fixes: 57d7f939e7bd ("s390: add no-execute support") Signed-off-by: Heiko Carstens heiko.carstens@de.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/s390/kernel/machine_kexec.c | 1 + arch/s390/kernel/relocate_kernel.S | 3 --- 2 files changed, 1 insertion(+), 3 deletions(-)
--- a/arch/s390/kernel/machine_kexec.c
+++ b/arch/s390/kernel/machine_kexec.c
@@ -269,6 +269,7 @@ static void __do_machine_kexec(void *dat
 	s390_reset_system();
 	data_mover = (relocate_kernel_t) page_to_phys(image->control_code_page);
 
+	__arch_local_irq_stnsm(0xfb);	/* disable DAT - avoid no-execute */
 	/* Call the moving routine */
 	(*data_mover)(&image->head, image->start);
 
--- a/arch/s390/kernel/relocate_kernel.S
+++ b/arch/s390/kernel/relocate_kernel.S
@@ -29,7 +29,6 @@ ENTRY(relocate_kernel)
 		basr	%r13,0		# base address
 	.base:
-		stnsm	sys_msk-.base(%r13),0xfb	# disable DAT
 		stctg	%c0,%c15,ctlregs-.base(%r13)
 		stmg	%r0,%r15,gprregs-.base(%r13)
 		lghi	%r0,3
@@ -103,8 +102,6 @@ ENTRY(relocate_kernel)
 		.align	8
 	load_psw:
 		.long	0x00080000,0x80000000
-	sys_msk:
-		.quad	0
 	ctlregs:
 		.rept	16
 		.quad	0
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heiko Carstens heiko.carstens@de.ibm.com
commit d6e646ad7cfa7034d280459b2b2546288f247144 upstream.
For PREEMPT enabled kernels the runtime instrumentation (RI) code contains a possible use-after-free bug. If a task that makes use of RI exits, it will execute do_exit() while still enabled for preemption.
That function will call exit_thread_runtime_instr() via exit_thread(). If exit_thread_runtime_instr() gets preempted after the RI control block of the task has been freed but before the pointer to it is set to NULL, then save_ri_cb(), called from switch_to(), will write to already freed memory.
Avoid this and simply disable preemption while freeing the control block and setting the pointer to NULL.
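As an illustration, the shape of the fix is roughly the following (a minimal sketch with made-up names; the real change is in the diff below):

#include <linux/preempt.h>
#include <linux/slab.h>

struct ri_cb;				/* opaque per-task control block */

struct my_thread_state {
	struct ri_cb *cb;		/* read by the context-switch path */
};

static void my_exit_thread(struct my_thread_state *ts)
{
	/*
	 * Without the preempt_disable()/preempt_enable() pair, a preemption
	 * between kfree() and the NULL store lets switch_to() save state
	 * into already freed memory.
	 */
	preempt_disable();
	kfree(ts->cb);
	ts->cb = NULL;
	preempt_enable();
}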
Fixes: e4b8b3f33fca ("s390: add support for runtime instrumentation") Reviewed-by: Christian Borntraeger borntraeger@de.ibm.com Signed-off-by: Heiko Carstens heiko.carstens@de.ibm.com Signed-off-by: Martin Schwidefsky schwidefsky@de.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/s390/kernel/runtime_instr.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/s390/kernel/runtime_instr.c
+++ b/arch/s390/kernel/runtime_instr.c
@@ -50,11 +50,13 @@ void exit_thread_runtime_instr(void)
 {
 	struct task_struct *task = current;
 
+	preempt_disable();
 	if (!task->thread.ri_cb)
 		return;
 	disable_runtime_instr();
 	kfree(task->thread.ri_cb);
 	task->thread.ri_cb = NULL;
+	preempt_enable();
 }
 
 SYSCALL_DEFINE1(s390_runtime_instr, int, command)
@@ -65,9 +67,7 @@ SYSCALL_DEFINE1(s390_runtime_instr, int,
 		return -EOPNOTSUPP;
 
 	if (command == S390_RUNTIME_INSTR_STOP) {
-		preempt_disable();
 		exit_thread_runtime_instr();
-		preempt_enable();
 		return 0;
 	}
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heiko Carstens heiko.carstens@de.ibm.com
commit fa1edf3f63c05ca8eacafcd7048ed91e5360f1a8 upstream.
For PREEMPT enabled kernels the guarded storage (GS) code contains a possible use-after-free bug. If a task that makes use of GS exits, it will execute do_exit() while still enabled for preemption.
That function will call exit_thread_gs() via exit_thread(). If exit_thread_gs() gets preempted after the GS control block of the task has been freed but before the pointer to it is set to NULL, then save_gs_cb(), called from switch_to(), will write to already freed memory.
Avoid this and simply disable preemption while freeing the control block and setting the pointer to NULL.
Fixes: 916cda1aa1b4 ("s390: add a system call for guarded storage") Reviewed-by: Christian Borntraeger borntraeger@de.ibm.com Signed-off-by: Heiko Carstens heiko.carstens@de.ibm.com Signed-off-by: Martin Schwidefsky schwidefsky@de.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/s390/kernel/guarded_storage.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/arch/s390/kernel/guarded_storage.c
+++ b/arch/s390/kernel/guarded_storage.c
@@ -14,9 +14,11 @@
 
 void exit_thread_gs(void)
 {
+	preempt_disable();
 	kfree(current->thread.gs_cb);
 	kfree(current->thread.gs_bc_cb);
 	current->thread.gs_cb = current->thread.gs_bc_cb = NULL;
+	preempt_enable();
 }
 
 static int gs_enable(void)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heiko Carstens heiko.carstens@de.ibm.com
commit 5c50538752af7968f53924b22dede8ed4ce4cb3b upstream.
The e7 opcode table does not have an end marker. Hence, when trying to find an unknown e7 instruction, the code will access memory beyond the table until it finds something that matches the opcode, or the kernel crashes, whichever comes first.
This affects not only the in-kernel disassembler but also uprobes and kprobes which refuse to set a probe on unknown instructions, and therefore search the opcode tables to figure out if instructions are known or not.
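To illustrate why the end marker matters, a table scan of this kind relies on a sentinel entry to stop (a simplified sketch, not the actual dis.c code):

struct insn_entry {
	const char *name;
	unsigned char opcode;
};

static const struct insn_entry opcode_e7_sketch[] = {
	{ "vfsq", 0xce },
	{ "vftci", 0x4a },
	{ "", 0 },			/* end marker - this was missing */
};

static const struct insn_entry *find_insn(const struct insn_entry *t,
					  unsigned char opcode)
{
	/* Without the terminating entry this loop reads past the array. */
	for (; t->name[0] != '\0'; t++)
		if (t->opcode == opcode)
			return t;
	return NULL;			/* unknown instruction */
}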
Fixes: 3585cb0280654 ("s390/disassembler: add vector instructions") Signed-off-by: Heiko Carstens heiko.carstens@de.ibm.com Signed-off-by: Martin Schwidefsky schwidefsky@de.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/s390/kernel/dis.c | 1 + 1 file changed, 1 insertion(+)
--- a/arch/s390/kernel/dis.c
+++ b/arch/s390/kernel/dis.c
@@ -1548,6 +1548,7 @@ static struct s390_insn opcode_e7[] = {
 	{ "vfsq", 0xce, INSTR_VRR_VV000MM },
 	{ "vfs", 0xe2, INSTR_VRR_VVV00MM },
 	{ "vftci", 0x4a, INSTR_VRI_VVIMM },
+	{ "", 0, INSTR_INVALID }
 };
 
 static struct s390_insn opcode_eb[] = {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vasily Gorbik gor@linux.vnet.ibm.com
commit b192571d1ae375e0bbe0aa3ccfa1a3c3704454b9 upstream.
The current buffer size of 64 is too small. objdump shows that there are instructions which would require a buffer of up to 75 bytes (with the current formatting). 128 bytes "ought to be enough for anybody".
Also replaces 8 spaces with a single tab to reduce the memory footprint.
Fixes the following KASAN finding:
BUG: KASAN: stack-out-of-bounds in number+0x3fe/0x538
Write of size 1 at addr 000000005a4a75a0 by task bash/1282

CPU: 1 PID: 1282 Comm: bash Not tainted 4.14.0+ #215
Hardware name: IBM 2964 N96 702 (z/VM 6.4.0)
Call Trace:
([<000000000011eeb6>] show_stack+0x56/0x88)
 [<0000000000e1ce1a>] dump_stack+0x15a/0x1b0
 [<00000000004e2994>] print_address_description+0xf4/0x288
 [<00000000004e2cf2>] kasan_report+0x13a/0x230
 [<0000000000e38ae6>] number+0x3fe/0x538
 [<0000000000e3dfe4>] vsnprintf+0x194/0x948
 [<0000000000e3ea42>] sprintf+0xa2/0xb8
 [<00000000001198dc>] print_insn+0x374/0x500
 [<0000000000119346>] show_code+0x4ee/0x538
 [<000000000011f234>] show_registers+0x34c/0x388
 [<000000000011f2ae>] show_regs+0x3e/0xa8
 [<000000000011f502>] die+0x1ea/0x2e8
 [<0000000000138f0e>] do_no_context+0x106/0x168
 [<0000000000139a1a>] do_protection_exception+0x4da/0x7d0
 [<0000000000e55914>] pgm_check_handler+0x16c/0x1c0
 [<000000000090639e>] sysrq_handle_crash+0x46/0x58
([<0000000000000007>] 0x7)
 [<00000000009073fa>] __handle_sysrq+0x102/0x218
 [<0000000000907c06>] write_sysrq_trigger+0xd6/0x100
 [<000000000061d67a>] proc_reg_write+0xb2/0x128
 [<0000000000520be6>] __vfs_write+0xee/0x368
 [<0000000000521222>] vfs_write+0x21a/0x278
 [<000000000052156a>] SyS_write+0xda/0x178
 [<0000000000e555cc>] system_call+0xc4/0x270

The buggy address belongs to the page:
page:000003d1016929c0 count:0 mapcount:0 mapping: (null) index:0x0
flags: 0x0()
raw: 0000000000000000 0000000000000000 0000000000000000 ffffffff00000000
raw: 0000000000000100 0000000000000200 0000000000000000 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 000000005a4a7480: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1
 000000005a4a7500: 00 00 00 00 00 00 00 00 f2 f2 f2 f2 00 00 00 00
 000000005a4a7580: 00 00 00 00 f3 f3 f3 f3 00 00 00 00 00 00 00 00
                               ^
 000000005a4a7600: 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1 f8 f8
 000000005a4a7680: f2 f2 f2 f2 f2 f2 f8 f8 f2 f2 f3 f3 f3 f3 00 00
==================================================================
Signed-off-by: Vasily Gorbik gor@linux.vnet.ibm.com Signed-off-by: Martin Schwidefsky schwidefsky@de.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/s390/kernel/dis.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/s390/kernel/dis.c
+++ b/arch/s390/kernel/dis.c
@@ -1954,7 +1954,7 @@ void show_code(struct pt_regs *regs)
 {
 	char *mode = user_mode(regs) ? "User" : "Krnl";
 	unsigned char code[64];
-	char buffer[64], *ptr;
+	char buffer[128], *ptr;
 	mm_segment_t old_fs;
 	unsigned long addr;
 	int start, end, opsize, hops, i;
@@ -2017,7 +2017,7 @@ void show_code(struct pt_regs *regs)
 		start += opsize;
 		pr_cont("%s", buffer);
 		ptr = buffer;
-		ptr += sprintf(ptr, "\n ");
+		ptr += sprintf(ptr, "\n\t ");
 		hops++;
 	}
 	pr_cont("\n");
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lv Zheng lv.zheng@intel.com
commit 53c5eaabaea9a1b7a96f95ccc486d2ad721d95bb upstream.
Originally, the Samsung quirks removed by commit 4c237371 could be covered by commit e923e8e7 together with the ec_freeze_events=Y mode. But commit 9c40f956 changed ec_freeze_events=Y back to N, making this problem resurface.
Actually, if commit e923e8e7 is robust enough, we can freely change ec_freeze_events mode, so this patch fixes the issue by improving commit e923e8e7.
Related commits listed in the merged order:
Commit: e923e8e79e18fd6be9162f1be6b99a002e9df2cb Subject: ACPI / EC: Fix an issue that SCI_EVT cannot be detected after event is enabled
Commit: 4c237371f290d1ed3b2071dd43554362137b1cce Subject: ACPI / EC: Remove old CLEAR_ON_RESUME quirk
Commit: 9c40f956ce9b331493347d1b3cb7e384f7dc0581 Subject: Revert "ACPI / EC: Enable event freeze mode..." to fix a regression
This patch not only fixes the reported post-resume EC event triggering source issue, but also fixes an unreported similar issue related to the driver bind by adding EC event triggering source in ec_install_handlers().
Fixes: e923e8e79e18 (ACPI / EC: Fix an issue that SCI_EVT cannot be detected after event is enabled) Fixes: 4c237371f290 (ACPI / EC: Remove old CLEAR_ON_RESUME quirk) Fixes: 9c40f956ce9b (Revert "ACPI / EC: Enable event freeze mode..." to fix a regression) Link: https://bugzilla.kernel.org/show_bug.cgi?id=196833 Signed-off-by: Lv Zheng lv.zheng@intel.com Reported-by: Alistair Hamilton ahpatent@gmail.com Tested-by: Alistair Hamilton ahpatent@gmail.com Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/acpi/ec.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-)
--- a/drivers/acpi/ec.c +++ b/drivers/acpi/ec.c @@ -486,8 +486,11 @@ static inline void __acpi_ec_enable_even { if (!test_and_set_bit(EC_FLAGS_QUERY_ENABLED, &ec->flags)) ec_log_drv("event unblocked"); - if (!test_bit(EC_FLAGS_QUERY_PENDING, &ec->flags)) - advance_transaction(ec); + /* + * Unconditionally invoke this once after enabling the event + * handling mechanism to detect the pending events. + */ + advance_transaction(ec); }
static inline void __acpi_ec_disable_event(struct acpi_ec *ec) @@ -1456,11 +1459,10 @@ static int ec_install_handlers(struct ac if (test_bit(EC_FLAGS_STARTED, &ec->flags) && ec->reference_count >= 1) acpi_ec_enable_gpe(ec, true); - - /* EC is fully operational, allow queries */ - acpi_ec_enable_event(ec); } } + /* EC is fully operational, allow queries */ + acpi_ec_enable_event(ec);
return 0; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Viresh Kumar viresh.kumar@linaro.org
commit 07458f6a5171d97511dfbdf6ce549ed2ca0280c7 upstream.
'cached_raw_freq' is used to get the next frequency quickly but should always be in sync with sg_policy->next_freq. There is a case where it is not and in such cases it should be reset to avoid switching to incorrect frequencies.
Consider this case for example:
- policy->cur is 1.2 GHz (Max)
- A new request comes in for 780 MHz and we store that in cached_raw_freq.
- Based on 780 MHz, we calculate the effective frequency as 800 MHz.
- We then see the CPU wasn't idle recently and choose to keep the next freq as 1.2 GHz.
- Now cached_raw_freq is 780 MHz and sg_policy->next_freq is 1.2 GHz.
- If the utilization doesn't change in the next request, the next target frequency will still be 780 MHz and it will match cached_raw_freq. But we will choose 1.2 GHz instead of 800 MHz here.
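In pseudo-C, the stale-cache problem looks roughly like this (a simplified sketch; names follow the description above rather than the exact schedutil code):

struct sg_policy {
	unsigned int next_freq;		/* frequency actually requested */
	unsigned int cached_raw_freq;	/* raw value next_freq was derived from */
};

static unsigned int next_freq_sketch(struct sg_policy *sg, unsigned int raw,
				     unsigned int resolved, int busy)
{
	/* Fast path: if the raw request did not change, reuse next_freq. */
	if (raw == sg->cached_raw_freq)
		return sg->next_freq;	/* returns 1.2 GHz instead of 800 MHz */

	sg->cached_raw_freq = raw;
	if (busy && resolved < sg->next_freq) {
		resolved = sg->next_freq;
		/* The fix: drop the cache once next_freq diverges from it. */
		sg->cached_raw_freq = 0;
	}
	return resolved;
}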
Fixes: b7eaf1aab9f8 (cpufreq: schedutil: Avoid reducing frequency of busy CPUs prematurely) Signed-off-by: Viresh Kumar viresh.kumar@linaro.org Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/sched/cpufreq_schedutil.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -282,8 +282,12 @@ static void sugov_update_single(struct u
 		 * Do not reduce the frequency if the CPU has not been idle
 		 * recently, as the reduction is likely to be premature then.
 		 */
-		if (busy && next_f < sg_policy->next_freq)
+		if (busy && next_f < sg_policy->next_freq) {
 			next_f = sg_policy->next_freq;
+
+			/* Reset cached freq as next_freq has changed */
+			sg_policy->cached_raw_freq = 0;
+		}
 	}
 	sugov_update_commit(sg_policy, time, next_f);
 }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold johan@kernel.org
commit 08fcee289f341786eb3b44e5f2d1dc850943238e upstream.
Serdev currently only supports a single slave device, but the required sanity checks to prevent further registration attempts were missing.
If a serial-port node has two child nodes with compatible properties, the OF code would try to register two slave devices using the same id and name. Driver core will not allow this (and there will be loud complaints), but the controller's slave pointer would already have been set to address of the soon to be deallocated second struct serdev_device. As the first slave device remains registered, this can lead to later use-after-free issues when the slave callbacks are accessed.
Note that while the serdev registration helpers are exported, they are typically only called by serdev core. Any other (out-of-tree) callers must serialise registration and deregistration themselves.
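The registration pattern being enforced is roughly the following (made-up types, with a hypothetical register_slave() helper standing in for device_add(); the real change is in the diff below):

#include <linux/errno.h>

struct slave;

struct controller {
	struct slave *slave;		/* only one slave is supported */
};

static int register_slave(struct slave *s)
{
	return 0;			/* stand-in for device_add() */
}

static int slave_add(struct controller *ctrl, struct slave *s)
{
	int err;

	if (ctrl->slave)		/* reject a second registration */
		return -EBUSY;
	ctrl->slave = s;		/* claim the slot first */

	err = register_slave(s);
	if (err)
		ctrl->slave = NULL;	/* release the slot on failure */
	return err;
}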
Fixes: cd6484e1830b ("serdev: Introduce new bus for serial attached devices") Cc: Rob Herring robh@kernel.org Signed-off-by: Johan Hovold johan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/tty/serdev/core.c | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-)
--- a/drivers/tty/serdev/core.c +++ b/drivers/tty/serdev/core.c @@ -65,21 +65,32 @@ static int serdev_uevent(struct device * */ int serdev_device_add(struct serdev_device *serdev) { + struct serdev_controller *ctrl = serdev->ctrl; struct device *parent = serdev->dev.parent; int err;
dev_set_name(&serdev->dev, "%s-%d", dev_name(parent), serdev->nr);
+ /* Only a single slave device is currently supported. */ + if (ctrl->serdev) { + dev_err(&serdev->dev, "controller busy\n"); + return -EBUSY; + } + ctrl->serdev = serdev; + err = device_add(&serdev->dev); if (err < 0) { dev_err(&serdev->dev, "Can't add %s, status %d\n", dev_name(&serdev->dev), err); - goto err_device_add; + goto err_clear_serdev; }
dev_dbg(&serdev->dev, "device %s registered\n", dev_name(&serdev->dev));
-err_device_add: + return 0; + +err_clear_serdev: + ctrl->serdev = NULL; return err; } EXPORT_SYMBOL_GPL(serdev_device_add); @@ -90,7 +101,10 @@ EXPORT_SYMBOL_GPL(serdev_device_add); */ void serdev_device_remove(struct serdev_device *serdev) { + struct serdev_controller *ctrl = serdev->ctrl; + device_unregister(&serdev->dev); + ctrl->serdev = NULL; } EXPORT_SYMBOL_GPL(serdev_device_remove);
@@ -295,7 +309,6 @@ struct serdev_device *serdev_device_allo return NULL;
serdev->ctrl = ctrl; - ctrl->serdev = serdev; device_initialize(&serdev->dev); serdev->dev.parent = &ctrl->dev; serdev->dev.bus = &serdev_bus_type;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul E. McKenney paulmck@linux.vnet.ibm.com
commit 7c2102e56a3f7d85b5d8f33efbd7aecc1f36fdd8 upstream.
The current implementation of synchronize_sched_expedited() incorrectly assumes that resched_cpu() is unconditional, which it is not. This means that synchronize_sched_expedited() can hang when resched_cpu()'s trylock fails as follows (analysis by Neeraj Upadhyay):
o CPU1 is waiting for expedited wait to complete:
sync_rcu_exp_select_cpus
     rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
     IPI sent to CPU5

synchronize_sched_expedited_wait
        ret = swait_event_timeout(rsp->expedited_wq,
                                  sync_rcu_preempt_exp_done(rnp_root),
                                  jiffies_stall);
expmask = 0x20, CPU 5 in idle path (in cpuidle_enter())
o CPU5 handles IPI and fails to acquire rq lock.
Handles IPI
     sync_sched_exp_handler
         resched_cpu
             returns while failing to try lock acquire rq->lock
         need_resched is not set
o CPU5 calls rcu_idle_enter() and as need_resched is not set, goes to idle (schedule() is not called).
o CPU 1 reports RCU stall.
Given that resched_cpu() is now used only by RCU, this commit fixes the assumption by making resched_cpu() unconditional.
Reported-by: Neeraj Upadhyay neeraju@codeaurora.org Suggested-by: Neeraj Upadhyay neeraju@codeaurora.org Signed-off-by: Paul E. McKenney paulmck@linux.vnet.ibm.com Acked-by: Steven Rostedt (VMware) rostedt@goodmis.org Acked-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/sched/core.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -505,8 +505,7 @@ void resched_cpu(int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
 
-	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-		return;
+	raw_spin_lock_irqsave(&rq->lock, flags);
 	resched_curr(rq);
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers ebiggers@google.com
commit 1d9ddde12e3c9bab7f3d3484eb9446315e3571ca upstream.
On a non-preemptible kernel, if KEYCTL_DH_COMPUTE is called with the largest permitted inputs (16384 bits), the kernel spends 10+ seconds doing modular exponentiation in mpi_powm() without rescheduling. If all threads do it, it locks up the system. Moreover, it can cause rcu_sched-stall warnings.
Notwithstanding the insanity of doing this calculation in kernel mode rather than in userspace, fix it by calling cond_resched() as each bit from the exponent is processed. It's still noninterruptible, but at least it's preemptible now.
Do the cond_resched() once per bit rather than once per MPI limb because each limb might still easily take 100+ milliseconds on slow CPUs.
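The resulting loop structure is simply the following (a minimal sketch of the pattern, not the mpi_powm() internals):

#include <linux/sched.h>

static void process_exponent_bits(unsigned long nbits)
{
	unsigned long i;

	for (i = 0; i < nbits; i++) {
		/* ... handle one exponent bit (can be expensive) ... */
		cond_resched();		/* once per bit, as described above */
	}
}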
Signed-off-by: Eric Biggers ebiggers@google.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- lib/mpi/mpi-pow.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/lib/mpi/mpi-pow.c
+++ b/lib/mpi/mpi-pow.c
@@ -26,6 +26,7 @@
  * however I decided to publish this code under the plain GPL.
  */
 
+#include <linux/sched.h>
 #include <linux/string.h>
 #include "mpi-internal.h"
 #include "longlong.h"
@@ -256,6 +257,7 @@ int mpi_powm(MPI res, MPI base, MPI exp,
 			}
 			e <<= 1;
 			c--;
+			cond_resched();
 		}
 
 		i--;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tom Lendacky thomas.lendacky@amd.com
commit ac5292e9a294618cecb31109d1ba265e3d027ba2 upstream.
When crosvm is used to boot a kernel as a VM, the SMP MP-table is found at physical address 0x0. This causes mpf_base to be set to 0 and a subsequent "if (!mpf_base)" check in default_get_smp_config() results in the MP-table not being parsed. Further into the boot this results in an oops when attempting a read_apic_id().
Add a boolean variable that is set to true when the MP-table is found. Use this variable for testing if the MP-table was found so that even a value of 0 for mpf_base will result in continued parsing of the MP-table.
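The pattern is the usual one for values whose "empty" encoding is also a legal value (a sketch with simplified names):

#include <linux/types.h>

static unsigned long mpf_base;	/* physical address, may legitimately be 0 */
static bool mpf_found;

static void record_mpf(unsigned long base)
{
	mpf_base = base;
	mpf_found = true;	/* "found" no longer conflated with "non-zero" */
}

static bool have_mpf(void)
{
	return mpf_found;	/* instead of: return mpf_base != 0; */
}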
Fixes: 5997efb96756 ("x86/boot: Use memremap() to map the MPF and MPC data") Reported-by: Tomeu Vizoso tomeu@tomeuvizoso.net Signed-off-by: Tom Lendacky thomas.lendacky@amd.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Cc: Peter Zijlstra peterz@infradead.org Cc: Borislav Petkov bp@alien8.de Cc: regression@leemhuis.info Link: https://lkml.kernel.org/r/20171106201753.23059.86674.stgit@tlendack-t1.amdof... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kernel/mpparse.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/arch/x86/kernel/mpparse.c +++ b/arch/x86/kernel/mpparse.c @@ -431,6 +431,7 @@ static inline void __init construct_defa }
static unsigned long mpf_base; +static bool mpf_found;
static unsigned long __init get_mpc_size(unsigned long physptr) { @@ -504,7 +505,7 @@ void __init default_get_smp_config(unsig if (!smp_found_config) return;
- if (!mpf_base) + if (!mpf_found) return;
if (acpi_lapic && early) @@ -593,6 +594,7 @@ static int __init smp_scan_config(unsign smp_found_config = 1; #endif mpf_base = base; + mpf_found = true;
pr_info("found SMP MP-table at [mem %#010lx-%#010lx] mapped at [%p]\n", base, base + sizeof(*mpf) - 1, mpf); @@ -858,7 +860,7 @@ static int __init update_mp_table(void) if (!enable_update_mptable) return 0;
- if (!mpf_base) + if (!mpf_found) return 0;
mpf = early_memremap(mpf_base, sizeof(*mpf));
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Masami Hiramatsu mhiramat@kernel.org
commit 12a78d43de767eaf8fb272facb7a7b6f2dc6a9df upstream.
The kbuild test robot reported this build warning:
Warning: arch/x86/tools/test_get_len found difference at <jump_table>:ffffffff8103dd2c
Warning: ffffffff8103dd82: f6 09 d8    testb $0xd8,(%rcx)
Warning: objdump says 3 bytes, but insn_get_length() says 2
Warning: decoded and checked 1569014 instructions with 1 warnings
This sequence seems to be a new instruction not in the opcode map in the Intel SDM.
The instruction sequence is "F6 09 d8", meaning Group3(F6), MOD(00)REG(001)RM(001), and 0xd8. Intel SDM vol2 A.4 Table A-6 says the table index in the group is the "Encoding of Bits 5,4,3 of the ModR/M Byte (bits 2,1,0 in parenthesis)".
In that table, the opcodes are listed by the REG-bits index as:
000          001           010   011   100          101           110          111
TEST Ib/Iz,  (undefined),  NOT,  NEG,  MUL AL/rAX,  IMUL AL/rAX,  DIV AL/rAX,  IDIV AL/rAX
So, it seems TEST Ib is assigned to 001.
Add the new pattern.
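For reference, walking the bytes by hand with the new pattern (this just restates the reasoning above): 0xF6 selects Grp3_1, the ModR/M byte 0x09 is mod=00, reg=001, rm=001, and with index 001 now mapping to TEST Eb,Ib an 8-bit immediate (0xd8) follows, so insn_get_length() agrees with objdump that the instruction is 3 bytes.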
Reported-by: kbuild test robot fengguang.wu@intel.com Signed-off-by: Masami Hiramatsu mhiramat@kernel.org Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org Cc: H. Peter Anvin hpa@zytor.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/lib/x86-opcode-map.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/lib/x86-opcode-map.txt
+++ b/arch/x86/lib/x86-opcode-map.txt
@@ -896,7 +896,7 @@ EndTable
 
 GrpTable: Grp3_1
 0: TEST Eb,Ib
-1:
+1: TEST Eb,Ib
 2: NOT Eb
 3: NEG Eb
 4: MUL AL,Eb
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Lutomirski luto@kernel.org
commit 548c3050ea8d16997ae27f9e080a8338a606fc93 upstream.
When I added entry_SYSCALL_64_after_hwframe(), I left TRACE_IRQS_OFF before it. This means that users of entry_SYSCALL_64_after_hwframe() were responsible for invoking TRACE_IRQS_OFF, and the one and only user (Xen, added in the same commit) got it wrong.
I think this would manifest as a warning if a Xen PV guest with CONFIG_DEBUG_LOCKDEP=y were used with context tracking. (The context tracking bit is to cause lockdep to get invoked before we turn IRQs back on.) I haven't tested that for real yet because I can't get a kernel configured like that to boot at all on Xen PV.
Move TRACE_IRQS_OFF below the label.
Signed-off-by: Andy Lutomirski luto@kernel.org Cc: Boris Ostrovsky boris.ostrovsky@oracle.com Cc: Borislav Petkov bpetkov@suse.de Cc: Brian Gerst brgerst@gmail.com Cc: Dave Hansen dave.hansen@intel.com Cc: Josh Poimboeuf jpoimboe@redhat.com Cc: Juergen Gross jgross@suse.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Fixes: 8a9949bc71a7 ("x86/xen/64: Rearrange the SYSCALL entries") Link: http://lkml.kernel.org/r/9150aac013b7b95d62c2336751d5b6e91d2722aa.1511325444... Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/entry/entry_64.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -148,8 +148,6 @@ ENTRY(entry_SYSCALL_64)
 	movq	%rsp, PER_CPU_VAR(rsp_scratch)
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 
-	TRACE_IRQS_OFF
-
 	/* Construct struct pt_regs on stack */
 	pushq	$__USER_DS			/* pt_regs->ss */
 	pushq	PER_CPU_VAR(rsp_scratch)	/* pt_regs->sp */
@@ -170,6 +168,8 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
 	sub	$(6*8), %rsp			/* pt_regs->bp, bx, r12-15 not saved */
 	UNWIND_HINT_REGS extra=0
 
+	TRACE_IRQS_OFF
+
 	/*
 	 * If we need to do entry work or if we guess we'll need to do
 	 * exit work, go straight to the slow path.
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Lutomirski luto@kernel.org
commit ca37e57bbe0cf1455ea3e84eb89ed04a132d59e1 upstream.
Running this code with IRQs enabled (where dummy_lock is a spinlock):
static void check_load_gs_index(void)
{
	/* This will fail. */
	load_gs_index(0xffff);

	spin_lock(&dummy_lock);
	spin_unlock(&dummy_lock);
}
Will generate a lockdep warning. The issue is that the actual write to %gs would cause an exception with IRQs disabled, and the exception handler would, as an inadvertent side effect, update irqflag tracing to reflect the IRQs-off status. native_load_gs_index() would then turn IRQs back on and return with irqflag tracing still thinking that IRQs were off. The dummy lock-and-unlock causes lockdep to notice the error and warn.
Fix it by adding the missing tracing.
Apparently nothing did this in a context where it mattered. I haven't tried to find a code path that would actually exhibit the warning if appropriately nasty user code were running.
I suspect that the security impact of this bug is very, very low -- production systems don't run with lockdep enabled, and the warning is mostly harmless anyway.
Found during a quick audit of the entry code to try to track down an unrelated bug that Ingo found in some still-in-development code.
Signed-off-by: Andy Lutomirski luto@kernel.org Cc: Borislav Petkov bpetkov@suse.de Cc: Brian Gerst brgerst@gmail.com Cc: Dave Hansen dave.hansen@intel.com Cc: Josh Poimboeuf jpoimboe@redhat.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Peter Zijlstra peterz@infradead.org Cc: Thomas Gleixner tglx@linutronix.de Link: http://lkml.kernel.org/r/e1aeb0e6ba8dd430ec36c8a35e63b429698b4132.1511411918... Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/entry/entry_64.S | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
--- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -51,15 +51,19 @@ ENTRY(native_usergs_sysret64) END(native_usergs_sysret64) #endif /* CONFIG_PARAVIRT */
-.macro TRACE_IRQS_IRETQ +.macro TRACE_IRQS_FLAGS flags:req #ifdef CONFIG_TRACE_IRQFLAGS - bt $9, EFLAGS(%rsp) /* interrupts off? */ + bt $9, \flags /* interrupts off? */ jnc 1f TRACE_IRQS_ON 1: #endif .endm
+.macro TRACE_IRQS_IRETQ + TRACE_IRQS_FLAGS EFLAGS(%rsp) +.endm + /* * When dynamic function tracer is enabled it will add a breakpoint * to all locations that it is about to modify, sync CPUs, update @@ -923,11 +927,13 @@ ENTRY(native_load_gs_index) FRAME_BEGIN pushfq DISABLE_INTERRUPTS(CLBR_ANY & ~CLBR_RDI) + TRACE_IRQS_OFF SWAPGS .Lgs_change: movl %edi, %gs 2: ALTERNATIVE "", "mfence", X86_BUG_SWAPGS_FENCE SWAPGS + TRACE_IRQS_FLAGS (%rsp) popfq FRAME_END ret
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andi Kleen ak@linux.intel.com
commit 58ba4d5a25579e5c7e312bd359c95f3a9a0a242c upstream.
0day testing reported a perf test regression on Haswell systems without RTM. Commit a5df70c35 hides the in_tx/in_tx_cp attributes when RTM is not available, but the TSX events are still available in sysfs. Due to the missing attributes the event parser fails on those files.
Don't show the TSX events in sysfs when RTM is not available on Haswell/Broadwell/Skylake.
Fixes: a5df70c354c2 (perf/x86: Only show format attributes when supported) Reported-by: kernel test robot xiaolong.ye@intel.com Tested-by: Jin Yao yao.jin@linux.intel.com Signed-off-by: Andi Kleen ak@linux.intel.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Acked-by: Peter Zijlstra peterz@infradead.org Link: https://lkml.kernel.org/r/20171109000718.14137-1-andi@firstfloor.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/events/intel/core.c | 35 +++++++++++++++++++++++------------ 1 file changed, 23 insertions(+), 12 deletions(-)
--- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -3730,6 +3730,19 @@ EVENT_ATTR_STR(cycles-t, cycles_t, "even EVENT_ATTR_STR(cycles-ct, cycles_ct, "event=0x3c,in_tx=1,in_tx_cp=1");
static struct attribute *hsw_events_attrs[] = { + EVENT_PTR(mem_ld_hsw), + EVENT_PTR(mem_st_hsw), + EVENT_PTR(td_slots_issued), + EVENT_PTR(td_slots_retired), + EVENT_PTR(td_fetch_bubbles), + EVENT_PTR(td_total_slots), + EVENT_PTR(td_total_slots_scale), + EVENT_PTR(td_recovery_bubbles), + EVENT_PTR(td_recovery_bubbles_scale), + NULL +}; + +static struct attribute *hsw_tsx_events_attrs[] = { EVENT_PTR(tx_start), EVENT_PTR(tx_commit), EVENT_PTR(tx_abort), @@ -3742,18 +3755,16 @@ static struct attribute *hsw_events_attr EVENT_PTR(el_conflict), EVENT_PTR(cycles_t), EVENT_PTR(cycles_ct), - EVENT_PTR(mem_ld_hsw), - EVENT_PTR(mem_st_hsw), - EVENT_PTR(td_slots_issued), - EVENT_PTR(td_slots_retired), - EVENT_PTR(td_fetch_bubbles), - EVENT_PTR(td_total_slots), - EVENT_PTR(td_total_slots_scale), - EVENT_PTR(td_recovery_bubbles), - EVENT_PTR(td_recovery_bubbles_scale), NULL };
+static __init struct attribute **get_hsw_events_attrs(void) +{ + return boot_cpu_has(X86_FEATURE_RTM) ? + merge_attr(hsw_events_attrs, hsw_tsx_events_attrs) : + hsw_events_attrs; +} + static ssize_t freeze_on_smi_show(struct device *cdev, struct device_attribute *attr, char *buf) @@ -4182,7 +4193,7 @@ __init int intel_pmu_init(void)
x86_pmu.hw_config = hsw_hw_config; x86_pmu.get_event_constraints = hsw_get_event_constraints; - x86_pmu.cpu_events = hsw_events_attrs; + x86_pmu.cpu_events = get_hsw_events_attrs(); x86_pmu.lbr_double_abort = true; extra_attr = boot_cpu_has(X86_FEATURE_RTM) ? hsw_format_attr : nhm_format_attr; @@ -4221,7 +4232,7 @@ __init int intel_pmu_init(void)
x86_pmu.hw_config = hsw_hw_config; x86_pmu.get_event_constraints = hsw_get_event_constraints; - x86_pmu.cpu_events = hsw_events_attrs; + x86_pmu.cpu_events = get_hsw_events_attrs(); x86_pmu.limit_period = bdw_limit_period; extra_attr = boot_cpu_has(X86_FEATURE_RTM) ? hsw_format_attr : nhm_format_attr; @@ -4279,7 +4290,7 @@ __init int intel_pmu_init(void) extra_attr = boot_cpu_has(X86_FEATURE_RTM) ? hsw_format_attr : nhm_format_attr; extra_attr = merge_attr(extra_attr, skl_format_attr); - x86_pmu.cpu_events = hsw_events_attrs; + x86_pmu.cpu_events = get_hsw_events_attrs(); intel_pmu_pebs_data_source_skl( boot_cpu_data.x86_model == INTEL_FAM6_SKYLAKE_X); pr_cont("Skylake events, ");
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Catalin Marinas catalin.marinas@arm.com
commit 6218f96c58dbf44a06aeaf767aab1f54fc397838 upstream.
The generic pte_access_permitted() implementation only checks for pte_present() (together with the write permission where applicable). However, for both kernel ptes and PROT_NONE mappings pte_present() also returns true on arm64 even though such mappings are not user accessible. Additionally, arm64 now supports execute-only user mappings (PROT_EXEC only), which are implemented by clearing the PTE_USER bit.
With this patch the arm64 implementation of pte_access_permitted() checks for the PTE_VALID and PTE_USER bits together with writable access if applicable.
Reported-by: Al Viro viro@zeniv.linux.org.uk Signed-off-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Will Deacon will.deacon@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm64/include/asm/pgtable.h | 14 ++++++++++++++ 1 file changed, 14 insertions(+)
--- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -98,6 +98,8 @@ extern unsigned long empty_zero_page[PAG ((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN)) #define pte_valid_young(pte) \ ((pte_val(pte) & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF)) +#define pte_valid_user(pte) \ + ((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
/* * Could the pte be present in the TLB? We must check mm_tlb_flush_pending @@ -107,6 +109,18 @@ extern unsigned long empty_zero_page[PAG #define pte_accessible(mm, pte) \ (mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid_young(pte))
+/* + * p??_access_permitted() is true for valid user mappings (subject to the + * write permission check) other than user execute-only which do not have the + * PTE_USER bit set. PROT_NONE mappings do not have the PTE_VALID bit set. + */ +#define pte_access_permitted(pte, write) \ + (pte_valid_user(pte) && (!(write) || pte_write(pte))) +#define pmd_access_permitted(pmd, write) \ + (pte_access_permitted(pmd_pte(pmd), (write))) +#define pud_access_permitted(pud, write) \ + (pte_access_permitted(pud_pte(pud), (write))) + static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot) { pte_val(pte) &= ~pgprot_val(prot);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Philip Derrin philip@cog.systems
commit 400eeffaffc7232c0ae1134fe04e14ae4fb48d8c upstream.
Currently, for ARM kernels with CONFIG_ARM_LPAE and CONFIG_STRICT_KERNEL_RWX enabled, the 2MiB pages mapping the kernel code and rodata are writable. They are marked read-only in a software bit (L_PMD_SECT_RDONLY) but the hardware read-only bit is not set (PMD_SECT_AP2).
For user mappings, the logic that propagates the software bit to the hardware bit is in set_pmd_at(); but for the kernel, section_update() writes the PMDs directly, skipping this logic.
The fix is to set PMD_SECT_AP2 for read-only sections in section_update(), at the same time as L_PMD_SECT_RDONLY.
Fixes: 1e3479225acb ("ARM: 8275/1: mm: fix PMD_SECT_RDONLY undeclared compile error") Signed-off-by: Philip Derrin philip@cog.systems Reported-by: Neil Dick neil@cog.systems Tested-by: Neil Dick neil@cog.systems Tested-by: Laura Abbott labbott@redhat.com Reviewed-by: Kees Cook keescook@chromium.org Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm/mm/init.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/arm/mm/init.c +++ b/arch/arm/mm/init.c @@ -639,8 +639,8 @@ static struct section_perm ro_perms[] = .start = (unsigned long)_stext, .end = (unsigned long)__init_begin, #ifdef CONFIG_ARM_LPAE - .mask = ~L_PMD_SECT_RDONLY, - .prot = L_PMD_SECT_RDONLY, + .mask = ~(L_PMD_SECT_RDONLY | PMD_SECT_AP2), + .prot = L_PMD_SECT_RDONLY | PMD_SECT_AP2, #else .mask = ~(PMD_SECT_APX | PMD_SECT_AP_WRITE), .prot = PMD_SECT_APX | PMD_SECT_AP_WRITE,
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Philip Derrin philip@cog.systems
commit 3b0c0c922ff4be275a8beb87ce5657d16f355b54 upstream.
When CONFIG_ARM_LPAE is set, the PMD dump relies on the software read-only bit to determine whether a page is writable. This concealed a bug which left the kernel text section writable (AP2=0) while marked read-only in the software bit.
In a kernel with the AP2 bug, the dump looks like this:
---[ Kernel Mapping ]---
0xc0000000-0xc0200000    2M  RW NX SHD
0xc0200000-0xc0600000    4M  ro x  SHD
0xc0600000-0xc0800000    2M  ro NX SHD
0xc0800000-0xc4800000   64M  RW NX SHD
The fix is to check that the software and hardware bits are both set before displaying "ro". The dump then shows the true perms:
---[ Kernel Mapping ]---
0xc0000000-0xc0200000    2M  RW NX SHD
0xc0200000-0xc0600000    4M  RW x  SHD
0xc0600000-0xc0800000    2M  RW NX SHD
0xc0800000-0xc4800000   64M  RW NX SHD
Fixes: ded947798469 ("ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE") Signed-off-by: Philip Derrin philip@cog.systems Tested-by: Neil Dick neil@cog.systems Reviewed-by: Kees Cook keescook@chromium.org Signed-off-by: Russell King rmk+kernel@armlinux.org.uk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm/mm/dump.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/arm/mm/dump.c +++ b/arch/arm/mm/dump.c @@ -129,8 +129,8 @@ static const struct prot_bits section_bi .val = PMD_SECT_USER, .set = "USR", }, { - .mask = L_PMD_SECT_RDONLY, - .val = L_PMD_SECT_RDONLY, + .mask = L_PMD_SECT_RDONLY | PMD_SECT_AP2, + .val = L_PMD_SECT_RDONLY | PMD_SECT_AP2, .set = "ro", .clear = "RW", #elif __LINUX_ARM_ARCH__ >= 6
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry V. Levin ldv@altlinux.org
commit b9f3eb499d84f8d4adcb2f9212ec655700b28228 upstream.
Move inclusion of a private kernel header <net/tcp.h> from uapi/linux/tls.h to its only user - net/tls.h, to fix the following linux/tls.h userspace compilation error:
/usr/include/linux/tls.h:41:21: fatal error: net/tcp.h: No such file or directory
As uapi/linux/tls.h was totally unusable for userspace up to this point, clean up this header file further by moving other redundant includes to net/tls.h.
Fixes: 3c4d7559159b ("tls: kernel TLS support") Signed-off-by: Dmitry V. Levin ldv@altlinux.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- include/net/tls.h | 4 ++++ include/uapi/linux/tls.h | 4 ---- 2 files changed, 4 insertions(+), 4 deletions(-)
--- a/include/net/tls.h +++ b/include/net/tls.h @@ -35,6 +35,10 @@ #define _TLS_OFFLOAD_H
#include <linux/types.h> +#include <asm/byteorder.h> +#include <linux/socket.h> +#include <linux/tcp.h> +#include <net/tcp.h>
#include <uapi/linux/tls.h>
--- a/include/uapi/linux/tls.h +++ b/include/uapi/linux/tls.h @@ -35,10 +35,6 @@ #define _UAPI_LINUX_TLS_H
#include <linux/types.h> -#include <asm/byteorder.h> -#include <linux/socket.h> -#include <linux/tcp.h> -#include <net/tcp.h>
/* TLS socket options */ #define TLS_TX 1 /* Set transmit parameters */
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dmitry V. Levin ldv@altlinux.org
commit 0eef304bc9f7d079a1165e8cd2f24b078e9e1f2a upstream.
Consistently use types provided by <linux/types.h> to fix the following linux/rxrpc.h userspace compilation errors:
/usr/include/linux/rxrpc.h:24:2: error: unknown type name 'u16'
  u16 srx_service; /* service desired */
/usr/include/linux/rxrpc.h:25:2: error: unknown type name 'u16'
  u16 transport_type; /* type of transport socket (SOCK_DGRAM) */
/usr/include/linux/rxrpc.h:26:2: error: unknown type name 'u16'
  u16 transport_len; /* length of transport address */
Use __kernel_sa_family_t instead of sa_family_t the same way as uapi/linux/in.h does, to fix the following linux/rxrpc.h userspace compilation errors:
/usr/include/linux/rxrpc.h:23:2: error: unknown type name 'sa_family_t'
  sa_family_t srx_family; /* address family */
/usr/include/linux/rxrpc.h:28:3: error: unknown type name 'sa_family_t'
  sa_family_t family; /* transport address family */
Fixes: 727f8914477e ("rxrpc: Expose UAPI definitions to userspace") Signed-off-by: Dmitry V. Levin ldv@altlinux.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- include/uapi/linux/rxrpc.h | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
--- a/include/uapi/linux/rxrpc.h +++ b/include/uapi/linux/rxrpc.h @@ -20,12 +20,12 @@ * RxRPC socket address */ struct sockaddr_rxrpc { - sa_family_t srx_family; /* address family */ - u16 srx_service; /* service desired */ - u16 transport_type; /* type of transport socket (SOCK_DGRAM) */ - u16 transport_len; /* length of transport address */ + __kernel_sa_family_t srx_family; /* address family */ + __u16 srx_service; /* service desired */ + __u16 transport_type; /* type of transport socket (SOCK_DGRAM) */ + __u16 transport_len; /* length of transport address */ union { - sa_family_t family; /* transport address family */ + __kernel_sa_family_t family; /* transport address family */ struct sockaddr_in sin; /* IPv4 transport address */ struct sockaddr_in6 sin6; /* IPv6 transport address */ } transport;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ben Hutchings ben@decadent.org.uk
commit a3f143106596d739e7fbc4b84c96b1475247d876 upstream.
__cmpxchg64_local_generic() is atomic only w.r.t. tasks and interrupts on the same CPU (that's what the 'local' means). We can't use it to implement cmpxchg64() in SMP configurations.
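To make the "local only" point concrete, here is a rough sketch of what the generic fallback does (approximating asm-generic/cmpxchg-local.h; the function name is illustrative). Masking interrupts serializes only against tasks and IRQs on the current CPU, so another CPU can still race with the read-modify-write:

	static inline u64 cmpxchg64_local_sketch(volatile u64 *ptr, u64 old, u64 new)
	{
		unsigned long flags;
		u64 prev;

		raw_local_irq_save(flags);	/* stops IRQs on *this* CPU only */
		prev = *ptr;
		if (prev == old)
			*ptr = new;		/* another CPU may write *ptr concurrently */
		raw_local_irq_restore(flags);
		return prev;
	}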
So, for 32-bit SMP configurations:
  - Don't define cmpxchg64()
  - Don't enable HAVE_VIRT_CPU_ACCOUNTING_GEN, which requires it
Fixes: e2093c7b03c1 ("MIPS: Fall back to generic implementation of ...") Fixes: bb877e96bea1 ("MIPS: Add support for full dynticks CPU time accounting") Signed-off-by: Ben Hutchings ben@decadent.org.uk Cc: Ralf Baechle ralf@linux-mips.org Cc: Deng-Cheng Zhu dengcheng.zhu@mips.com Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17413/ Signed-off-by: James Hogan jhogan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/Kconfig | 2 +- arch/mips/include/asm/cmpxchg.h | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-)
--- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -65,7 +65,7 @@ config MIPS select HAVE_PERF_EVENTS select HAVE_REGS_AND_STACK_ACCESS_API select HAVE_SYSCALL_TRACEPOINTS - select HAVE_VIRT_CPU_ACCOUNTING_GEN + select HAVE_VIRT_CPU_ACCOUNTING_GEN if 64BIT || !SMP select IRQ_FORCED_THREADING select MODULES_USE_ELF_RELA if MODULES && 64BIT select MODULES_USE_ELF_REL if MODULES --- a/arch/mips/include/asm/cmpxchg.h +++ b/arch/mips/include/asm/cmpxchg.h @@ -204,8 +204,10 @@ static inline unsigned long __cmpxchg(vo #else #include <asm-generic/cmpxchg-local.h> #define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n)) +#ifndef CONFIG_SMP #define cmpxchg64(ptr, o, n) cmpxchg64_local((ptr), (o), (n)) #endif +#endif
#undef __scbeqz
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mathias Kresin dev@kresin.me
commit 8ef4b43cd3794d63052d85898e42424fd3b14d24 upstream.
According to the datasheet the REFCLK pin is shared with GPIO#37 and the PERST pin is shared with GPIO#36.
Fixes: 53263a1c6852 ("MIPS: ralink: add mt7628an support") Signed-off-by: Mathias Kresin dev@kresin.me Acked-by: John Crispin john@phrozen.org Cc: Ralf Baechle ralf@linux-mips.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16046/ Signed-off-by: James Hogan jhogan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/ralink/mt7620.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/mips/ralink/mt7620.c +++ b/arch/mips/ralink/mt7620.c @@ -145,8 +145,8 @@ static struct rt2880_pmx_func i2c_grp_mt FUNC("i2c", 0, 4, 2), };
-static struct rt2880_pmx_func refclk_grp_mt7628[] = { FUNC("reclk", 0, 36, 1) }; -static struct rt2880_pmx_func perst_grp_mt7628[] = { FUNC("perst", 0, 37, 1) }; +static struct rt2880_pmx_func refclk_grp_mt7628[] = { FUNC("reclk", 0, 37, 1) }; +static struct rt2880_pmx_func perst_grp_mt7628[] = { FUNC("perst", 0, 36, 1) }; static struct rt2880_pmx_func wdt_grp_mt7628[] = { FUNC("wdt", 0, 38, 1) }; static struct rt2880_pmx_func spi_grp_mt7628[] = { FUNC("spi", 0, 7, 4) };
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mathias Kresin dev@kresin.me
commit 05a67cc258e75ac9758e6f13d26337b8be51162a upstream.
There is a typo inside the pinmux setup code. The function is called refclk and not reclk.
Fixes: 53263a1c6852 ("MIPS: ralink: add mt7628an support") Signed-off-by: Mathias Kresin dev@kresin.me Acked-by: John Crispin john@phrozen.org Cc: Ralf Baechle ralf@linux-mips.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16047/ Signed-off-by: James Hogan jhogan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/ralink/mt7620.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/mips/ralink/mt7620.c +++ b/arch/mips/ralink/mt7620.c @@ -145,7 +145,7 @@ static struct rt2880_pmx_func i2c_grp_mt FUNC("i2c", 0, 4, 2), };
-static struct rt2880_pmx_func refclk_grp_mt7628[] = { FUNC("reclk", 0, 37, 1) }; +static struct rt2880_pmx_func refclk_grp_mt7628[] = { FUNC("refclk", 0, 37, 1) }; static struct rt2880_pmx_func perst_grp_mt7628[] = { FUNC("perst", 0, 36, 1) }; static struct rt2880_pmx_func wdt_grp_mt7628[] = { FUNC("wdt", 0, 38, 1) }; static struct rt2880_pmx_func spi_grp_mt7628[] = { FUNC("spi", 0, 7, 4) };
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Simon Guinot simon.guinot@sequanux.org
commit 0d63785c6b94b5d2f095f90755825f90eea791f5 upstream.
The mvneta controller provides an 8-bit register to update the pending Tx descriptor counter, so a maximum of 255 Tx descriptors can be added at once. In the current code the mvneta_txq_pend_desc_add function assumes the caller takes care of this limit, but that is not the case. In some situations (xmit_more flag), more than 255 descriptors are added. When this happens, the Tx descriptor counter register is updated with a wrong value, which breaks the whole Tx queue management.
This patch fixes the issue by allowing the mvneta_txq_pend_desc_add function to process more than 255 Tx descriptors.
Fixes: 2a90f7e1d5d0 ("net: mvneta: add xmit_more support") Signed-off-by: Simon Guinot simon.guinot@sequanux.org Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/marvell/mvneta.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-)
--- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -816,11 +816,14 @@ static void mvneta_txq_pend_desc_add(str { u32 val;
- /* Only 255 descriptors can be added at once ; Assume caller - * process TX desriptors in quanta less than 256 - */ - val = pend_desc + txq->pending; - mvreg_write(pp, MVNETA_TXQ_UPDATE_REG(txq->id), val); + pend_desc += txq->pending; + + /* Only 255 Tx descriptors can be added at once */ + do { + val = min(pend_desc, 255); + mvreg_write(pp, MVNETA_TXQ_UPDATE_REG(txq->id), val); + pend_desc -= val; + } while (pend_desc > 0); txq->pending = 0; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Josef Bacik jbacik@fb.com
commit ff57dc94faec023abc267cdc45766fccff497557 upstream.
If we have a pending signal or the user kills their application then it'll bring down the whole device, which is less than awesome. Instead, wait uninterruptibly for the dead timeout so we're sure we gave it our best shot.
Fixes: 560bc4b39952 ("nbd: handle dead connections") Signed-off-by: Josef Bacik jbacik@fb.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/block/nbd.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -723,9 +723,9 @@ static int wait_for_reconnect(struct nbd return 0; if (test_bit(NBD_DISCONNECTED, &config->runtime_flags)) return 0; - wait_event_interruptible_timeout(config->conn_wait, - atomic_read(&config->live_connections), - config->dead_conn_timeout); + wait_event_timeout(config->conn_wait, + atomic_read(&config->live_connections), + config->dead_conn_timeout); return atomic_read(&config->live_connections); }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Josef Bacik jbacik@fb.com
commit 6a468d5990ecd1c2d07dd85f8633bbdd0ba61c40 upstream.
We can end up sleeping for a while waiting for the dead timeout, which means we could get the per-request timer to fire. We did handle this case, but if the dead timeout happened right after we submitted, we'd either tear down the connection or possibly requeue as we're handling an error, and race with the endio, which can lead to panics and other hilarity.
Fixes: 560bc4b39952 ("nbd: handle dead connections") Signed-off-by: Josef Bacik jbacik@fb.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/block/nbd.c | 20 +++++++------------- 1 file changed, 7 insertions(+), 13 deletions(-)
--- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -288,15 +288,6 @@ static enum blk_eh_timer_return nbd_xmit cmd->status = BLK_STS_TIMEOUT; return BLK_EH_HANDLED; } - - /* If we are waiting on our dead timer then we could get timeout - * callbacks for our request. For this we just want to reset the timer - * and let the queue side take care of everything. - */ - if (!completion_done(&cmd->send_complete)) { - nbd_config_put(nbd); - return BLK_EH_RESET_TIMER; - } config = nbd->config;
if (config->num_connections > 1) { @@ -740,6 +731,7 @@ static int nbd_handle_cmd(struct nbd_cmd if (!refcount_inc_not_zero(&nbd->config_refs)) { dev_err_ratelimited(disk_to_dev(nbd->disk), "Socks array is empty\n"); + blk_mq_start_request(req); return -EINVAL; } config = nbd->config; @@ -748,6 +740,7 @@ static int nbd_handle_cmd(struct nbd_cmd dev_err_ratelimited(disk_to_dev(nbd->disk), "Attempted send on invalid socket\n"); nbd_config_put(nbd); + blk_mq_start_request(req); return -EINVAL; } cmd->status = BLK_STS_OK; @@ -771,6 +764,7 @@ again: */ sock_shutdown(nbd); nbd_config_put(nbd); + blk_mq_start_request(req); return -EIO; } goto again; @@ -781,6 +775,7 @@ again: * here so that it gets put _after_ the request that is already on the * dispatch list. */ + blk_mq_start_request(req); if (unlikely(nsock->pending && nsock->pending != req)) { blk_mq_requeue_request(req, true); ret = 0; @@ -793,10 +788,10 @@ again: ret = nbd_send_cmd(nbd, cmd, index); if (ret == -EAGAIN) { dev_err_ratelimited(disk_to_dev(nbd->disk), - "Request send failed trying another connection\n"); + "Request send failed, requeueing\n"); nbd_mark_nsock_dead(nbd, nsock, 1); - mutex_unlock(&nsock->tx_lock); - goto again; + blk_mq_requeue_request(req, true); + ret = 0; } out: mutex_unlock(&nsock->tx_lock); @@ -820,7 +815,6 @@ static blk_status_t nbd_queue_rq(struct * done sending everything over the wire. */ init_completion(&cmd->send_complete); - blk_mq_start_request(bd->rq);
/* We can be called directly from the user space process, which means we * could possibly have signals pending so our sendmsg will fail. In
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tobias Jordan Tobias.Jordan@elektrobit.com
commit 7978db344719dab1e56d05e6fc04aaaddcde0a5e upstream.
The for_each_available_child_of_node() loop in _of_add_opp_table_v2() doesn't drop the reference to "np" on errors. Fix that.
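For reference, the usual shape of the pattern being fixed (a minimal sketch, not the driver's actual code; opp_np and do_something() are placeholders): the iterator holds a reference on each child node it hands out, so any early exit from the loop has to drop that reference explicitly.

	struct device_node *np;
	int ret = 0;

	for_each_available_child_of_node(opp_np, np) {
		ret = do_something(np);		/* hypothetical per-node work */
		if (ret) {
			of_node_put(np);	/* balance the reference held by the iterator */
			break;
		}
	}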
Fixes: 274659029c9d (PM / OPP: Add support to parse "operating-points-v2" bindings) Signed-off-by: Tobias Jordan Tobias.Jordan@elektrobit.com [ VK: Improved commit log. ] Signed-off-by: Viresh Kumar viresh.kumar@linaro.org Reviewed-by: Stephen Boyd sboyd@codeaurora.org Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/base/power/opp/of.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/base/power/opp/of.c +++ b/drivers/base/power/opp/of.c @@ -397,6 +397,7 @@ static int _of_add_opp_table_v2(struct d dev_err(dev, "%s: Failed to add OPP, %d\n", __func__, ret); _dev_pm_opp_remove_table(opp_table, dev, false); + of_node_put(np); goto put_opp_table; } }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bjorn Helgaas bhelgaas@google.com
commit 94ac327e043ee40d7fc57b54541da50507ef4e99 upstream.
Every Port that supports the L1.2 substate advertises its Port Common_Mode_Restore_Time, i.e., the time the Port requires to re-establish common mode when exiting L1.2 (see PCIe r3.1, sec 7.33.2).
Per sec 5.5.3.3.1, when exiting L1.2, the Downstream Port (the device at the upstream end of the link) must send TS1 training sequences for at least T(COMMONMODE) after it detects electrical idle exit on the Link. We want this to be long enough for both ends of the Link, so we should set it to the maximum of the Port Common_Mode_Restore_Time for the upstream and downstream components on the Link.
Previously we only looked at the Port Common_Mode_Restore_Time of the upstream device, so if the downstream device required more time, we didn't program the upstream device's T(COMMONMODE) correctly.
Fixes: f1f0366dd6be ("PCI/ASPM: Calculate and save the L1.2 timing parameters") Signed-off-by: Bjorn Helgaas bhelgaas@google.com Reviewed-by: Vidya Sagar vidyas@nvidia.com Acked-by: Rajat Jain rajatja@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/pcie/aspm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/pci/pcie/aspm.c +++ b/drivers/pci/pcie/aspm.c @@ -453,7 +453,7 @@ static void aspm_calc_l1ss_info(struct p
/* Choose the greater of the two T_cmn_mode_rstr_time */ val1 = (upreg->l1ss_cap >> 8) & 0xFF; - val2 = (upreg->l1ss_cap >> 8) & 0xFF; + val2 = (dwreg->l1ss_cap >> 8) & 0xFF; if (val1 > val2) link->l1ss.ctl1 |= val1 << 8; else
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bjorn Helgaas bhelgaas@google.com
commit c00054f540bf81e592e1fee709b0bdbf20f478b5 upstream.
Previously we programmed the LTR_L1.2_THRESHOLD in the parent (upstream) device using the capability pointer of the *child* (downstream) device, which corrupted some random word of the parent's config space.
Use the parent's L1 SS capability pointer to program its LTR_L1.2_THRESHOLD.
Fixes: aeda9adebab8 ("PCI/ASPM: Configure L1 substate settings") Signed-off-by: Bjorn Helgaas bhelgaas@google.com Reviewed-by: Vidya Sagar vidyas@nvidia.com CC: Rajat Jain rajatja@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/pcie/aspm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/pci/pcie/aspm.c +++ b/drivers/pci/pcie/aspm.c @@ -658,7 +658,7 @@ static void pcie_config_aspm_l1ss(struct 0xFF00, link->l1ss.ctl1);
/* Program LTR L1.2 threshold in both ports */ - pci_clear_and_set_dword(parent, dw_cap_ptr + PCI_L1SS_CTL1, + pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1, 0xE3FF0000, link->l1ss.ctl1); pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1, 0xE3FF0000, link->l1ss.ctl1);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dexuan Cui decui@microsoft.com
commit 79aa801e899417a56863d6713f76c4e108856000 upstream.
The effective_affinity_mask is always set when an interrupt is assigned in __assign_irq_vector() -> apic->cpu_mask_to_apicid(), e.g. for struct apic apic_physflat: -> default_cpu_mask_to_apicid() -> irq_data_update_effective_affinity(), but it looks like d->common->affinity remains all-1's until user space or the kernel changes it later.
In the early allocation/initialization phase of an IRQ, we should use the effective_affinity_mask, otherwise Hyper-V may not deliver the interrupt to the expected CPU. Without the patch, if we assign 7 Mellanox ConnectX-3 VFs to a 32-vCPU VM, one of the VFs may fail to receive interrupts.
Tested-by: Adrian Suhov v-adsuho@microsoft.com Signed-off-by: Dexuan Cui decui@microsoft.com Signed-off-by: Bjorn Helgaas bhelgaas@google.com Reviewed-by: Jake Oshins jakeo@microsoft.com Cc: Jork Loeser jloeser@microsoft.com Cc: Stephen Hemminger sthemmin@microsoft.com Cc: K. Y. Srinivasan kys@microsoft.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/host/pci-hyperv.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/drivers/pci/host/pci-hyperv.c +++ b/drivers/pci/host/pci-hyperv.c @@ -879,7 +879,7 @@ static void hv_irq_unmask(struct irq_dat int cpu; u64 res;
- dest = irq_data_get_affinity_mask(data); + dest = irq_data_get_effective_affinity_mask(data); pdev = msi_desc_to_pci_dev(msi_desc); pbus = pdev->bus; hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata); @@ -1042,6 +1042,7 @@ static void hv_compose_msi_msg(struct ir struct hv_pci_dev *hpdev; struct pci_bus *pbus; struct pci_dev *pdev; + struct cpumask *dest; struct compose_comp_ctxt comp; struct tran_int_desc *int_desc; struct { @@ -1056,6 +1057,7 @@ static void hv_compose_msi_msg(struct ir int ret;
pdev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data)); + dest = irq_data_get_effective_affinity_mask(data); pbus = pdev->bus; hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata); hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(pdev->devfn)); @@ -1081,14 +1083,14 @@ static void hv_compose_msi_msg(struct ir switch (pci_protocol_version) { case PCI_PROTOCOL_VERSION_1_1: size = hv_compose_msi_req_v1(&ctxt.int_pkts.v1, - irq_data_get_affinity_mask(data), + dest, hpdev->desc.win_slot.slot, cfg->vector); break;
case PCI_PROTOCOL_VERSION_1_2: size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2, - irq_data_get_affinity_mask(data), + dest, hpdev->desc.win_slot.slot, cfg->vector); break;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vadim Lomovtsev Vadim.Lomovtsev@cavium.com
commit 7f342678634f16795892677204366e835e450dda upstream.
The Cavium ThunderX (CN8XXX) family of PCIe Root Ports does not advertise an ACS capability. However, the RTL internally implements similar protection as if ACS had Request Redirection, Completion Redirection, Source Validation, and Upstream Forwarding features enabled.
Change Cavium ACS capabilities quirk flags accordingly.
Fixes: b404bcfbf035 ("PCI: Add ACS quirk for all Cavium devices") Signed-off-by: Vadim Lomovtsev Vadim.Lomovtsev@cavium.com [bhelgaas: tidy changelog, comment, stable tag] Signed-off-by: Bjorn Helgaas bhelgaas@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/quirks.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-)
--- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -4215,12 +4215,14 @@ static int pci_quirk_amd_sb_acs(struct p static int pci_quirk_cavium_acs(struct pci_dev *dev, u16 acs_flags) { /* - * Cavium devices matching this quirk do not perform peer-to-peer - * with other functions, allowing masking out these bits as if they - * were unimplemented in the ACS capability. + * Cavium root ports don't advertise an ACS capability. However, + * the RTL internally implements similar protection as if ACS had + * Request Redirection, Completion Redirection, Source Validation, + * and Upstream Forwarding features enabled. Assert that the + * hardware implements and enables equivalent ACS functionality for + * these flags. */ - acs_flags &= ~(PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR | - PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT); + acs_flags &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_SV | PCI_ACS_UF);
if (!((dev->device >= 0xa000) && (dev->device <= 0xa0ff))) return -ENOTTY;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vadim Lomovtsev Vadim.Lomovtsev@cavium.com
commit f2ddaf8dfd4a5071ad09074d2f95ab85d35c8a1e upstream.
Extend the Cavium ThunderX ACS quirk to cover more device IDs and restrict it to only Root Ports.
Signed-off-by: Vadim Lomovtsev Vadim.Lomovtsev@cavium.com [bhelgaas: changelog, stable tag] Signed-off-by: Bjorn Helgaas bhelgaas@google.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/pci/quirks.c | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-)
--- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -4212,6 +4212,19 @@ static int pci_quirk_amd_sb_acs(struct p #endif }
+static bool pci_quirk_cavium_acs_match(struct pci_dev *dev) +{ + /* + * Effectively selects all downstream ports for whole ThunderX 1 + * family by 0xf800 mask (which represents 8 SoCs), while the lower + * bits of device ID are used to indicate which subdevice is used + * within the SoC. + */ + return (pci_is_pcie(dev) && + (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) && + ((dev->device & 0xf800) == 0xa000)); +} + static int pci_quirk_cavium_acs(struct pci_dev *dev, u16 acs_flags) { /* @@ -4224,7 +4237,7 @@ static int pci_quirk_cavium_acs(struct p */ acs_flags &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_SV | PCI_ACS_UF);
- if (!((dev->device >= 0xa000) && (dev->device <= 0xa0ff))) + if (!pci_quirk_cavium_acs_match(dev)) return -ENOTTY;
return acs_flags ? 0 : 1;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vijendar Mukunda Vijendar.Mukunda@amd.com
commit 9ceace3c9c18c67676e75141032a65a8e01f9a7a upstream.
This commit adds the PCI ID for the AMD Raven platform.
Signed-off-by: Vijendar Mukunda Vijendar.Mukunda@amd.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/hda_intel.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/sound/pci/hda/hda_intel.c +++ b/sound/pci/hda/hda_intel.c @@ -2463,6 +2463,9 @@ static const struct pci_device_id azx_id /* AMD Hudson */ { PCI_DEVICE(0x1022, 0x780d), .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB }, + /* AMD Raven */ + { PCI_DEVICE(0x1022, 0x15e3), + .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB }, /* ATI HDMI */ { PCI_DEVICE(0x1002, 0x0002), .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Joe Thornber ejt@redhat.com
commit d1260e2a3f85f4c1010510a15f89597001318b1b upstream.
When a DM cache in writeback mode moves data between the slow and fast device it can often avoid a copy if the triggering bio either:
  i) covers the whole block (no point copying if we're about to overwrite it)
 ii) the migration is a promotion and the origin block is currently discarded
Prior to this fix there was a race with case (ii). The discard status was checked with a shared lock held (rather than exclusive). This meant another bio could run in parallel and write data to the origin, removing the discard state. After the promotion the parallel write would have been lost.
With this fix the discard status is re-checked once the exclusive lock has been acquired. If the block is no longer discarded it falls back to the slower full copy path.
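In outline, the re-check looks roughly like this (a simplified sketch; optimisable_bio() is the real helper, the other names stand in for the migration machinery):

	if (optimisable_bio(cache, bio, block)) {
		/* decision made under the shared lock */
		take_exclusive_lock(block);			/* hypothetical */
		if (optimisable_bio(cache, bio, block))
			overwrite_instead_of_copy(bio);		/* shortcut still valid */
		else
			full_copy(block);	/* block written meanwhile: slow path */
	}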
Fixes: b29d4986d ("dm cache: significant rework to leverage dm-bio-prison-v2") Signed-off-by: Joe Thornber ejt@redhat.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-cache-target.c | 86 ++++++++++++++++++++++++++----------------- 1 file changed, 53 insertions(+), 33 deletions(-)
--- a/drivers/md/dm-cache-target.c +++ b/drivers/md/dm-cache-target.c @@ -1201,6 +1201,18 @@ static void background_work_end(struct c
/*----------------------------------------------------------------*/
+static bool bio_writes_complete_block(struct cache *cache, struct bio *bio) +{ + return (bio_data_dir(bio) == WRITE) && + (bio->bi_iter.bi_size == (cache->sectors_per_block << SECTOR_SHIFT)); +} + +static bool optimisable_bio(struct cache *cache, struct bio *bio, dm_oblock_t block) +{ + return writeback_mode(&cache->features) && + (is_discarded_oblock(cache, block) || bio_writes_complete_block(cache, bio)); +} + static void quiesce(struct dm_cache_migration *mg, void (*continuation)(struct work_struct *)) { @@ -1474,13 +1486,51 @@ static void mg_upgrade_lock(struct work_ } }
+static void mg_full_copy(struct work_struct *ws) +{ + struct dm_cache_migration *mg = ws_to_mg(ws); + struct cache *cache = mg->cache; + struct policy_work *op = mg->op; + bool is_policy_promote = (op->op == POLICY_PROMOTE); + + if ((!is_policy_promote && !is_dirty(cache, op->cblock)) || + is_discarded_oblock(cache, op->oblock)) { + mg_upgrade_lock(ws); + return; + } + + init_continuation(&mg->k, mg_upgrade_lock); + + if (copy(mg, is_policy_promote)) { + DMERR_LIMIT("%s: migration copy failed", cache_device_name(cache)); + mg->k.input = BLK_STS_IOERR; + mg_complete(mg, false); + } +} + static void mg_copy(struct work_struct *ws) { - int r; struct dm_cache_migration *mg = ws_to_mg(ws);
if (mg->overwrite_bio) { /* + * No exclusive lock was held when we last checked if the bio + * was optimisable. So we have to check again in case things + * have changed (eg, the block may no longer be discarded). + */ + if (!optimisable_bio(mg->cache, mg->overwrite_bio, mg->op->oblock)) { + /* + * Fallback to a real full copy after doing some tidying up. + */ + bool rb = bio_detain_shared(mg->cache, mg->op->oblock, mg->overwrite_bio); + BUG_ON(rb); /* An exclussive lock must _not_ be held for this block */ + mg->overwrite_bio = NULL; + inc_io_migrations(mg->cache); + mg_full_copy(ws); + return; + } + + /* * It's safe to do this here, even though it's new data * because all IO has been locked out of the block. * @@ -1489,26 +1539,8 @@ static void mg_copy(struct work_struct * */ overwrite(mg, mg_update_metadata_after_copy);
- } else { - struct cache *cache = mg->cache; - struct policy_work *op = mg->op; - bool is_policy_promote = (op->op == POLICY_PROMOTE); - - if ((!is_policy_promote && !is_dirty(cache, op->cblock)) || - is_discarded_oblock(cache, op->oblock)) { - mg_upgrade_lock(ws); - return; - } - - init_continuation(&mg->k, mg_upgrade_lock); - - r = copy(mg, is_policy_promote); - if (r) { - DMERR_LIMIT("%s: migration copy failed", cache_device_name(cache)); - mg->k.input = BLK_STS_IOERR; - mg_complete(mg, false); - } - } + } else + mg_full_copy(ws); }
static int mg_lock_writes(struct dm_cache_migration *mg) @@ -1748,18 +1780,6 @@ static void inc_miss_counter(struct cach
/*----------------------------------------------------------------*/
-static bool bio_writes_complete_block(struct cache *cache, struct bio *bio) -{ - return (bio_data_dir(bio) == WRITE) && - (bio->bi_iter.bi_size == (cache->sectors_per_block << SECTOR_SHIFT)); -} - -static bool optimisable_bio(struct cache *cache, struct bio *bio, dm_oblock_t block) -{ - return writeback_mode(&cache->features) && - (is_discarded_oblock(cache, block) || bio_writes_complete_block(cache, bio)); -} - static int map_bio(struct cache *cache, struct bio *bio, dm_oblock_t block, bool *commit_needed) {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Damien Le Moal damien.lemoal@wdc.com
commit 114e025968b5990ad0b57bf60697ea64ee206aac upstream.
The SCSI layer allows ZBC drives to have a smaller last runt zone. For such a device, specifying the entire capacity for a dm-zoned target table entry fails because the specified capacity is not aligned on a device zone size indicated in the request queue structure of the device.
Fix this problem by ignoring the last runt zone in the entry length when setting up the dm-zoned target (ctr method) and when iterating table entries of the target (iterate_devices method). This allows dm-zoned users to still easily setup a target using the entire device capacity (as mandated by dm-zoned) or the aligned capacity excluding the last runt zone.
While at it, replace direct references to the device queue chunk_sectors limit with calls to the accessor blk_queue_zone_sectors().
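The round-down itself is a one-liner (sketch; assumes the zone size is a power of two, as the block layer requires for zoned devices):

	q = bdev_get_queue(dev->bdev);
	zone_sectors = blk_queue_zone_sectors(q);
	aligned_capacity = capacity & ~(zone_sectors - 1);	/* drop the runt zone */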
Reported-by: Peter Desnoyers pjd@ccs.neu.edu Signed-off-by: Damien Le Moal damien.lemoal@wdc.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-zoned-target.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-)
--- a/drivers/md/dm-zoned-target.c +++ b/drivers/md/dm-zoned-target.c @@ -660,6 +660,7 @@ static int dmz_get_zoned_device(struct d struct dmz_target *dmz = ti->private; struct request_queue *q; struct dmz_dev *dev; + sector_t aligned_capacity; int ret;
/* Get the target device */ @@ -685,15 +686,17 @@ static int dmz_get_zoned_device(struct d goto err; }
+ q = bdev_get_queue(dev->bdev); dev->capacity = i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT; - if (ti->begin || (ti->len != dev->capacity)) { + aligned_capacity = dev->capacity & ~(blk_queue_zone_sectors(q) - 1); + if (ti->begin || + ((ti->len != dev->capacity) && (ti->len != aligned_capacity))) { ti->error = "Partial mapping not supported"; ret = -EINVAL; goto err; }
- q = bdev_get_queue(dev->bdev); - dev->zone_nr_sectors = q->limits.chunk_sectors; + dev->zone_nr_sectors = blk_queue_zone_sectors(q); dev->zone_nr_sectors_shift = ilog2(dev->zone_nr_sectors);
dev->zone_nr_blocks = dmz_sect2blk(dev->zone_nr_sectors); @@ -929,8 +932,10 @@ static int dmz_iterate_devices(struct dm iterate_devices_callout_fn fn, void *data) { struct dmz_target *dmz = ti->private; + struct dmz_dev *dev = dmz->dev; + sector_t capacity = dev->capacity & ~(dev->zone_nr_sectors - 1);
- return fn(ti, dmz->ddev, 0, dmz->dev->capacity, data); + return fn(ti, dmz->ddev, 0, capacity, data); }
static struct target_type dmz_type = {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ming Lei ming.lei@redhat.com
commit 9dc112e2daf87b40607fd8d357e2d7de32290d45 upstream.
It is very normal to see allocation failure, especially with blk-mq request_queues, so it's unnecessary to report this error and annoy people.
In practice this 'blk_get_request() returned -11' error gets logged quite frequently when a blk-mq DM multipath device sees heavy IO.
This change is marked for stable@ because the annoying message in question was included in stable@ commit 7083abbbf.
Fixes: 7083abbbf ("dm mpath: avoid that path removal can trigger an infinite loop") Signed-off-by: Ming Lei ming.lei@redhat.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-mpath.c | 2 -- 1 file changed, 2 deletions(-)
--- a/drivers/md/dm-mpath.c +++ b/drivers/md/dm-mpath.c @@ -499,8 +499,6 @@ static int multipath_clone_and_map(struc if (IS_ERR(clone)) { /* EBUSY, ENODEV or EWOULDBLOCK: requeue */ bool queue_dying = blk_queue_dying(q); - DMERR_LIMIT("blk_get_request() returned %ld%s - requeuing", - PTR_ERR(clone), queue_dying ? " (path offline)" : ""); if (queue_dying) { atomic_inc(&m->pg_init_in_progress); activate_or_offline_path(pgpath);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers ebiggers@google.com
commit 74d4108d9e681dbbe4a2940ed8fdff1f6868184c upstream.
The default max_cache_size_bytes for dm-bufio is meant to be the lesser of 25% of the size of the vmalloc area and 2% of the size of lowmem. However, on 32-bit systems the intermediate result in the expression
(VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100
overflows, causing the wrong result to be computed. For example, on a 32-bit system where the vmalloc area is 520093696 bytes, the result is 1174405 rather than the expected 130023424, which makes the maximum cache size much too small (far less than 2% of lowmem). This causes severe performance problems for dm-verity users on affected systems.
Fix this by using mult_frac() to correctly multiply by a percentage. Do this for all places in dm-bufio that multiply by a percentage. Also replace (VMALLOC_END - VMALLOC_START) with VMALLOC_TOTAL, which contrary to the comment is now defined in include/linux/vmalloc.h.
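To make the numbers concrete (a sketch of the arithmetic, assuming a 32-bit unsigned long): 520093696 * 25 = 13002342400, which wraps modulo 2^32 to 117440512, and 117440512 / 100 = 1174405. mult_frac() divides before multiplying, so no intermediate overflows:

	/*
	 * mult_frac(x, numer, denom) expands to roughly:
	 *   quot = x / denom;
	 *   rem  = x % denom;
	 *   quot * numer + (rem * numer) / denom;
	 * With x = 520093696, numer = 25, denom = 100 this gives
	 * 5200936 * 25 + (96 * 25) / 100 = 130023400 + 24 = 130023424.
	 */
	mem = mult_frac(VMALLOC_TOTAL, DM_BUFIO_VMALLOC_PERCENT, 100);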
Depends-on: 9993bc635 ("sched/x86: Fix overflow in cyc2ns_offset") Fixes: 95d402f057f2 ("dm: add bufio") Signed-off-by: Eric Biggers ebiggers@google.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-bufio.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-)
--- a/drivers/md/dm-bufio.c +++ b/drivers/md/dm-bufio.c @@ -974,7 +974,8 @@ static void __get_memory_limit(struct dm buffers = c->minimum_buffers;
*limit_buffers = buffers; - *threshold_buffers = buffers * DM_BUFIO_WRITEBACK_PERCENT / 100; + *threshold_buffers = mult_frac(buffers, + DM_BUFIO_WRITEBACK_PERCENT, 100); }
/* @@ -1910,19 +1911,15 @@ static int __init dm_bufio_init(void) memset(&dm_bufio_caches, 0, sizeof dm_bufio_caches); memset(&dm_bufio_cache_names, 0, sizeof dm_bufio_cache_names);
- mem = (__u64)((totalram_pages - totalhigh_pages) * - DM_BUFIO_MEMORY_PERCENT / 100) << PAGE_SHIFT; + mem = (__u64)mult_frac(totalram_pages - totalhigh_pages, + DM_BUFIO_MEMORY_PERCENT, 100) << PAGE_SHIFT;
if (mem > ULONG_MAX) mem = ULONG_MAX;
#ifdef CONFIG_MMU - /* - * Get the size of vmalloc space the same way as VMALLOC_TOTAL - * in fs/proc/internal.h - */ - if (mem > (VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100) - mem = (VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100; + if (mem > mult_frac(VMALLOC_TOTAL, DM_BUFIO_VMALLOC_PERCENT, 100)) + mem = mult_frac(VMALLOC_TOTAL, DM_BUFIO_VMALLOC_PERCENT, 100); #endif
dm_bufio_default_cache_size = mem;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vivek Goyal vgoyal@redhat.com
commit 5455f92b54e516995a9ca45bbf790d3629c27a93 upstream.
If ovl_check_origin() fails, we should put upperdentry. We have a reference on it by now. So goto out_put_upper instead of out.
Fixes: a9d019573e88 ("ovl: lookup non-dir copy-up-origin by file handle") Signed-off-by: Vivek Goyal vgoyal@redhat.com Signed-off-by: Miklos Szeredi mszeredi@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/overlayfs/namei.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/overlayfs/namei.c +++ b/fs/overlayfs/namei.c @@ -630,7 +630,7 @@ struct dentry *ovl_lookup(struct inode * err = ovl_check_origin(upperdentry, roe->lowerstack, roe->numlower, &stack, &ctr); if (err) - goto out; + goto out_put_upper; }
if (d.redirect) {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mikulas Patocka mpatocka@redhat.com
commit 856eb0916d181da6d043cc33e03f54d5c5bbe54a upstream.
The structure srcu_struct can be very big; its size is proportional to the value of CONFIG_NR_CPUS. The Fedora kernel has CONFIG_NR_CPUS 8192, so the io_barrier field in struct mapped_device takes 84kB in the debugging kernel and 50kB in the non-debugging kernel. The large size may result in failure of kzalloc_node.
In order to avoid the allocation failure, use kvzalloc_node(), which falls back to vmalloc if a large contiguous chunk of memory is not available. This patch also moves the io_barrier field to the last position of struct mapped_device; the reason is that on many processor architectures short memory offsets result in smaller code than long memory offsets - on x86-64 this reduces code size by 320 bytes.
Note to stable kernel maintainers - the kernels 4.11 and older don't have the function kvzalloc_node, you can use the function vzalloc_node instead.
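For those older trees, the equivalent change would look roughly like this (untested sketch):

	md = vzalloc_node(sizeof(*md), numa_node_id);
	/* ... */
	vfree(md);	/* in place of kfree(md) on the error and teardown paths */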
Signed-off-by: Mikulas Patocka mpatocka@redhat.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-core.h | 3 ++- drivers/md/dm.c | 6 +++--- 2 files changed, 5 insertions(+), 4 deletions(-)
--- a/drivers/md/dm-core.h +++ b/drivers/md/dm-core.h @@ -29,7 +29,6 @@ struct dm_kobject_holder { * DM targets must _not_ deference a mapped_device to directly access its members! */ struct mapped_device { - struct srcu_struct io_barrier; struct mutex suspend_lock;
/* @@ -127,6 +126,8 @@ struct mapped_device { struct blk_mq_tag_set *tag_set; bool use_blk_mq:1; bool init_tio_pdu:1; + + struct srcu_struct io_barrier; };
void dm_init_md_queue(struct mapped_device *md); --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -1695,7 +1695,7 @@ static struct mapped_device *alloc_dev(i struct mapped_device *md; void *old_md;
- md = kzalloc_node(sizeof(*md), GFP_KERNEL, numa_node_id); + md = kvzalloc_node(sizeof(*md), GFP_KERNEL, numa_node_id); if (!md) { DMWARN("unable to allocate device, out of memory."); return NULL; @@ -1795,7 +1795,7 @@ bad_io_barrier: bad_minor: module_put(THIS_MODULE); bad_module_get: - kfree(md); + kvfree(md); return NULL; }
@@ -1814,7 +1814,7 @@ static void free_dev(struct mapped_devic free_minor(minor);
module_put(THIS_MODULE); - kfree(md); + kvfree(md); }
static void __bind_mempools(struct mapped_device *md, struct dm_table *t)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Steven Rostedt (Red Hat) rostedt@goodmis.org
commit 4bdced5c9a2922521e325896a7bbbf0132c94e56 upstream.
When a CPU lowers its priority (schedules out a high priority task for a lower priority one), a check is made to see if any other CPU has overloaded RT tasks (more than one). It checks the rto_mask to determine this and if so it will request to pull one of those tasks to itself if the non-running RT task is of higher priority than the new priority of the next task to run on the current CPU.
When we deal with large number of CPUs, the original pull logic suffered from large lock contention on a single CPU run queue, which caused a huge latency across all CPUs. This was caused by only having one CPU having overloaded RT tasks and a bunch of other CPUs lowering their priority. To solve this issue, commit:
b6366f048e0c ("sched/rt: Use IPI to trigger RT task push migration instead of pulling")
changed the way to request a pull. Instead of grabbing the lock of the overloaded CPU's runqueue, it simply sent an IPI to that CPU to do the work.
Although the IPI logic worked very well in removing the large latency build up, it still could suffer from a large number of IPIs being sent to a single CPU. On a 80 CPU box, I measured over 200us of processing IPIs. Worse yet, when I tested this on a 120 CPU box, with a stress test that had lots of RT tasks scheduling on all CPUs, it actually triggered the hard lockup detector! One CPU had so many IPIs sent to it, and due to the restart mechanism that is triggered when the source run queue has a priority status change, the CPU spent minutes! processing the IPIs.
Thinking about this further, I realized there's no reason for each run queue to send its own IPI. All CPUs with overloaded tasks must be scanned regardless of whether one or many CPUs are lowering their priority, because there's no current way to find the CPU with the highest priority task that can schedule to one of these CPUs. So there really only needs to be one IPI being sent around at a time.
This greatly simplifies the code!
The new approach is to have each root domain have its own irq work, as the rto_mask is per root domain. The root domain has the following fields attached to it:
rto_push_work  - the irq work to process each CPU set in rto_mask
rto_lock       - the lock to protect some of the other rto fields
rto_loop_start - an atomic that keeps contention down on rto_lock;
                 the first CPU scheduling in a lower priority task
                 is the one to kick off the process.
rto_loop_next  - an atomic that gets incremented for each CPU that
                 schedules in a lower priority task.
rto_loop       - a variable protected by rto_lock that is used to
                 compare against rto_loop_next
rto_cpu        - the CPU to send the next IPI to, also protected by
                 the rto_lock.
When a CPU schedules in a lower priority task and wants to make sure overloaded CPUs know about it, it increments rto_loop_next. Then it atomically sets rto_loop_start with a cmpxchg. If the old value is not "0", then it is done, as another CPU is kicking off the IPI loop. If the old value is "0", then it will take the rto_lock to synchronize with a possible IPI being sent around to the overloaded CPUs.
If rto_cpu is greater than or equal to nr_cpu_ids, then there's either no IPI being sent around, or one is about to finish. Then rto_cpu is set to the first CPU in rto_mask and an IPI is sent to that CPU. If no CPUs are set in rto_mask, then there's nothing to be done.
When the CPU receives the IPI, it will first try to push any RT tasks that are queued on the CPU but can't run because a higher priority RT task is currently running on that CPU.
Then it takes the rto_lock and looks for the next CPU in the rto_mask. If it finds one, it simply sends an IPI to that CPU and the process continues.
If there are no more CPUs in the rto_mask, then rto_loop is compared with rto_loop_next. If they match, everything is done and the process is over. If they do not match, then a CPU scheduled in a lower priority task while the IPI was being passed around, and the process needs to start again. The first CPU in rto_mask is sent the IPI.
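Put together, the start of the loop described above looks roughly like this (a sketch using the field and helper names from this changelog; the function name is illustrative, not the exact code in the hunks below):

	static void kick_rto_push_loop(struct rq *rq)		/* hypothetical name */
	{
		struct root_domain *rd = rq->rd;
		int cpu = -1;

		atomic_inc(&rd->rto_loop_next);		/* force at least one more full scan */

		if (!rto_start_trylock(&rd->rto_loop_start))
			return;				/* another CPU already owns the loop */

		raw_spin_lock(&rd->rto_lock);
		if (rd->rto_cpu < 0)			/* idle sentinel: no IPI in flight */
			cpu = rto_next_cpu(rq);		/* first CPU in rto_mask, or none */
		raw_spin_unlock(&rd->rto_lock);

		rto_start_unlock(&rd->rto_loop_start);

		if (cpu >= 0)
			irq_work_queue_on(&rd->rto_push_work, cpu);
	}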
This change removes this duplication of work in the IPI logic, and greatly lowers the latency caused by the IPIs. This removed the lockup happening on the 120 CPU machine. It also simplifies the code tremendously. What else could anyone ask for?
Thanks to Peter Zijlstra for simplifying the rto_loop_start atomic logic and supplying me with the rto_start_trylock() and rto_start_unlock() helper functions.
Signed-off-by: Steven Rostedt (VMware) rostedt@goodmis.org Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Cc: Clark Williams williams@redhat.com Cc: Daniel Bristot de Oliveira bristot@redhat.com Cc: John Kacur jkacur@redhat.com Cc: Linus Torvalds torvalds@linux-foundation.org Cc: Mike Galbraith efault@gmx.de Cc: Peter Zijlstra peterz@infradead.org Cc: Scott Wood swood@redhat.com Cc: Thomas Gleixner tglx@linutronix.de Link: http://lkml.kernel.org/r/20170424114732.1aac6dc4@gandalf.local.home Signed-off-by: Ingo Molnar mingo@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/sched/rt.c | 316 +++++++++++++++++------------------------------- kernel/sched/sched.h | 24 ++- kernel/sched/topology.c | 6 3 files changed, 138 insertions(+), 208 deletions(-)
--- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -74,10 +74,6 @@ static void start_rt_bandwidth(struct rt raw_spin_unlock(&rt_b->rt_runtime_lock); }
-#if defined(CONFIG_SMP) && defined(HAVE_RT_PUSH_IPI) -static void push_irq_work_func(struct irq_work *work); -#endif - void init_rt_rq(struct rt_rq *rt_rq) { struct rt_prio_array *array; @@ -97,13 +93,6 @@ void init_rt_rq(struct rt_rq *rt_rq) rt_rq->rt_nr_migratory = 0; rt_rq->overloaded = 0; plist_head_init(&rt_rq->pushable_tasks); - -#ifdef HAVE_RT_PUSH_IPI - rt_rq->push_flags = 0; - rt_rq->push_cpu = nr_cpu_ids; - raw_spin_lock_init(&rt_rq->push_lock); - init_irq_work(&rt_rq->push_work, push_irq_work_func); -#endif #endif /* CONFIG_SMP */ /* We start is dequeued state, because no RT tasks are queued */ rt_rq->rt_queued = 0; @@ -1876,241 +1865,166 @@ static void push_rt_tasks(struct rq *rq) }
#ifdef HAVE_RT_PUSH_IPI + /* - * The search for the next cpu always starts at rq->cpu and ends - * when we reach rq->cpu again. It will never return rq->cpu. - * This returns the next cpu to check, or nr_cpu_ids if the loop - * is complete. + * When a high priority task schedules out from a CPU and a lower priority + * task is scheduled in, a check is made to see if there's any RT tasks + * on other CPUs that are waiting to run because a higher priority RT task + * is currently running on its CPU. In this case, the CPU with multiple RT + * tasks queued on it (overloaded) needs to be notified that a CPU has opened + * up that may be able to run one of its non-running queued RT tasks. + * + * All CPUs with overloaded RT tasks need to be notified as there is currently + * no way to know which of these CPUs have the highest priority task waiting + * to run. Instead of trying to take a spinlock on each of these CPUs, + * which has shown to cause large latency when done on machines with many + * CPUs, sending an IPI to the CPUs to have them push off the overloaded + * RT tasks waiting to run. + * + * Just sending an IPI to each of the CPUs is also an issue, as on large + * count CPU machines, this can cause an IPI storm on a CPU, especially + * if its the only CPU with multiple RT tasks queued, and a large number + * of CPUs scheduling a lower priority task at the same time. + * + * Each root domain has its own irq work function that can iterate over + * all CPUs with RT overloaded tasks. Since all CPUs with overloaded RT + * tassk must be checked if there's one or many CPUs that are lowering + * their priority, there's a single irq work iterator that will try to + * push off RT tasks that are waiting to run. + * + * When a CPU schedules a lower priority task, it will kick off the + * irq work iterator that will jump to each CPU with overloaded RT tasks. + * As it only takes the first CPU that schedules a lower priority task + * to start the process, the rto_start variable is incremented and if + * the atomic result is one, then that CPU will try to take the rto_lock. + * This prevents high contention on the lock as the process handles all + * CPUs scheduling lower priority tasks. + * + * All CPUs that are scheduling a lower priority task will increment the + * rt_loop_next variable. This will make sure that the irq work iterator + * checks all RT overloaded CPUs whenever a CPU schedules a new lower + * priority task, even if the iterator is in the middle of a scan. Incrementing + * the rt_loop_next will cause the iterator to perform another scan. * - * rq->rt.push_cpu holds the last cpu returned by this function, - * or if this is the first instance, it must hold rq->cpu. */ static int rto_next_cpu(struct rq *rq) { - int prev_cpu = rq->rt.push_cpu; + struct root_domain *rd = rq->rd; + int next; int cpu;
- cpu = cpumask_next(prev_cpu, rq->rd->rto_mask); - /* - * If the previous cpu is less than the rq's CPU, then it already - * passed the end of the mask, and has started from the beginning. - * We end if the next CPU is greater or equal to rq's CPU. + * When starting the IPI RT pushing, the rto_cpu is set to -1, + * rt_next_cpu() will simply return the first CPU found in + * the rto_mask. + * + * If rto_next_cpu() is called with rto_cpu is a valid cpu, it + * will return the next CPU found in the rto_mask. + * + * If there are no more CPUs left in the rto_mask, then a check is made + * against rto_loop and rto_loop_next. rto_loop is only updated with + * the rto_lock held, but any CPU may increment the rto_loop_next + * without any locking. */ - if (prev_cpu < rq->cpu) { - if (cpu >= rq->cpu) - return nr_cpu_ids; + for (;;) {
- } else if (cpu >= nr_cpu_ids) { - /* - * We passed the end of the mask, start at the beginning. - * If the result is greater or equal to the rq's CPU, then - * the loop is finished. - */ - cpu = cpumask_first(rq->rd->rto_mask); - if (cpu >= rq->cpu) - return nr_cpu_ids; - } - rq->rt.push_cpu = cpu; + /* When rto_cpu is -1 this acts like cpumask_first() */ + cpu = cpumask_next(rd->rto_cpu, rd->rto_mask);
- /* Return cpu to let the caller know if the loop is finished or not */ - return cpu; -} + rd->rto_cpu = cpu;
-static int find_next_push_cpu(struct rq *rq) -{ - struct rq *next_rq; - int cpu; + if (cpu < nr_cpu_ids) + return cpu;
- while (1) { - cpu = rto_next_cpu(rq); - if (cpu >= nr_cpu_ids) - break; - next_rq = cpu_rq(cpu); + rd->rto_cpu = -1; + + /* + * ACQUIRE ensures we see the @rto_mask changes + * made prior to the @next value observed. + * + * Matches WMB in rt_set_overload(). + */ + next = atomic_read_acquire(&rd->rto_loop_next);
- /* Make sure the next rq can push to this rq */ - if (next_rq->rt.highest_prio.next < rq->rt.highest_prio.curr) + if (rd->rto_loop == next) break; + + rd->rto_loop = next; }
- return cpu; + return -1; }
-#define RT_PUSH_IPI_EXECUTING 1 -#define RT_PUSH_IPI_RESTART 2 +static inline bool rto_start_trylock(atomic_t *v) +{ + return !atomic_cmpxchg_acquire(v, 0, 1); +}
-/* - * When a high priority task schedules out from a CPU and a lower priority - * task is scheduled in, a check is made to see if there's any RT tasks - * on other CPUs that are waiting to run because a higher priority RT task - * is currently running on its CPU. In this case, the CPU with multiple RT - * tasks queued on it (overloaded) needs to be notified that a CPU has opened - * up that may be able to run one of its non-running queued RT tasks. - * - * On large CPU boxes, there's the case that several CPUs could schedule - * a lower priority task at the same time, in which case it will look for - * any overloaded CPUs that it could pull a task from. To do this, the runqueue - * lock must be taken from that overloaded CPU. Having 10s of CPUs all fighting - * for a single overloaded CPU's runqueue lock can produce a large latency. - * (This has actually been observed on large boxes running cyclictest). - * Instead of taking the runqueue lock of the overloaded CPU, each of the - * CPUs that scheduled a lower priority task simply sends an IPI to the - * overloaded CPU. An IPI is much cheaper than taking an runqueue lock with - * lots of contention. The overloaded CPU will look to push its non-running - * RT task off, and if it does, it can then ignore the other IPIs coming - * in, and just pass those IPIs off to any other overloaded CPU. - * - * When a CPU schedules a lower priority task, it only sends an IPI to - * the "next" CPU that has overloaded RT tasks. This prevents IPI storms, - * as having 10 CPUs scheduling lower priority tasks and 10 CPUs with - * RT overloaded tasks, would cause 100 IPIs to go out at once. - * - * The overloaded RT CPU, when receiving an IPI, will try to push off its - * overloaded RT tasks and then send an IPI to the next CPU that has - * overloaded RT tasks. This stops when all CPUs with overloaded RT tasks - * have completed. Just because a CPU may have pushed off its own overloaded - * RT task does not mean it should stop sending the IPI around to other - * overloaded CPUs. There may be another RT task waiting to run on one of - * those CPUs that are of higher priority than the one that was just - * pushed. - * - * An optimization that could possibly be made is to make a CPU array similar - * to the cpupri array mask of all running RT tasks, but for the overloaded - * case, then the IPI could be sent to only the CPU with the highest priority - * RT task waiting, and that CPU could send off further IPIs to the CPU with - * the next highest waiting task. Since the overloaded case is much less likely - * to happen, the complexity of this implementation may not be worth it. - * Instead, just send an IPI around to all overloaded CPUs. - * - * The rq->rt.push_flags holds the status of the IPI that is going around. - * A run queue can only send out a single IPI at a time. The possible flags - * for rq->rt.push_flags are: - * - * (None or zero): No IPI is going around for the current rq - * RT_PUSH_IPI_EXECUTING: An IPI for the rq is being passed around - * RT_PUSH_IPI_RESTART: The priority of the running task for the rq - * has changed, and the IPI should restart - * circulating the overloaded CPUs again. - * - * rq->rt.push_cpu contains the CPU that is being sent the IPI. It is updated - * before sending to the next CPU. - * - * Instead of having all CPUs that schedule a lower priority task send - * an IPI to the same "first" CPU in the RT overload mask, they send it - * to the next overloaded CPU after their own CPU. 
This helps distribute - * the work when there's more than one overloaded CPU and multiple CPUs - * scheduling in lower priority tasks. - * - * When a rq schedules a lower priority task than what was currently - * running, the next CPU with overloaded RT tasks is examined first. - * That is, if CPU 1 and 5 are overloaded, and CPU 3 schedules a lower - * priority task, it will send an IPI first to CPU 5, then CPU 5 will - * send to CPU 1 if it is still overloaded. CPU 1 will clear the - * rq->rt.push_flags if RT_PUSH_IPI_RESTART is not set. - * - * The first CPU to notice IPI_RESTART is set, will clear that flag and then - * send an IPI to the next overloaded CPU after the rq->cpu and not the next - * CPU after push_cpu. That is, if CPU 1, 4 and 5 are overloaded when CPU 3 - * schedules a lower priority task, and the IPI_RESTART gets set while the - * handling is being done on CPU 5, it will clear the flag and send it back to - * CPU 4 instead of CPU 1. - * - * Note, the above logic can be disabled by turning off the sched_feature - * RT_PUSH_IPI. Then the rq lock of the overloaded CPU will simply be - * taken by the CPU requesting a pull and the waiting RT task will be pulled - * by that CPU. This may be fine for machines with few CPUs. - */ -static void tell_cpu_to_push(struct rq *rq) +static inline void rto_start_unlock(atomic_t *v) { - int cpu; + atomic_set_release(v, 0); +}
- if (rq->rt.push_flags & RT_PUSH_IPI_EXECUTING) { - raw_spin_lock(&rq->rt.push_lock); - /* Make sure it's still executing */ - if (rq->rt.push_flags & RT_PUSH_IPI_EXECUTING) { - /* - * Tell the IPI to restart the loop as things have - * changed since it started. - */ - rq->rt.push_flags |= RT_PUSH_IPI_RESTART; - raw_spin_unlock(&rq->rt.push_lock); - return; - } - raw_spin_unlock(&rq->rt.push_lock); - } +static void tell_cpu_to_push(struct rq *rq) +{ + int cpu = -1;
- /* When here, there's no IPI going around */ + /* Keep the loop going if the IPI is currently active */ + atomic_inc(&rq->rd->rto_loop_next);
- rq->rt.push_cpu = rq->cpu; - cpu = find_next_push_cpu(rq); - if (cpu >= nr_cpu_ids) + /* Only one CPU can initiate a loop at a time */ + if (!rto_start_trylock(&rq->rd->rto_loop_start)) return;
- rq->rt.push_flags = RT_PUSH_IPI_EXECUTING; + raw_spin_lock(&rq->rd->rto_lock);
- irq_work_queue_on(&rq->rt.push_work, cpu); + /* + * The rto_cpu is updated under the lock, if it has a valid cpu + * then the IPI is still running and will continue due to the + * update to loop_next, and nothing needs to be done here. + * Otherwise it is finishing up and an ipi needs to be sent. + */ + if (rq->rd->rto_cpu < 0) + cpu = rto_next_cpu(rq); + + raw_spin_unlock(&rq->rd->rto_lock); + + rto_start_unlock(&rq->rd->rto_loop_start); + + if (cpu >= 0) + irq_work_queue_on(&rq->rd->rto_push_work, cpu); }
/* Called from hardirq context */ -static void try_to_push_tasks(void *arg) +void rto_push_irq_work_func(struct irq_work *work) { - struct rt_rq *rt_rq = arg; - struct rq *rq, *src_rq; - int this_cpu; + struct rq *rq; int cpu;
- this_cpu = rt_rq->push_cpu; + rq = this_rq();
- /* Paranoid check */ - BUG_ON(this_cpu != smp_processor_id()); - - rq = cpu_rq(this_cpu); - src_rq = rq_of_rt_rq(rt_rq); - -again: + /* + * We do not need to grab the lock to check for has_pushable_tasks. + * When it gets updated, a check is made if a push is possible. + */ if (has_pushable_tasks(rq)) { raw_spin_lock(&rq->lock); - push_rt_task(rq); + push_rt_tasks(rq); raw_spin_unlock(&rq->lock); }
- /* Pass the IPI to the next rt overloaded queue */ - raw_spin_lock(&rt_rq->push_lock); - /* - * If the source queue changed since the IPI went out, - * we need to restart the search from that CPU again. - */ - if (rt_rq->push_flags & RT_PUSH_IPI_RESTART) { - rt_rq->push_flags &= ~RT_PUSH_IPI_RESTART; - rt_rq->push_cpu = src_rq->cpu; - } + raw_spin_lock(&rq->rd->rto_lock);
- cpu = find_next_push_cpu(src_rq); + /* Pass the IPI to the next rt overloaded queue */ + cpu = rto_next_cpu(rq);
- if (cpu >= nr_cpu_ids) - rt_rq->push_flags &= ~RT_PUSH_IPI_EXECUTING; - raw_spin_unlock(&rt_rq->push_lock); + raw_spin_unlock(&rq->rd->rto_lock);
- if (cpu >= nr_cpu_ids) + if (cpu < 0) return;
- /* - * It is possible that a restart caused this CPU to be - * chosen again. Don't bother with an IPI, just see if we - * have more to push. - */ - if (unlikely(cpu == rq->cpu)) - goto again; - /* Try the next RT overloaded CPU */ - irq_work_queue_on(&rt_rq->push_work, cpu); -} - -static void push_irq_work_func(struct irq_work *work) -{ - struct rt_rq *rt_rq = container_of(work, struct rt_rq, push_work); - - try_to_push_tasks(rt_rq); + irq_work_queue_on(&rq->rd->rto_push_work, cpu); } #endif /* HAVE_RT_PUSH_IPI */
--- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -502,7 +502,7 @@ static inline int rt_bandwidth_enabled(v }
/* RT IPI pull logic requires IRQ_WORK */ -#ifdef CONFIG_IRQ_WORK +#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_SMP) # define HAVE_RT_PUSH_IPI #endif
@@ -524,12 +524,6 @@ struct rt_rq { unsigned long rt_nr_total; int overloaded; struct plist_head pushable_tasks; -#ifdef HAVE_RT_PUSH_IPI - int push_flags; - int push_cpu; - struct irq_work push_work; - raw_spinlock_t push_lock; -#endif #endif /* CONFIG_SMP */ int rt_queued;
@@ -638,6 +632,19 @@ struct root_domain { struct dl_bw dl_bw; struct cpudl cpudl;
+#ifdef HAVE_RT_PUSH_IPI + /* + * For IPI pull requests, loop across the rto_mask. + */ + struct irq_work rto_push_work; + raw_spinlock_t rto_lock; + /* These are only updated and read within rto_lock */ + int rto_loop; + int rto_cpu; + /* These atomics are updated outside of a lock */ + atomic_t rto_loop_next; + atomic_t rto_loop_start; +#endif /* * The "RT overload" flag: it gets set if a CPU has more than * one runnable RT task. @@ -655,6 +662,9 @@ extern void init_defrootdomain(void); extern int sched_init_domains(const struct cpumask *cpu_map); extern void rq_attach_root(struct rq *rq, struct root_domain *rd);
+#ifdef HAVE_RT_PUSH_IPI +extern void rto_push_irq_work_func(struct irq_work *work); +#endif #endif /* CONFIG_SMP */
/* --- a/kernel/sched/topology.c +++ b/kernel/sched/topology.c @@ -269,6 +269,12 @@ static int init_rootdomain(struct root_d if (!zalloc_cpumask_var(&rd->rto_mask, GFP_KERNEL)) goto free_dlo_mask;
+#ifdef HAVE_RT_PUSH_IPI + rd->rto_cpu = -1; + raw_spin_lock_init(&rd->rto_lock); + init_irq_work(&rd->rto_push_work, rto_push_irq_work_func); +#endif + init_dl_bw(&rd->dl_bw); if (cpudl_init(&rd->cpudl) != 0) goto free_rto_mask;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: John Crispin john@phrozen.org
commit 8593b18ad348733b5d5ddfa0c79dcabf51dff308 upstream.
Switch the printk() call to the preferred pr_warn() API.
Fixes: 7e5873d3755c ("MIPS: pci: Add MT7620a PCIE driver") Signed-off-by: John Crispin john@phrozen.org Cc: Ralf Baechle ralf@linux-mips.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/15321/ Signed-off-by: James Hogan jhogan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/pci/pci-mt7620.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/mips/pci/pci-mt7620.c +++ b/arch/mips/pci/pci-mt7620.c @@ -121,7 +121,7 @@ static int wait_pciephy_busy(void) else break; if (retry++ > WAITRETRY_MAX) { - printk(KERN_WARN "PCIE-PHY retry failed.\n"); + pr_warn("PCIE-PHY retry failed.\n"); return -1; } }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Hou Tao houtao1@huawei.com
commit b9a41d21dceadf8104812626ef85dc56ee8a60ed upstream.
The following BUG_ON was hit when testing repeat creation and removal of DM devices:
kernel BUG at drivers/md/dm.c:2919! CPU: 7 PID: 750 Comm: systemd-udevd Not tainted 4.1.44 Call Trace: [<ffffffff81649e8b>] dm_get_from_kobject+0x34/0x3a [<ffffffff81650ef1>] dm_attr_show+0x2b/0x5e [<ffffffff817b46d1>] ? mutex_lock+0x26/0x44 [<ffffffff811df7f5>] sysfs_kf_seq_show+0x83/0xcf [<ffffffff811de257>] kernfs_seq_show+0x23/0x25 [<ffffffff81199118>] seq_read+0x16f/0x325 [<ffffffff811de994>] kernfs_fop_read+0x3a/0x13f [<ffffffff8117b625>] __vfs_read+0x26/0x9d [<ffffffff8130eb59>] ? security_file_permission+0x3c/0x44 [<ffffffff8117bdb8>] ? rw_verify_area+0x83/0xd9 [<ffffffff8117be9d>] vfs_read+0x8f/0xcf [<ffffffff81193e34>] ? __fdget_pos+0x12/0x41 [<ffffffff8117c686>] SyS_read+0x4b/0x76 [<ffffffff817b606e>] system_call_fastpath+0x12/0x71
The bug can be easily triggered, if an extra delay (e.g. 10ms) is added between the test of DMF_FREEING & DMF_DELETING and dm_get() in dm_get_from_kobject().
To fix it, we need to ensure the test of DMF_FREEING & DMF_DELETING and dm_get() are done in an atomic way, so _minor_lock is used.
The other callers of dm_get() have also been checked to be OK: some callers invoke dm_get() under _minor_lock, some callers invoke it under _hash_lock, and dm_start_request() invoke it after increasing md->open_count.
Signed-off-by: Hou Tao houtao1@huawei.com Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-)
--- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -2709,11 +2709,15 @@ struct mapped_device *dm_get_from_kobjec
md = container_of(kobj, struct mapped_device, kobj_holder.kobj);
- if (test_bit(DMF_FREEING, &md->flags) || - dm_deleting_md(md)) - return NULL; - + spin_lock(&_minor_lock); + if (test_bit(DMF_FREEING, &md->flags) || dm_deleting_md(md)) { + md = NULL; + goto out; + } dm_get(md); +out: + spin_unlock(&_minor_lock); + return md; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mike Snitzer snitzer@redhat.com
commit 8a74d29d541cd86569139c6f3f44b2d210458071 upstream.
A DM device with a mix of discard capabilities (due to some underlying devices not having discard support) _should_ just return -EOPNOTSUPP for the region of the device that doesn't support discards (even if only by way of the underlying driver formally not supporting discards). BUT, that does ask the underlying driver to handle something that it never advertised support for. In doing so we're exposing users to the potential for an underlying disk driver hanging if/when a discard is issued to a device that is incapable of it and never claimed to support discards.
Fix this by requiring that each DM target in a DM table provide discard support as a prereq for a DM device to advertise support for discards.
This may cause some configurations that were happily supporting discards (even in the face of a mix of discard support) to stop supporting discards -- but the risk of users hitting driver hangs, and forced reboots, outweighs supporting those fringe mixed discard configurations.
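For illustration only, the new policy amounts to an all-or-nothing predicate over the table's targets. A minimal sketch in plain C follows; struct target_caps and its fields are made-up names for the illustration, not the dm-table API:

#include <stdbool.h>

struct target_caps {
	bool discards_supported;        /* target handles discards itself */
	bool all_devs_support_discard;  /* every underlying data device does */
};

/* Discards are advertised only if no target in the table is incapable. */
static bool table_supports_discards(const struct target_caps *t, int n)
{
	for (int i = 0; i < n; i++)
		if (!t[i].discards_supported && !t[i].all_devs_support_discard)
			return false;
	return n > 0;
}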
Signed-off-by: Mike Snitzer snitzer@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/dm-table.c | 33 ++++++++++++++------------------- 1 file changed, 14 insertions(+), 19 deletions(-)
--- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -1758,13 +1758,12 @@ static bool dm_table_supports_write_zero return true; }
- -static int device_discard_capable(struct dm_target *ti, struct dm_dev *dev, - sector_t start, sector_t len, void *data) +static int device_not_discard_capable(struct dm_target *ti, struct dm_dev *dev, + sector_t start, sector_t len, void *data) { struct request_queue *q = bdev_get_queue(dev->bdev);
- return q && blk_queue_discard(q); + return q && !blk_queue_discard(q); }
static bool dm_table_supports_discards(struct dm_table *t) @@ -1772,28 +1771,24 @@ static bool dm_table_supports_discards(s struct dm_target *ti; unsigned i;
- /* - * Unless any target used by the table set discards_supported, - * require at least one underlying device to support discards. - * t->devices includes internal dm devices such as mirror logs - * so we need to use iterate_devices here, which targets - * supporting discard selectively must provide. - */ for (i = 0; i < dm_table_get_num_targets(t); i++) { ti = dm_table_get_target(t, i);
if (!ti->num_discard_bios) - continue; - - if (ti->discards_supported) - return true; + return false;
- if (ti->type->iterate_devices && - ti->type->iterate_devices(ti, device_discard_capable, NULL)) - return true; + /* + * Either the target provides discard support (as implied by setting + * 'discards_supported') or it relies on _all_ data devices having + * discard support. + */ + if (!ti->discards_supported && + (!ti->type->iterate_devices || + ti->type->iterate_devices(ti, device_not_discard_capable, NULL))) + return false; }
- return false; + return true; }
void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: James Hogan jhogan@kernel.org
commit c7fd89a6407ea3a44a2a2fa12d290162c42499c4 upstream.
Building 32-bit MIPS64r2 kernels produces warnings like the following on certain toolchains (such as GNU assembler 2.24.90, but not GNU assembler 2.28.51) since commit 22b8ba765a72 ("MIPS: Fix MIPS64 FP save/restore on 32-bit kernels"), due to the exposure of fpu_save_16odd from fpu_save_double and fpu_restore_16odd from fpu_restore_double:
arch/mips/kernel/r4k_fpu.S:47: Warning: float register should be even, was 1 ... arch/mips/kernel/r4k_fpu.S:59: Warning: float register should be even, was 1 ...
This appears to be because .set mips64r2 does not change the FPU ABI to 64-bit when -march=mips64r2 (or e.g. -march=xlp) is provided on the command line on that toolchain, from the default FPU ABI of 32-bit due to the -mabi=32. This makes access to the odd FPU registers invalid.
Fix by explicitly changing the FPU ABI with .set fp=64 directives in fpu_save_16odd and fpu_restore_16odd, and moving the undefine of fp up in asmmacro.h so fp doesn't turn into $30.
Fixes: 22b8ba765a72 ("MIPS: Fix MIPS64 FP save/restore on 32-bit kernels") Signed-off-by: James Hogan jhogan@kernel.org Cc: Ralf Baechle ralf@linux-mips.org Cc: Paul Burton paul.burton@imgtec.com Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17656/ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/include/asm/asmmacro.h | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/arch/mips/include/asm/asmmacro.h +++ b/arch/mips/include/asm/asmmacro.h @@ -19,6 +19,9 @@ #include <asm/asmmacro-64.h> #endif
+/* preprocessor replaces the fp in ".set fp=64" with $30 otherwise */ +#undef fp + /* * Helper macros for generating raw instruction encodings. */ @@ -105,6 +108,7 @@ .macro fpu_save_16odd thread .set push .set mips64r2 + .set fp=64 SET_HARDFLOAT sdc1 $f1, THREAD_FPR1(\thread) sdc1 $f3, THREAD_FPR3(\thread) @@ -163,6 +167,7 @@ .macro fpu_restore_16odd thread .set push .set mips64r2 + .set fp=64 SET_HARDFLOAT ldc1 $f1, THREAD_FPR1(\thread) ldc1 $f3, THREAD_FPR3(\thread) @@ -234,9 +239,6 @@ .endm
#ifdef TOOLCHAIN_SUPPORTS_MSA -/* preprocessor replaces the fp in ".set fp=64" with $30 otherwise */ -#undef fp - .macro _cfcmsa rd, cs .set push .set mips32r2
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: James Hogan jhogan@kernel.org
commit 22b8ba765a726d90e9830ff6134c32b04f12c10f upstream.
32-bit kernels can be configured to support MIPS64, in which case neither CONFIG_64BIT nor CONFIG_CPU_MIPS32_R* will be set. This causes the CP0_Status.FR checks at the point of floating point register save and restore to be compiled out, which results in odd FP registers not being saved or restored to the task or signal context even when CP0_Status.FR is set.
Fix the ifdefs to use CONFIG_CPU_MIPSR2 and CONFIG_CPU_MIPSR6, which are enabled for the relevant revisions of either MIPS32 or MIPS64, along with some other CPUs such as Octeon (r2), Loongson1 (r2), XLP (r2), Loongson 3A R2.
The suspect code originates from commit 597ce1723e0f ("MIPS: Support for 64-bit FP with O32 binaries") in v3.14, however the code in __enable_fpu() was consistent and refused to set FR=1, falling back to software FPU emulation. This was suboptimal but should be functionally correct.
Commit fcc53b5f6c38 ("MIPS: fpu.h: Allow 64-bit FPU on a 64-bit MIPS R6 CPU") in v4.2 (and stable tagged back to 4.0) later introduced the bug by updating __enable_fpu() to set FR=1 but failing to update the other similar ifdefs to enable FR=1 state handling.
Fixes: fcc53b5f6c38 ("MIPS: fpu.h: Allow 64-bit FPU on a 64-bit MIPS R6 CPU") Signed-off-by: James Hogan jhogan@kernel.org Cc: Ralf Baechle ralf@linux-mips.org Cc: Paul Burton paul.burton@imgtec.com Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16739/ Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/include/asm/asmmacro.h | 8 ++++---- arch/mips/kernel/r4k_fpu.S | 20 ++++++++++---------- 2 files changed, 14 insertions(+), 14 deletions(-)
--- a/arch/mips/include/asm/asmmacro.h +++ b/arch/mips/include/asm/asmmacro.h @@ -130,8 +130,8 @@ .endm
.macro fpu_save_double thread status tmp -#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) || \ - defined(CONFIG_CPU_MIPS32_R6) +#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPSR2) || \ + defined(CONFIG_CPU_MIPSR6) sll \tmp, \status, 5 bgez \tmp, 10f fpu_save_16odd \thread @@ -189,8 +189,8 @@ .endm
.macro fpu_restore_double thread status tmp -#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) || \ - defined(CONFIG_CPU_MIPS32_R6) +#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPSR2) || \ + defined(CONFIG_CPU_MIPSR6) sll \tmp, \status, 5 bgez \tmp, 10f # 16 register mode?
--- a/arch/mips/kernel/r4k_fpu.S +++ b/arch/mips/kernel/r4k_fpu.S @@ -40,8 +40,8 @@ */ LEAF(_save_fp) EXPORT_SYMBOL(_save_fp) -#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) || \ - defined(CONFIG_CPU_MIPS32_R6) +#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPSR2) || \ + defined(CONFIG_CPU_MIPSR6) mfc0 t0, CP0_STATUS #endif fpu_save_double a0 t0 t1 # clobbers t1 @@ -52,8 +52,8 @@ EXPORT_SYMBOL(_save_fp) * Restore a thread's fp context. */ LEAF(_restore_fp) -#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) || \ - defined(CONFIG_CPU_MIPS32_R6) +#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPSR2) || \ + defined(CONFIG_CPU_MIPSR6) mfc0 t0, CP0_STATUS #endif fpu_restore_double a0 t0 t1 # clobbers t1 @@ -246,11 +246,11 @@ LEAF(_save_fp_context) cfc1 t1, fcr31 .set pop
-#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) || \ - defined(CONFIG_CPU_MIPS32_R6) +#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPSR2) || \ + defined(CONFIG_CPU_MIPSR6) .set push SET_HARDFLOAT -#ifdef CONFIG_CPU_MIPS32_R2 +#ifdef CONFIG_CPU_MIPSR2 .set mips32r2 .set fp=64 mfc0 t0, CP0_STATUS @@ -314,11 +314,11 @@ LEAF(_save_fp_context) LEAF(_restore_fp_context) EX lw t1, 0(a1)
-#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) || \ - defined(CONFIG_CPU_MIPS32_R6) +#if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPSR2) || \ + defined(CONFIG_CPU_MIPSR6) .set push SET_HARDFLOAT -#ifdef CONFIG_CPU_MIPS32_R2 +#ifdef CONFIG_CPU_MIPSR2 .set mips32r2 .set fp=64 mfc0 t0, CP0_STATUS
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Masahiro Yamada yamada.masahiro@socionext.com
commit 3cad14d56adbf7d621fc5a35db42f3acc0a2d6e8 upstream.
arch/mips/boot/dts/brcm/bcm96358nb4ser.dts does not exist, so we cannot build bcm96358nb4ser.dtb .
Signed-off-by: Masahiro Yamada yamada.masahiro@socionext.com Fixes: 695835511f96 ("MIPS: BMIPS: rename bcm96358nb4ser to bcm6358-neufbox4-sercom") Acked-by: James Hogan jhogan@kernel.org Signed-off-by: Rob Herring robh@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/boot/dts/brcm/Makefile | 1 - 1 file changed, 1 deletion(-)
--- a/arch/mips/boot/dts/brcm/Makefile +++ b/arch/mips/boot/dts/brcm/Makefile @@ -23,7 +23,6 @@ dtb-$(CONFIG_DT_NONE) += \ bcm63268-comtrend-vr-3032u.dtb \ bcm93384wvg.dtb \ bcm93384wvg_viper.dtb \ - bcm96358nb4ser.dtb \ bcm96368mvwg.dtb \ bcm9ejtagprb.dtb \ bcm97125cbmb.dtb \
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maciej W. Rozycki macro@mips.com
commit 547da673173de51f73887377eb275304775064ad upstream.
Fix a commit 7aeb753b5353 ("MIPS: Implement task_user_regset_view.") regression, then activated by commit 6a9c001b7ec3 ("MIPS: Switch ELF core dumper to use regsets.)", that caused n32 processes to dump o32 core files by failing to set the EF_MIPS_ABI2 flag in the ELF core file header's `e_flags' member:
$ file tls-core tls-core: ELF 32-bit MSB executable, MIPS, N32 MIPS64 rel2 version 1 (SYSV), [...] $ ./tls-core Aborted (core dumped) $ file core core: ELF 32-bit MSB core file MIPS, MIPS-I version 1 (SYSV), SVR4-style $
Previously the flag was set as the result of a:
  #define ELF_CORE_EFLAGS EF_MIPS_ABI2
statement placed in arch/mips/kernel/binfmt_elfn32.c, however in the regset case, i.e. when CORE_DUMP_USE_REGSET is set, ELF_CORE_EFLAGS is no longer used by `fill_note_info' in fs/binfmt_elf.c, and instead the `->e_flags' member of the regset view chosen is. We have the views defined in arch/mips/kernel/ptrace.c, however only an o32 and an n64 one, and the latter is used for n32 as well. Consequently an o32 core file is incorrectly dumped from n32 processes (the ELF32 vs ELF64 class is chosen elsewhere, and the 32-bit one is correctly selected for n32).
Correct the issue then by defining an n32 regset view and using it as appropriate. Issue discovered in GDB testing.
Fixes: 7aeb753b5353 ("MIPS: Implement task_user_regset_view.") Signed-off-by: Maciej W. Rozycki macro@mips.com Cc: Ralf Baechle ralf@linux-mips.org Cc: Djordje Todorovic djordje.todorovic@rt-rk.com Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17617/ Signed-off-by: James Hogan jhogan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/kernel/ptrace.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+)
--- a/arch/mips/kernel/ptrace.c +++ b/arch/mips/kernel/ptrace.c @@ -618,6 +618,19 @@ static const struct user_regset_view use .n = ARRAY_SIZE(mips64_regsets), };
+#ifdef CONFIG_MIPS32_N32 + +static const struct user_regset_view user_mipsn32_view = { + .name = "mipsn32", + .e_flags = EF_MIPS_ABI2, + .e_machine = ELF_ARCH, + .ei_osabi = ELF_OSABI, + .regsets = mips64_regsets, + .n = ARRAY_SIZE(mips64_regsets), +}; + +#endif /* CONFIG_MIPS32_N32 */ + #endif /* CONFIG_64BIT */
const struct user_regset_view *task_user_regset_view(struct task_struct *task) @@ -629,6 +642,10 @@ const struct user_regset_view *task_user if (test_tsk_thread_flag(task, TIF_32BIT_REGS)) return &user_mips_view; #endif +#ifdef CONFIG_MIPS32_N32 + if (test_tsk_thread_flag(task, TIF_32BIT_ADDR)) + return &user_mipsn32_view; +#endif return &user_mips64_view; #endif }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Aleksandar Markovic aleksandar.markovic@mips.com
commit 409fcace9963c1e8d2cb0f7ac62e8b34d47ef979 upstream.
Fix the final phase of <CLASS|MADDF|MSUBF|MAX|MIN|MAXA|MINA>.<D|S> emulation. Provide proper generation of the SIGFPE signal and proper updating of the debugfs FP exception stats in cases where any exception flags were set in the preceding phases of emulation.
CLASS.<D|S> instruction may generate "Unimplemented Operation" FP exception. <MADDF|MSUBF>.<D|S> instructions may generate "Inexact", "Unimplemented Operation", "Invalid Operation", "Overflow", and "Underflow" FP exceptions. <MAX|MIN|MAXA|MINA>.<D|S> instructions can generate "Unimplemented Operation" and "Invalid Operation" FP exceptions.
The proper final processing of the cases when any FP exception flag is set is achieved by replacing the "break" statement with a "goto copcsr" statement. With this solution, the patch brings the final phase of emulation of the above instructions into line with the previously implemented emulation of other related FPU instructions (ADD, SUB, etc.).
Fixes: 38db37ba069f ("MIPS: math-emu: Add support for the MIPS R6 CLASS FPU instruction") Fixes: e24c3bec3e8e ("MIPS: math-emu: Add support for the MIPS R6 MADDF FPU instruction") Fixes: 83d43305a1df ("MIPS: math-emu: Add support for the MIPS R6 MSUBF FPU instruction") Fixes: a79f5f9ba508 ("MIPS: math-emu: Add support for the MIPS R6 MAX{, A} FPU instruction") Fixes: 4e9561b20e2f ("MIPS: math-emu: Add support for the MIPS R6 MIN{, A} FPU instruction") Signed-off-by: Aleksandar Markovic aleksandar.markovic@mips.com Cc: Ralf Baechle ralf@linux-mips.org Cc: Douglas Leung douglas.leung@mips.com Cc: Goran Ferenc goran.ferenc@mips.com Cc: "Maciej W. Rozycki" macro@imgtec.com Cc: Miodrag Dinic miodrag.dinic@mips.com Cc: Paul Burton paul.burton@mips.com Cc: Petar Jovanovic petar.jovanovic@mips.com Cc: Raghu Gandham raghu.gandham@mips.com Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17581/ Signed-off-by: James Hogan jhogan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/mips/math-emu/cp1emu.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-)
--- a/arch/mips/math-emu/cp1emu.c +++ b/arch/mips/math-emu/cp1emu.c @@ -1795,7 +1795,7 @@ static int fpu_emu(struct pt_regs *xcp, SPFROMREG(fs, MIPSInst_FS(ir)); SPFROMREG(fd, MIPSInst_FD(ir)); rv.s = ieee754sp_maddf(fd, fs, ft); - break; + goto copcsr; }
case fmsubf_op: { @@ -1809,7 +1809,7 @@ static int fpu_emu(struct pt_regs *xcp, SPFROMREG(fs, MIPSInst_FS(ir)); SPFROMREG(fd, MIPSInst_FD(ir)); rv.s = ieee754sp_msubf(fd, fs, ft); - break; + goto copcsr; }
case frint_op: { @@ -1834,7 +1834,7 @@ static int fpu_emu(struct pt_regs *xcp, SPFROMREG(fs, MIPSInst_FS(ir)); rv.w = ieee754sp_2008class(fs); rfmt = w_fmt; - break; + goto copcsr; }
case fmin_op: { @@ -1847,7 +1847,7 @@ static int fpu_emu(struct pt_regs *xcp, SPFROMREG(ft, MIPSInst_FT(ir)); SPFROMREG(fs, MIPSInst_FS(ir)); rv.s = ieee754sp_fmin(fs, ft); - break; + goto copcsr; }
case fmina_op: { @@ -1860,7 +1860,7 @@ static int fpu_emu(struct pt_regs *xcp, SPFROMREG(ft, MIPSInst_FT(ir)); SPFROMREG(fs, MIPSInst_FS(ir)); rv.s = ieee754sp_fmina(fs, ft); - break; + goto copcsr; }
case fmax_op: { @@ -1873,7 +1873,7 @@ static int fpu_emu(struct pt_regs *xcp, SPFROMREG(ft, MIPSInst_FT(ir)); SPFROMREG(fs, MIPSInst_FS(ir)); rv.s = ieee754sp_fmax(fs, ft); - break; + goto copcsr; }
case fmaxa_op: { @@ -1886,7 +1886,7 @@ static int fpu_emu(struct pt_regs *xcp, SPFROMREG(ft, MIPSInst_FT(ir)); SPFROMREG(fs, MIPSInst_FS(ir)); rv.s = ieee754sp_fmaxa(fs, ft); - break; + goto copcsr; }
case fabs_op: @@ -2165,7 +2165,7 @@ copcsr: DPFROMREG(fs, MIPSInst_FS(ir)); DPFROMREG(fd, MIPSInst_FD(ir)); rv.d = ieee754dp_maddf(fd, fs, ft); - break; + goto copcsr; }
case fmsubf_op: { @@ -2179,7 +2179,7 @@ copcsr: DPFROMREG(fs, MIPSInst_FS(ir)); DPFROMREG(fd, MIPSInst_FD(ir)); rv.d = ieee754dp_msubf(fd, fs, ft); - break; + goto copcsr; }
case frint_op: { @@ -2204,7 +2204,7 @@ copcsr: DPFROMREG(fs, MIPSInst_FS(ir)); rv.l = ieee754dp_2008class(fs); rfmt = l_fmt; - break; + goto copcsr; }
case fmin_op: { @@ -2217,7 +2217,7 @@ copcsr: DPFROMREG(ft, MIPSInst_FT(ir)); DPFROMREG(fs, MIPSInst_FS(ir)); rv.d = ieee754dp_fmin(fs, ft); - break; + goto copcsr; }
case fmina_op: { @@ -2230,7 +2230,7 @@ copcsr: DPFROMREG(ft, MIPSInst_FT(ir)); DPFROMREG(fs, MIPSInst_FS(ir)); rv.d = ieee754dp_fmina(fs, ft); - break; + goto copcsr; }
case fmax_op: { @@ -2243,7 +2243,7 @@ copcsr: DPFROMREG(ft, MIPSInst_FT(ir)); DPFROMREG(fs, MIPSInst_FS(ir)); rv.d = ieee754dp_fmax(fs, ft); - break; + goto copcsr; }
case fmaxa_op: { @@ -2256,7 +2256,7 @@ copcsr: DPFROMREG(ft, MIPSInst_FT(ir)); DPFROMREG(fs, MIPSInst_FS(ir)); rv.d = ieee754dp_fmaxa(fs, ft); - break; + goto copcsr; }
case fabs_op:
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanislaw Gruszka sgruszka@redhat.com
commit bfa62a52cad93686bb8d8171ea5288813248a7c6 upstream.
The ENOENT USB error means "specified interface or endpoint does not exist or is not enabled". Mark the device as not present when we encounter this error, similar to what we already do for the ENODEV error.
Otherwise we can end up in an infinite loop in rt2x00usb_work_rxdone(), because we keep removing RX entries from the queue and putting them back again indefinitely.
A similar situation can arise when URB submission keeps failing with some other error, so we should consider limiting the number of entries processed by the rxdone work. But for now, since the patch fixes a reproducible soft-lockup issue on single-processor systems and matches the meaning of the ENOENT error, let's apply this fix.
The patch adds the additional ENOENT check not only in the RX kick routine, but also in the other places where we check for the ENODEV error.
Reported-by: Richard Genoud richard.genoud@gmail.com Debugged-by: Richard Genoud richard.genoud@gmail.com Signed-off-by: Stanislaw Gruszka sgruszka@redhat.com Tested-by: Richard Genoud richard.genoud@gmail.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/ralink/rt2x00/rt2x00usb.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c +++ b/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c @@ -57,7 +57,7 @@ int rt2x00usb_vendor_request(struct rt2x if (status >= 0) return 0;
- if (status == -ENODEV) { + if (status == -ENODEV || status == -ENOENT) { /* Device has disappeared. */ clear_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags); break; @@ -321,7 +321,7 @@ static bool rt2x00usb_kick_tx_entry(stru
status = usb_submit_urb(entry_priv->urb, GFP_ATOMIC); if (status) { - if (status == -ENODEV) + if (status == -ENODEV || status == -ENOENT) clear_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags); set_bit(ENTRY_DATA_IO_FAILED, &entry->flags); rt2x00lib_dmadone(entry); @@ -410,7 +410,7 @@ static bool rt2x00usb_kick_rx_entry(stru
status = usb_submit_urb(entry_priv->urb, GFP_ATOMIC); if (status) { - if (status == -ENODEV) + if (status == -ENODEV || status == -ENOENT) clear_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags); set_bit(ENTRY_DATA_IO_FAILED, &entry->flags); rt2x00lib_dmadone(entry);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vitaly Wool vitalywool@gmail.com
commit 5d03a6613957785e94af7a4a6212ad4af66aa5c2 upstream.
There is a race in the current z3fold implementation between do_compact() called in a work queue context and the page release procedure when page's kref goes to 0.
do_compact() may be waiting for page lock, which is released by release_z3fold_page_locked right before putting the page onto the "stale" list, and then the page may be freed as do_compact() modifies its contents.
The mechanism currently implemented to handle that (checking the PAGE_STALE flag) is not reliable enough. Instead, we'll use page's kref counter to guarantee that the page is not released if its compaction is scheduled. It then becomes compaction function's responsibility to decrease the counter and quit immediately if the page was actually freed.
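The refcount-pinning pattern described above (take a reference before scheduling asynchronous work, and have the worker drop it and bail out if it was the last one) can be sketched roughly like this; illustrative userspace C with C11 atomics, not the z3fold/kref code, and all names are made up:

#include <stdatomic.h>
#include <stdlib.h>

struct obj {
	atomic_int refs;
	/* ... payload ... */
};

/* Caller side: pin the object before queuing asynchronous compaction. */
static void schedule_compact(struct obj *o)
{
	atomic_fetch_add(&o->refs, 1);
	/* ... queue a work item that will run compact_worker(o) ... */
}

/* Worker side: drop the pin first; if it was the last reference, the
 * object was freed in the meantime, so release it and quit immediately. */
static void compact_worker(struct obj *o)
{
	if (atomic_fetch_sub(&o->refs, 1) == 1) {
		free(o);
		return;
	}
	/* ... safe to compact: another reference still keeps o alive ... */
}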
Link: http://lkml.kernel.org/r/20171117092032.00ea56f42affbed19f4fcc6c@gmail.com Signed-off-by: Vitaly Wool vitaly.wool@sonymobile.com Cc: Oleksiy.Avramchenko@sony.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- mm/z3fold.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
--- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -404,8 +404,7 @@ static void do_compact_page(struct z3fol WARN_ON(z3fold_page_trylock(zhdr)); else z3fold_page_lock(zhdr); - if (test_bit(PAGE_STALE, &page->private) || - !test_and_clear_bit(NEEDS_COMPACTING, &page->private)) { + if (WARN_ON(!test_and_clear_bit(NEEDS_COMPACTING, &page->private))) { z3fold_page_unlock(zhdr); return; } @@ -413,6 +412,11 @@ static void do_compact_page(struct z3fol list_del_init(&zhdr->buddy); spin_unlock(&pool->lock);
+ if (kref_put(&zhdr->refcount, release_z3fold_page_locked)) { + atomic64_dec(&pool->pages_nr); + return; + } + z3fold_compact_page(zhdr); unbuddied = get_cpu_ptr(pool->unbuddied); fchunks = num_free_chunks(zhdr); @@ -753,9 +757,11 @@ static void z3fold_free(struct z3fold_po list_del_init(&zhdr->buddy); spin_unlock(&pool->lock); zhdr->cpu = -1; + kref_get(&zhdr->refcount); do_compact_page(zhdr, true); return; } + kref_get(&zhdr->refcount); queue_work_on(zhdr->cpu, pool->compact_wq, &zhdr->work); z3fold_page_unlock(zhdr); }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: NeilBrown neilb@suse.com
commit ecc0c469f27765ed1e2b967be0aa17cee1a60b76 upstream.
Currently, if the autofs kernel module gets an error when writing to the pipe which links to the daemon, it marks the whole mountpoint as catatonic, and it will stop working.
It is possible that the error is transient. This can happen if the daemon is slow and more than 16 requests queue up. If a subsequent process tries to queue a request, and is then signalled, the write to the pipe will return -ERESTARTSYS and autofs will take that as total failure.
So change the code to assess -ERESTARTSYS and -ENOMEM as transient failures which only abort the current request, not the whole mountpoint.
It isn't a crash or a data corruption, but having autofs mountpoints suddenly stop working is rather inconvenient.
Ian said:
: And given the problems with a half dozen (or so) user space applications : consuming large amounts of CPU under heavy mount and umount activity this : could happen more easily than we expect.
Link: http://lkml.kernel.org/r/87y3norvgp.fsf@notabene.neil.brown.name Signed-off-by: NeilBrown neilb@suse.com Acked-by: Ian Kent raven@themaw.net Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/autofs4/waitq.c | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-)
--- a/fs/autofs4/waitq.c +++ b/fs/autofs4/waitq.c @@ -81,7 +81,8 @@ static int autofs4_write(struct autofs_s spin_unlock_irqrestore(¤t->sighand->siglock, flags); }
- return (bytes > 0); + /* if 'wr' returned 0 (impossible) we assume -EIO (safe) */ + return bytes == 0 ? 0 : wr < 0 ? wr : -EIO; }
static void autofs4_notify_daemon(struct autofs_sb_info *sbi, @@ -95,6 +96,7 @@ static void autofs4_notify_daemon(struct } pkt; struct file *pipe = NULL; size_t pktsz; + int ret;
pr_debug("wait id = 0x%08lx, name = %.*s, type=%d\n", (unsigned long) wq->wait_queue_token, @@ -169,7 +171,18 @@ static void autofs4_notify_daemon(struct mutex_unlock(&sbi->wq_mutex);
if (autofs4_write(sbi, pipe, &pkt, pktsz)) + switch (ret = autofs4_write(sbi, pipe, &pkt, pktsz)) { + case 0: + break; + case -ENOMEM: + case -ERESTARTSYS: + /* Just fail this one */ + autofs4_wait_release(sbi, wq->wait_queue_token, ret); + break; + default: autofs4_catatonic_mode(sbi); + break; + } fput(pipe); }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andreas Rohner andreas.rohner@gmx.net
commit 31ccb1f7ba3cfe29631587d451cf5bb8ab593550 upstream.
There is a race condition between nilfs_dirty_inode() and nilfs_set_file_dirty().
When a file is opened, nilfs_dirty_inode() is called to update the access timestamp in the inode. It calls __nilfs_mark_inode_dirty() in a separate transaction. __nilfs_mark_inode_dirty() caches the ifile buffer_head in the i_bh field of the inode info structure and marks it as dirty.
After some data was written to the file in another transaction, the function nilfs_set_file_dirty() is called, which adds the inode to the ns_dirty_files list.
Then the segment construction calls nilfs_segctor_collect_dirty_files(), which goes through the ns_dirty_files list and checks the i_bh field. If there is a cached buffer_head in i_bh it is not marked as dirty again.
Since nilfs_dirty_inode() and nilfs_set_file_dirty() use separate transactions, it is possible that a segment construction that writes out the ifile occurs in-between the two. If this happens the inode is not on the ns_dirty_files list, but its ifile block is still marked as dirty and written out.
In the next segment construction, the data for the file is written out and nilfs_bmap_propagate() updates the b-tree. Eventually the bmap root is written into the i_bh block, which is not dirty, because it was written out in another segment construction.
As a result the bmap update can be lost, which leads to file system corruption. Either the virtual block address points to an unallocated DAT block, or the DAT entry will be reused for something different.
The error can remain undetected for a long time. A typical error message would be one of the "bad btree" errors or a warning that a DAT entry could not be found.
This bug can be reproduced reliably by a simple benchmark that creates and overwrites millions of 4k files.
Link: http://lkml.kernel.org/r/1509367935-3086-2-git-send-email-konishi.ryusuke@la... Signed-off-by: Andreas Rohner andreas.rohner@gmx.net Signed-off-by: Ryusuke Konishi konishi.ryusuke@lab.ntt.co.jp Tested-by: Andreas Rohner andreas.rohner@gmx.net Tested-by: Ryusuke Konishi konishi.ryusuke@lab.ntt.co.jp Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nilfs2/segment.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/fs/nilfs2/segment.c +++ b/fs/nilfs2/segment.c @@ -1958,8 +1958,6 @@ static int nilfs_segctor_collect_dirty_f err, ii->vfs_inode.i_ino); return err; } - mark_buffer_dirty(ibh); - nilfs_mdt_mark_dirty(ifile); spin_lock(&nilfs->ns_inode_lock); if (likely(!ii->i_bh)) ii->i_bh = ibh; @@ -1968,6 +1966,10 @@ static int nilfs_segctor_collect_dirty_f goto retry; }
+ // Always redirty the buffer to avoid race condition + mark_buffer_dirty(ii->i_bh); + nilfs_mdt_mark_dirty(ifile); + clear_bit(NILFS_I_QUEUED, &ii->i_state); set_bit(NILFS_I_BUSY, &ii->i_state); list_move_tail(&ii->i_dirty, &sci->sc_dirty_files);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers ebiggers@google.com
commit a0b3bc855374c50b5ea85273553485af48caf2f7 upstream.
fscrypt_initialize(), which allocates the global bounce page pool when an encrypted file is first accessed, uses "double-checked locking" to try to avoid locking fscrypt_init_mutex. However, it doesn't use any memory barriers, so it's theoretically possible for a thread to observe a bounce page pool which has not been fully initialized. This is a classic bug with "double-checked locking".
While "only a theoretical issue" in the latest kernel, in pre-4.8 kernels the pointer that was checked was not even the last to be initialized, so it was easily possible for a crash (NULL pointer dereference) to happen. This was changed only incidentally by the large refactor to use fs/crypto/.
Solve both problems in a trivial way that can easily be backported: just always take the mutex. It's theoretically less efficient, but it shouldn't be noticeable in practice as the mutex is only acquired very briefly once per encrypted file.
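For reference, a generic userspace illustration of the hazard (this is not the fscrypt code): the unlocked fast path of "double-checked locking" has no memory barriers, so a reader may see the pointer before the object it points to is fully initialized, while always taking the mutex sidesteps the problem entirely.

#include <pthread.h>
#include <stdlib.h>

struct pool { int nr_pages; /* ... */ };

static struct pool *global_pool;
static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Broken: no barriers on the unlocked check, so another thread may observe
 * a non-NULL pointer before the pool's fields are visibly initialized. */
struct pool *get_pool_racy(void)
{
	if (!global_pool) {
		pthread_mutex_lock(&init_mutex);
		if (!global_pool)
			global_pool = calloc(1, sizeof(*global_pool));
		pthread_mutex_unlock(&init_mutex);
	}
	return global_pool;
}

/* The simple fix mirrored by this patch: always take the mutex. */
struct pool *get_pool_safe(void)
{
	struct pool *p;

	pthread_mutex_lock(&init_mutex);
	if (!global_pool)
		global_pool = calloc(1, sizeof(*global_pool));
	p = global_pool;
	pthread_mutex_unlock(&init_mutex);
	return p;
}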
Later I'd like to make this use a helper macro like DO_ONCE(). However, DO_ONCE() runs in atomic context, so we'd need to add a new macro that allows blocking.
Signed-off-by: Eric Biggers ebiggers@google.com Signed-off-by: Theodore Ts'o tytso@mit.edu Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/crypto/crypto.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-)
--- a/fs/crypto/crypto.c +++ b/fs/crypto/crypto.c @@ -410,11 +410,8 @@ int fscrypt_initialize(unsigned int cop_ { int i, res = -ENOMEM;
- /* - * No need to allocate a bounce page pool if there already is one or - * this FS won't use it. - */ - if (cop_flags & FS_CFLG_OWN_PAGES || fscrypt_bounce_page_pool) + /* No need to allocate a bounce page pool if this FS won't use it. */ + if (cop_flags & FS_CFLG_OWN_PAGES) return 0;
mutex_lock(&fscrypt_init_mutex);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Carpenter dan.carpenter@oracle.com
commit db86be3a12d0b6e5c5b51c2ab2a48f06329cb590 upstream.
We're freeing the list iterator so we should be using the _safe() version of hlist_for_each_entry().
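The difference between the plain and the _safe() iterators is the classic save-the-successor idiom; a minimal userspace analogue (not the kernel hlist API):

#include <stdlib.h>

struct node {
	struct node *next;
	/* ... payload ... */
};

/* Free every node. The successor must be saved before the current node is
 * freed, otherwise the loop would read cur->next from freed memory -- which
 * is exactly what the extra cursor of the *_safe() iterators avoids. */
static void release_all(struct node *head)
{
	struct node *cur = head;

	while (cur) {
		struct node *next = cur->next;	/* save before freeing */
		free(cur);
		cur = next;
	}
}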
Fixes: 88b4a07e6610 ("[PATCH] eCryptfs: Public key transport mechanism") Signed-off-by: Dan Carpenter dan.carpenter@oracle.com Signed-off-by: Tyler Hicks tyhicks@canonical.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ecryptfs/messaging.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
--- a/fs/ecryptfs/messaging.c +++ b/fs/ecryptfs/messaging.c @@ -442,15 +442,16 @@ void ecryptfs_release_messaging(void) } if (ecryptfs_daemon_hash) { struct ecryptfs_daemon *daemon; + struct hlist_node *n; int i;
mutex_lock(&ecryptfs_daemon_hash_mux); for (i = 0; i < (1 << ecryptfs_hash_bits); i++) { int rc;
- hlist_for_each_entry(daemon, - &ecryptfs_daemon_hash[i], - euid_chain) { + hlist_for_each_entry_safe(daemon, n, + &ecryptfs_daemon_hash[i], + euid_chain) { rc = ecryptfs_exorcise_daemon(daemon); if (rc) printk(KERN_ERR "%s: Error whilst "
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Biggers ebiggers@google.com
commit b11270853fa3654f08d4a6a03b23ddb220512d8d upstream.
The WARN_ON(!key->len) in set_secret() in net/ceph/crypto.c is hit if a user tries to add a key of type "ceph" with an invalid payload as follows (assuming CONFIG_CEPH_LIB=y):
echo -e -n '\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' \ | keyctl padd ceph desc @s
This can be hit by fuzzers. As this is merely bad input and not a kernel bug, replace the WARN_ON() with return -EINVAL.
Fixes: 7af3ea189a9a ("libceph: stop allocating a new cipher on every crypto request") Signed-off-by: Eric Biggers ebiggers@google.com Reviewed-by: Ilya Dryomov idryomov@gmail.com Signed-off-by: Ilya Dryomov idryomov@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- net/ceph/crypto.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/net/ceph/crypto.c +++ b/net/ceph/crypto.c @@ -37,7 +37,9 @@ static int set_secret(struct ceph_crypto return -ENOTSUPP; }
- WARN_ON(!key->len); + if (!key->len) + return -EINVAL; + key->key = kmemdup(buf, key->len, GFP_NOIO); if (!key->key) { ret = -ENOMEM;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Coly Li colyli@suse.de
commit 91af8300d9c1d7c6b6a2fd754109e08d4798b8d8 upstream.
In bcache code, sysfs entries are created before all resources get allocated, e.g. allocation thread of a cache set.
There is a possibility of a NULL pointer dereference if a resource is accessed before it is initialized. Indeed, Jorg Bornschein caught one on the cache set allocation thread and got a kernel oops.
The reason for this bug is that when bch_bucket_alloc() is called during cache set registration and attaching, ca->alloc_thread is not properly allocated and initialized yet, so calling wake_up_process() on ca->alloc_thread triggers a NULL pointer dereference. A simple and fast fix is to check, before waking up ca->alloc_thread, whether it is allocated, and only wake it up when it is not NULL.
Signed-off-by: Coly Li colyli@suse.de Reported-by: Jorg Bornschein jb@capsec.org Cc: Kent Overstreet kent.overstreet@gmail.com Reviewed-by: Michael Lyle mlyle@lyle.org Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/bcache/alloc.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/md/bcache/alloc.c +++ b/drivers/md/bcache/alloc.c @@ -407,7 +407,8 @@ long bch_bucket_alloc(struct cache *ca,
finish_wait(&ca->set->bucket_wait, &w); out: - wake_up_process(ca->alloc_thread); + if (ca->alloc_thread) + wake_up_process(ca->alloc_thread);
trace_bcache_alloc(ca, reserve);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Greg Edwards gedwards@ddn.com
commit 67f2519fe2903c4041c0e94394d14d372fe51399 upstream.
guard_bio_eod() needs to look at the partition capacity, not just the capacity of the whole device, when determining if truncation is necessary.
[ 60.268688] attempt to access beyond end of device [ 60.268690] unknown-block(9,1): rw=0, want=67103509, limit=67103506 [ 60.268693] buffer_io_error: 2 callbacks suppressed [ 60.268696] Buffer I/O error on dev md1p7, logical block 4524305, async page read
Fixes: 74d46992e0d9 ("block: replace bi_bdev with a gendisk pointer and partitions index") Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Greg Edwards gedwards@ddn.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/buffer.c | 10 +++++++++- include/linux/genhd.h | 1 + 2 files changed, 10 insertions(+), 1 deletion(-)
--- a/fs/buffer.c +++ b/fs/buffer.c @@ -3055,8 +3055,16 @@ void guard_bio_eod(int op, struct bio *b sector_t maxsector; struct bio_vec *bvec = &bio->bi_io_vec[bio->bi_vcnt - 1]; unsigned truncated_bytes; + struct hd_struct *part; + + rcu_read_lock(); + part = __disk_get_part(bio->bi_disk, bio->bi_partno); + if (part) + maxsector = part_nr_sects_read(part); + else + maxsector = get_capacity(bio->bi_disk); + rcu_read_unlock();
- maxsector = get_capacity(bio->bi_disk); if (!maxsector) return;
--- a/include/linux/genhd.h +++ b/include/linux/genhd.h @@ -243,6 +243,7 @@ static inline dev_t part_devt(struct hd_ return part_to_dev(part)->devt; }
+extern struct hd_struct *__disk_get_part(struct gendisk *disk, int partno); extern struct hd_struct *disk_get_part(struct gendisk *disk, int partno);
static inline void disk_put_part(struct hd_struct *part)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miklos Szeredi mszeredi@redhat.com
commit f37650f1c7c71cf5180b43229d13b421d81e7170 upstream.
If fsnotify_prepare_user_wait() fails, we leave the event on the notification list. Which will result in a warning in fsnotify_destroy_event() and later use-after-free.
Instead of adding a new helper to remove the event from the list in this case, I opted to move the prepare/finish up into fanotify_handle_event().
This will allow these to be moved further out into the generic code later, and perhaps let us move to non-sleeping RCU.
Reviewed-by: Amir Goldstein amir73il@gmail.com Signed-off-by: Miklos Szeredi mszeredi@redhat.com Fixes: 05f0e38724e8 ("fanotify: Release SRCU lock when waiting for userspace response") Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/notify/fanotify/fanotify.c | 33 ++++++++++++++++++++------------- 1 file changed, 20 insertions(+), 13 deletions(-)
--- a/fs/notify/fanotify/fanotify.c +++ b/fs/notify/fanotify/fanotify.c @@ -65,19 +65,8 @@ static int fanotify_get_response(struct
pr_debug("%s: group=%p event=%p\n", __func__, group, event);
- /* - * fsnotify_prepare_user_wait() fails if we race with mark deletion. - * Just let the operation pass in that case. - */ - if (!fsnotify_prepare_user_wait(iter_info)) { - event->response = FAN_ALLOW; - goto out; - } - wait_event(group->fanotify_data.access_waitq, event->response);
- fsnotify_finish_user_wait(iter_info); -out: /* userspace responded, convert to something usable */ switch (event->response) { case FAN_ALLOW: @@ -212,9 +201,21 @@ static int fanotify_handle_event(struct pr_debug("%s: group=%p inode=%p mask=%x\n", __func__, group, inode, mask);
+#ifdef CONFIG_FANOTIFY_ACCESS_PERMISSIONS + if (mask & FAN_ALL_PERM_EVENTS) { + /* + * fsnotify_prepare_user_wait() fails if we race with mark + * deletion. Just let the operation pass in that case. + */ + if (!fsnotify_prepare_user_wait(iter_info)) + return 0; + } +#endif + event = fanotify_alloc_event(inode, mask, data); + ret = -ENOMEM; if (unlikely(!event)) - return -ENOMEM; + goto finish;
fsn_event = &event->fse; ret = fsnotify_add_event(group, fsn_event, fanotify_merge); @@ -224,7 +225,8 @@ static int fanotify_handle_event(struct /* Our event wasn't used in the end. Free it. */ fsnotify_destroy_event(group, fsn_event);
- return 0; + ret = 0; + goto finish; }
#ifdef CONFIG_FANOTIFY_ACCESS_PERMISSIONS @@ -233,6 +235,11 @@ static int fanotify_handle_event(struct iter_info); fsnotify_destroy_event(group, fsn_event); } +finish: + if (mask & FAN_ALL_PERM_EVENTS) + fsnotify_finish_user_wait(iter_info); +#else +finish: #endif return ret; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann arnd@arndb.de
commit 34be4dbf87fc3e474a842305394534216d428f5d upstream.
isofs uses a 'char' variable to load the number of years since 1900 for an inode timestamp. On architectures that use a signed char type by default, this results in an invalid date for anything beyond 2027.
This changes the function argument to a 'u8' array, which is defined the same way on all architectures, and unambiguously lets us use years until 2155.
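A tiny standalone demonstration of the problem, assuming a target where plain char is signed (as on x86):

#include <stdio.h>

int main(void)
{
	int raw = 128;           /* on-disk "years since 1900" byte for 2028 */
	char          c = raw;   /* what the old code effectively did */
	unsigned char u = raw;   /* what the u8-based code does */

	/* Where plain char is signed, c holds -128, so the computed year
	 * jumps backwards instead of forwards. */
	printf("char: %d  u8: %d\n", 1900 + c, 1900 + u);
	return 0;
}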
This should be backported to all kernels that might still be in use by that date.
Signed-off-by: Arnd Bergmann arnd@arndb.de Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/isofs/isofs.h | 2 +- fs/isofs/rock.h | 2 +- fs/isofs/util.c | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-)
--- a/fs/isofs/isofs.h +++ b/fs/isofs/isofs.h @@ -107,7 +107,7 @@ static inline unsigned int isonum_733(ch /* Ignore bigendian datum due to broken mastering programs */ return get_unaligned_le32(p); } -extern int iso_date(char *, int); +extern int iso_date(u8 *, int);
struct inode; /* To make gcc happy */
--- a/fs/isofs/rock.h +++ b/fs/isofs/rock.h @@ -66,7 +66,7 @@ struct RR_PL_s { };
struct stamp { - char time[7]; + __u8 time[7]; /* actually 6 unsigned, 1 signed */ } __attribute__ ((packed));
struct RR_TF_s { --- a/fs/isofs/util.c +++ b/fs/isofs/util.c @@ -16,7 +16,7 @@ * to GMT. Thus we should always be correct. */
-int iso_date(char * p, int flag) +int iso_date(u8 *p, int flag) { int year, month, day, hour, minute, second, tz; int crtime;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Josef Bacik jbacik@fb.com
commit 996478ca9c460886ac147eb0d00e99841b71d31b upstream.
Nikolay reported that generic/273 was currently failing with ENOSPC. It turns out this is because we get to the point where the outstanding reservations are greater than the pinned space on the fs. This is a mistake; previously we used the current reservation amount in may_commit_transaction, not the entire outstanding reservation amount. Fix this by finding the minimum byte size needed to make progress in flushing, and pass that into may_commit_transaction. From there we can make a smarter decision on whether to commit the transaction or not. This fixes the failure in generic/273.
From Nikolai, IOW: when we go to the final stage of deciding whether to do the trans commit, instead of passing all the reservations from all tickets we just pass the reservation for the current ticket. Otherwise, in case all reservations exceed pinned space, we don't commit the transaction and fail prematurely. Before we passed num_bytes from flush_space, where num_bytes was the sum of all pending reservations, but now all we do is take the first ticket and commit the trans if we can satisfy that.
Fixes: 957780eb2788 ("Btrfs: introduce ticketed enospc infrastructure") Reported-by: Nikolay Borisov nborisov@suse.com Signed-off-by: Josef Bacik jbacik@fb.com Reviewed-by: Nikolay Borisov nborisov@suse.com Tested-by: Nikolay Borisov nborisov@suse.com [ added Nikolai's comment ] Signed-off-by: David Sterba dsterba@suse.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/btrfs/extent-tree.c | 42 ++++++++++++++++++++++++++++-------------- 1 file changed, 28 insertions(+), 14 deletions(-)
--- a/fs/btrfs/extent-tree.c +++ b/fs/btrfs/extent-tree.c @@ -4919,6 +4919,13 @@ skip_async: } }
+struct reserve_ticket { + u64 bytes; + int error; + struct list_head list; + wait_queue_head_t wait; +}; + /** * maybe_commit_transaction - possibly commit the transaction if its ok to * @root - the root we're allocating for @@ -4930,18 +4937,29 @@ skip_async: * will return -ENOSPC. */ static int may_commit_transaction(struct btrfs_fs_info *fs_info, - struct btrfs_space_info *space_info, - u64 bytes, int force) + struct btrfs_space_info *space_info) { + struct reserve_ticket *ticket = NULL; struct btrfs_block_rsv *delayed_rsv = &fs_info->delayed_block_rsv; struct btrfs_trans_handle *trans; + u64 bytes;
trans = (struct btrfs_trans_handle *)current->journal_info; if (trans) return -EAGAIN;
- if (force) - goto commit; + spin_lock(&space_info->lock); + if (!list_empty(&space_info->priority_tickets)) + ticket = list_first_entry(&space_info->priority_tickets, + struct reserve_ticket, list); + else if (!list_empty(&space_info->tickets)) + ticket = list_first_entry(&space_info->tickets, + struct reserve_ticket, list); + bytes = (ticket) ? ticket->bytes : 0; + spin_unlock(&space_info->lock); + + if (!bytes) + return 0;
/* See if there is enough pinned space to make this reservation */ if (percpu_counter_compare(&space_info->total_bytes_pinned, @@ -4956,8 +4974,12 @@ static int may_commit_transaction(struct return -ENOSPC;
spin_lock(&delayed_rsv->lock); + if (delayed_rsv->size > bytes) + bytes = 0; + else + bytes -= delayed_rsv->size; if (percpu_counter_compare(&space_info->total_bytes_pinned, - bytes - delayed_rsv->size) < 0) { + bytes) < 0) { spin_unlock(&delayed_rsv->lock); return -ENOSPC; } @@ -4971,13 +4993,6 @@ commit: return btrfs_commit_transaction(trans); }
-struct reserve_ticket { - u64 bytes; - int error; - struct list_head list; - wait_queue_head_t wait; -}; - /* * Try to flush some data based on policy set by @state. This is only advisory * and may fail for various reasons. The caller is supposed to examine the @@ -5027,8 +5042,7 @@ static void flush_space(struct btrfs_fs_ ret = 0; break; case COMMIT_TRANS: - ret = may_commit_transaction(fs_info, space_info, - num_bytes, 0); + ret = may_commit_transaction(fs_info, space_info); break; default: ret = -ENOSPC;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jaegeuk Kim jaegeuk@kernel.org
commit 5b4267d195dd887c4412e34b5a7365baa741b679 upstream.
If there's some data written through inline data or dentry, we need to show st_blocks. This fixes reporting zero blocks even though there is a small amount of written data.
Reviewed-by: Chao Yu yuchao0@huawei.com [Jaegeuk Kim: avoid link file for quotacheck] Signed-off-by: Jaegeuk Kim jaegeuk@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/f2fs/file.c | 6 ++++++ 1 file changed, 6 insertions(+)
--- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -683,6 +683,12 @@ int f2fs_getattr(const struct path *path STATX_ATTR_NODUMP);
generic_fillattr(inode, stat); + + /* we need to show initial sectors used for inline_data/dentries */ + if ((S_ISREG(inode->i_mode) && f2fs_has_inline_data(inode)) || + f2fs_has_inline_dentry(inode)) + stat->blocks += (stat->size + 511) >> 9; + return 0; }
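For illustration, a small user-space sketch (not the kernel implementation) of the sector rounding the fix relies on: st_blocks is counted in 512-byte units, so even a few inline bytes should report at least one block:

#include <stdio.h>

/* Round a byte count up to 512-byte sectors, mirroring (size + 511) >> 9 */
static unsigned long long bytes_to_sectors(unsigned long long size)
{
	return (size + 511) >> 9;
}

int main(void)
{
	printf("%llu\n", bytes_to_sectors(0));		/* 0 */
	printf("%llu\n", bytes_to_sectors(100));	/* 1 */
	printf("%llu\n", bytes_to_sectors(513));	/* 2 */
	return 0;
}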
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Joshua Watt jpewhacker@gmail.com
commit f02fee227e5f21981152850744a6084ff3fa94ee upstream.
The option was incorrectly masking off all other options.
Signed-off-by: Joshua Watt JPEWhacker@gmail.com Signed-off-by: Anna Schumaker Anna.Schumaker@Netapp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfs/super.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/nfs/super.c +++ b/fs/nfs/super.c @@ -1332,7 +1332,7 @@ static int nfs_parse_mount_options(char mnt->options |= NFS_OPTION_MIGRATION; break; case Opt_nomigration: - mnt->options &= NFS_OPTION_MIGRATION; + mnt->options &= ~NFS_OPTION_MIGRATION; break;
/*
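For illustration, a tiny user-space sketch of the bug class (the flag values here are made up, not the real NFS ones): clearing a bit needs the complement mask, while '&=' with the bare flag wipes out every other option:

#include <assert.h>

#define OPT_MIGRATION	0x2	/* illustrative value only */
#define OPT_OTHER	0x4	/* hypothetical second option */

int main(void)
{
	unsigned int options = OPT_MIGRATION | OPT_OTHER;

	/* Buggy form: options &= OPT_MIGRATION; would drop OPT_OTHER too */
	options &= ~OPT_MIGRATION;	/* correct: clears only MIGRATION */
	assert(options == OPT_OTHER);
	return 0;
}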
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Coddington bcodding@redhat.com
commit fcfa447062b2061e11f68b846d61cbfe60d0d604 upstream.
Commit e12937279c8b "NFS: Move the flock open mode check into nfs_flock()" changed NFSv3 behavior for flock() such that the open mode must match the lock type; however, that requirement shouldn't be enforced for flock().
Signed-off-by: Benjamin Coddington bcodding@redhat.com Signed-off-by: Anna Schumaker Anna.Schumaker@Netapp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfs/file.c | 18 ++---------------- fs/nfs/nfs4proc.c | 14 ++++++++++++++ 2 files changed, 16 insertions(+), 16 deletions(-)
--- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -829,23 +829,9 @@ int nfs_flock(struct file *filp, int cmd if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FLOCK) is_local = 1;
- /* - * VFS doesn't require the open mode to match a flock() lock's type. - * NFS, however, may simulate flock() locking with posix locking which - * requires the open mode to match the lock type. - */ - switch (fl->fl_type) { - case F_UNLCK: + /* We're simulating flock() locks using posix locks on the server */ + if (fl->fl_type == F_UNLCK) return do_unlk(filp, cmd, fl, is_local); - case F_RDLCK: - if (!(filp->f_mode & FMODE_READ)) - return -EBADF; - break; - case F_WRLCK: - if (!(filp->f_mode & FMODE_WRITE)) - return -EBADF; - } - return do_setlk(filp, cmd, fl, is_local); } EXPORT_SYMBOL_GPL(nfs_flock); --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -6568,6 +6568,20 @@ nfs4_proc_lock(struct file *filp, int cm !test_bit(NFS_STATE_POSIX_LOCKS, &state->flags)) return -ENOLCK;
+ /* + * Don't rely on the VFS having checked the file open mode, + * since it won't do this for flock() locks. + */ + switch (request->fl_type) { + case F_RDLCK: + if (!(filp->f_mode & FMODE_READ)) + return -EBADF; + break; + case F_WRLCK: + if (!(filp->f_mode & FMODE_WRITE)) + return -EBADF; + } + status = nfs4_set_lock_state(state, request); if (status != 0) return status;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chuck Lever chuck.lever@oracle.com
commit c05cefcc72416a37eba5a2b35f0704ed758a9145 upstream.
Before traversing a referral and performing a mount, the mounted-on directory looks strange:
dr-xr-xr-x. 2 4294967294 4294967294 0 Dec 31 1969 dir.0
nfs4_get_referral is wiping out any cached attributes with what was returned via GETATTR(fs_locations), but the bit mask for that operation does not request any file attributes.
Retrieve owner and timestamp information so that the memcpy in nfs4_get_referral fills in more attributes.
Changes since v1:
- Don't request attributes that the client unconditionally replaces
- Request only MOUNTED_ON_FILEID or FILEID attribute, not both
- encode_fs_locations() doesn't use the third bitmask word
Fixes: 6b97fd3da1ea ("NFSv4: Follow a referral") Suggested-by: Pradeep Thomas pradeepthomas@gmail.com Signed-off-by: Chuck Lever chuck.lever@oracle.com Signed-off-by: Anna Schumaker Anna.Schumaker@Netapp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfs/nfs4proc.c | 18 ++++++++---------- 1 file changed, 8 insertions(+), 10 deletions(-)
--- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -254,15 +254,12 @@ const u32 nfs4_fsinfo_bitmap[3] = { FATT };
const u32 nfs4_fs_locations_bitmap[3] = { - FATTR4_WORD0_TYPE - | FATTR4_WORD0_CHANGE + FATTR4_WORD0_CHANGE | FATTR4_WORD0_SIZE | FATTR4_WORD0_FSID | FATTR4_WORD0_FILEID | FATTR4_WORD0_FS_LOCATIONS, - FATTR4_WORD1_MODE - | FATTR4_WORD1_NUMLINKS - | FATTR4_WORD1_OWNER + FATTR4_WORD1_OWNER | FATTR4_WORD1_OWNER_GROUP | FATTR4_WORD1_RAWDEV | FATTR4_WORD1_SPACE_USED @@ -6777,9 +6774,7 @@ static int _nfs4_proc_fs_locations(struc struct page *page) { struct nfs_server *server = NFS_SERVER(dir); - u32 bitmask[3] = { - [0] = FATTR4_WORD0_FSID | FATTR4_WORD0_FS_LOCATIONS, - }; + u32 bitmask[3]; struct nfs4_fs_locations_arg args = { .dir_fh = NFS_FH(dir), .name = name, @@ -6798,12 +6793,15 @@ static int _nfs4_proc_fs_locations(struc
dprintk("%s: start\n", __func__);
+ bitmask[0] = nfs4_fattr_bitmap[0] | FATTR4_WORD0_FS_LOCATIONS; + bitmask[1] = nfs4_fattr_bitmap[1]; + /* Ask for the fileid of the absent filesystem if mounted_on_fileid * is not supported */ if (NFS_SERVER(dir)->attr_bitmask[1] & FATTR4_WORD1_MOUNTED_ON_FILEID) - bitmask[1] |= FATTR4_WORD1_MOUNTED_ON_FILEID; + bitmask[0] &= ~FATTR4_WORD0_FILEID; else - bitmask[0] |= FATTR4_WORD0_FILEID; + bitmask[1] &= ~FATTR4_WORD1_MOUNTED_ON_FILEID;
nfs_fattr_init(&fs_locations->fattr); fs_locations->server = server;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Anna Schumaker Anna.Schumaker@Netapp.com
commit 3944369db701f075092357b511fd9f5755771585 upstream.
There isn't an obvious way to acquire and release the RCU lock during a tracepoint, so we can't use the rpc_peeraddr2str() function here. Instead, rely on the client's cl_hostname, which should have similar enough information without needing an rcu_dereference().
Reported-by: Dave Jones davej@codemonkey.org.uk Signed-off-by: Anna Schumaker Anna.Schumaker@Netapp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfs/nfs4trace.h | 24 ++++++------------------ 1 file changed, 6 insertions(+), 18 deletions(-)
--- a/fs/nfs/nfs4trace.h +++ b/fs/nfs/nfs4trace.h @@ -202,17 +202,13 @@ DECLARE_EVENT_CLASS(nfs4_clientid_event, TP_ARGS(clp, error),
TP_STRUCT__entry( - __string(dstaddr, - rpc_peeraddr2str(clp->cl_rpcclient, - RPC_DISPLAY_ADDR)) + __string(dstaddr, clp->cl_hostname) __field(int, error) ),
TP_fast_assign( __entry->error = error; - __assign_str(dstaddr, - rpc_peeraddr2str(clp->cl_rpcclient, - RPC_DISPLAY_ADDR)); + __assign_str(dstaddr, clp->cl_hostname); ),
TP_printk( @@ -1133,9 +1129,7 @@ DECLARE_EVENT_CLASS(nfs4_inode_callback_ __field(dev_t, dev) __field(u32, fhandle) __field(u64, fileid) - __string(dstaddr, clp ? - rpc_peeraddr2str(clp->cl_rpcclient, - RPC_DISPLAY_ADDR) : "unknown") + __string(dstaddr, clp ? clp->cl_hostname : "unknown") ),
TP_fast_assign( @@ -1148,9 +1142,7 @@ DECLARE_EVENT_CLASS(nfs4_inode_callback_ __entry->fileid = 0; __entry->dev = 0; } - __assign_str(dstaddr, clp ? - rpc_peeraddr2str(clp->cl_rpcclient, - RPC_DISPLAY_ADDR) : "unknown") + __assign_str(dstaddr, clp ? clp->cl_hostname : "unknown") ),
TP_printk( @@ -1192,9 +1184,7 @@ DECLARE_EVENT_CLASS(nfs4_inode_stateid_c __field(dev_t, dev) __field(u32, fhandle) __field(u64, fileid) - __string(dstaddr, clp ? - rpc_peeraddr2str(clp->cl_rpcclient, - RPC_DISPLAY_ADDR) : "unknown") + __string(dstaddr, clp ? clp->cl_hostname : "unknown") __field(int, stateid_seq) __field(u32, stateid_hash) ), @@ -1209,9 +1199,7 @@ DECLARE_EVENT_CLASS(nfs4_inode_stateid_c __entry->fileid = 0; __entry->dev = 0; } - __assign_str(dstaddr, clp ? - rpc_peeraddr2str(clp->cl_rpcclient, - RPC_DISPLAY_ADDR) : "unknown") + __assign_str(dstaddr, clp ? clp->cl_hostname : "unknown") __entry->stateid_seq = be32_to_cpu(stateid->seqid); __entry->stateid_hash =
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: NeilBrown neilb@suse.com
commit b688741cb06695312f18b730653d6611e1bad28d upstream.
For correct close-to-open semantics, NFS must validate the change attribute of a directory (or file) on open.
Since commit ecf3d1f1aa74 ("vfs: kill FS_REVAL_DOT by adding a d_weak_revalidate dentry op"), open() of "." or a path ending in ".." is not revalidated reliably (except when that directory is a mount point).
Prior to that commit, "." was revalidated using nfs_lookup_revalidate() which checks the LOOKUP_OPEN flag and forces revalidation if the flag is set. Since that commit, nfs_weak_revalidate() is used for NFSv3 (which ignores the flags) and nothing is used for NFSv4.
This is fixed by using nfs_lookup_verify_inode() in nfs_weak_revalidate(). This does the revalidation exactly when needed. Also, add a definition of .d_weak_revalidate for NFSv4.
The incorrect behavior is easily demonstrated by running "echo *" in some non-mountpoint NFS directory while watching network traffic. Without this patch, "echo *" sometimes doesn't produce any traffic. With the patch it always does.
Fixes: ecf3d1f1aa74 ("vfs: kill FS_REVAL_DOT by adding a d_weak_revalidate dentry op") Signed-off-by: NeilBrown neilb@suse.com Signed-off-by: Anna Schumaker Anna.Schumaker@Netapp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfs/dir.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/fs/nfs/dir.c +++ b/fs/nfs/dir.c @@ -1241,8 +1241,7 @@ static int nfs_weak_revalidate(struct de return 0; }
- if (nfs_mapping_need_revalidate_inode(inode)) - error = __nfs_revalidate_inode(NFS_SERVER(inode), inode); + error = nfs_lookup_verify_inode(inode, flags); dfprintk(LOOKUPCACHE, "NFS: %s: inode %lu is %s\n", __func__, inode->i_ino, error ? "invalid" : "valid"); return !error; @@ -1393,6 +1392,7 @@ static int nfs4_lookup_revalidate(struct
const struct dentry_operations nfs4_dentry_operations = { .d_revalidate = nfs4_lookup_revalidate, + .d_weak_revalidate = nfs_weak_revalidate, .d_delete = nfs_dentry_delete, .d_iput = nfs_dentry_iput, .d_automount = nfs_d_automount,
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrew Elble aweits@rit.edu
commit 95da1b3a5aded124dd1bda1e3cdb876184813140 upstream.
If a delegation has been revoked by the server, operations using that delegation should error out with NFS4ERR_DELEG_REVOKED in the >4.1 case, and NFS4ERR_BAD_STATEID otherwise.
The server needs NFSv4.1 clients to explicitly free revoked delegations. If the server returns NFS4ERR_DELEG_REVOKED, the client will do that; otherwise it may just forget about the delegation and be unable to recover when it later sees SEQ4_STATUS_RECALLABLE_STATE_REVOKED set on a SEQUENCE reply. That can cause the Linux 4.1 client to loop in its state manager.
Signed-off-by: Andrew Elble aweits@rit.edu Reviewed-by: Trond Myklebust trond.myklebust@primarydata.com Signed-off-by: J. Bruce Fields bfields@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/nfsd/nfs4state.c | 25 ++++++++++++++++++++++++- 1 file changed, 24 insertions(+), 1 deletion(-)
--- a/fs/nfsd/nfs4state.c +++ b/fs/nfsd/nfs4state.c @@ -3966,7 +3966,8 @@ static struct nfs4_delegation *find_dele { struct nfs4_stid *ret;
- ret = find_stateid_by_type(cl, s, NFS4_DELEG_STID); + ret = find_stateid_by_type(cl, s, + NFS4_DELEG_STID|NFS4_REVOKED_DELEG_STID); if (!ret) return NULL; return delegstateid(ret); @@ -3989,6 +3990,12 @@ nfs4_check_deleg(struct nfs4_client *cl, deleg = find_deleg_stateid(cl, &open->op_delegate_stateid); if (deleg == NULL) goto out; + if (deleg->dl_stid.sc_type == NFS4_REVOKED_DELEG_STID) { + nfs4_put_stid(&deleg->dl_stid); + if (cl->cl_minorversion) + status = nfserr_deleg_revoked; + goto out; + } flags = share_access_to_flags(open->op_share_access); status = nfs4_check_delegmode(deleg, flags); if (status) { @@ -4858,6 +4865,16 @@ nfsd4_lookup_stateid(struct nfsd4_compou struct nfs4_stid **s, struct nfsd_net *nn) { __be32 status; + bool return_revoked = false; + + /* + * only return revoked delegations if explicitly asked. + * otherwise we report revoked or bad_stateid status. + */ + if (typemask & NFS4_REVOKED_DELEG_STID) + return_revoked = true; + else if (typemask & NFS4_DELEG_STID) + typemask |= NFS4_REVOKED_DELEG_STID;
if (ZERO_STATEID(stateid) || ONE_STATEID(stateid)) return nfserr_bad_stateid; @@ -4872,6 +4889,12 @@ nfsd4_lookup_stateid(struct nfsd4_compou *s = find_stateid_by_type(cstate->clp, stateid, typemask); if (!*s) return nfserr_bad_stateid; + if (((*s)->sc_type == NFS4_REVOKED_DELEG_STID) && !return_revoked) { + nfs4_put_stid(*s); + if (cstate->minorversion) + return nfserr_deleg_revoked; + return nfserr_bad_stateid; + } return nfs_ok; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Larry Finger Larry.Finger@lwfinger.net
commit 519ce2f933fa14acf69d5c8cabcc18711943d629 upstream.
In routine rtl92ee_set_fw_rsvdpagepkt(), the driver allocates an skb, but never calls rtl_cmd_send_packet(), which will free the buffer. All other rtlwifi drivers perform this operation correctly.
This problem has been in the driver since it was included in the kernel. Fortunately, each firmware load only leaks 4 buffers, which likely explains why it has not previously been detected.
Signed-off-by: Larry Finger Larry.Finger@lwfinger.net Signed-off-by: Kalle Valo kvalo@codeaurora.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/realtek/rtlwifi/rtl8192ee/fw.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/fw.c +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/fw.c @@ -682,7 +682,7 @@ void rtl92ee_set_fw_rsvdpagepkt(struct i struct rtl_priv *rtlpriv = rtl_priv(hw); struct rtl_mac *mac = rtl_mac(rtl_priv(hw)); struct sk_buff *skb = NULL; - + bool rtstatus; u32 totalpacketlen; u8 u1rsvdpageloc[5] = { 0 }; bool b_dlok = false; @@ -768,7 +768,9 @@ void rtl92ee_set_fw_rsvdpagepkt(struct i skb = dev_alloc_skb(totalpacketlen); skb_put_data(skb, &reserved_page_packet, totalpacketlen);
- b_dlok = true; + rtstatus = rtl_cmd_send_packet(hw, skb); + if (rtstatus) + b_dlok = true;
if (b_dlok) { RT_TRACE(rtlpriv, COMP_POWER, DBG_LOUD ,
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Arnd Bergmann arnd@arndb.de
commit 3f2a162fab15aee243178b5308bb5d1206fc4043 upstream.
We set rtlhal->last_suspend_sec to an uninitialized stack variable, but unfortunately gcc never warned about this; I only found it while working on another patch. I opened a gcc bug for this.
Presumably the value of rtlhal->last_suspend_sec is not all that important, but it does get used, so we probably want the patch backported to stable kernels.
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82839 Signed-off-by: Arnd Bergmann arnd@arndb.de Acked-by: Larry Finger Larry.Finger@lwfinger.net Signed-off-by: Kalle Valo kvalo@codeaurora.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c @@ -1372,6 +1372,7 @@ static void _rtl8821ae_get_wakeup_reason
ppsc->wakeup_reason = 0;
+ do_gettimeofday(&ts); rtlhal->last_suspend_sec = ts.tv_sec;
switch (fw_reason) {
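For illustration, a minimal user-space sketch (not the driver code) of the ordering problem: the value must be filled in before it is read, otherwise whatever happens to be on the stack gets stored:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
	struct timeval ts;	/* uninitialized, as in the original bug */
	long last_suspend_sec;

	/* The buggy order read ts.tv_sec here, before it was set. */
	gettimeofday(&ts, NULL);	/* fixed order: fill in 'ts' first */
	last_suspend_sec = (long)ts.tv_sec;

	printf("%ld\n", last_suspend_sec);
	return 0;
}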
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Backlund tmb@mageia.org
commit c2c48ddfc8b03b9ecb51d2832b586497b37531bc upstream.
iwlwifi 9000 and a000 series hw have an extra dash in the firmware file names, as seen in modinfo output for kernel 4.14:
firmware: iwlwifi-9260-th-b0-jf-b0--34.ucode
firmware: iwlwifi-9260-th-a0-jf-a0--34.ucode
firmware: iwlwifi-9000-pu-a0-jf-b0--34.ucode
firmware: iwlwifi-9000-pu-a0-jf-a0--34.ucode
firmware: iwlwifi-QuQnj-a0-hr-a0--34.ucode
firmware: iwlwifi-QuQnj-a0-jf-b0--34.ucode
firmware: iwlwifi-QuQnj-f0-hr-a0--34.ucode
firmware: iwlwifi-Qu-a0-jf-b0--34.ucode
firmware: iwlwifi-Qu-a0-hr-a0--34.ucode
Fix that by dropping the extra '"-"' that was being added.
Signed-off-by: Thomas Backlund tmb@mageia.org Signed-off-by: Luca Coelho luciano.coelho@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/intel/iwlwifi/cfg/9000.c | 6 +++--- drivers/net/wireless/intel/iwlwifi/cfg/a000.c | 10 +++++----- 2 files changed, 8 insertions(+), 8 deletions(-)
--- a/drivers/net/wireless/intel/iwlwifi/cfg/9000.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/9000.c @@ -79,11 +79,11 @@ #define IWL9000_MODULE_FIRMWARE(api) \ IWL9000_FW_PRE "-" __stringify(api) ".ucode" #define IWL9000RFB_MODULE_FIRMWARE(api) \ - IWL9000RFB_FW_PRE "-" __stringify(api) ".ucode" + IWL9000RFB_FW_PRE __stringify(api) ".ucode" #define IWL9260A_MODULE_FIRMWARE(api) \ - IWL9260A_FW_PRE "-" __stringify(api) ".ucode" + IWL9260A_FW_PRE __stringify(api) ".ucode" #define IWL9260B_MODULE_FIRMWARE(api) \ - IWL9260B_FW_PRE "-" __stringify(api) ".ucode" + IWL9260B_FW_PRE __stringify(api) ".ucode"
#define NVM_HW_SECTION_NUM_FAMILY_9000 10
--- a/drivers/net/wireless/intel/iwlwifi/cfg/a000.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/a000.c @@ -80,15 +80,15 @@ #define IWL_A000_HR_A0_FW_PRE "iwlwifi-QuQnj-a0-hr-a0-"
#define IWL_A000_HR_MODULE_FIRMWARE(api) \ - IWL_A000_HR_FW_PRE "-" __stringify(api) ".ucode" + IWL_A000_HR_FW_PRE __stringify(api) ".ucode" #define IWL_A000_JF_MODULE_FIRMWARE(api) \ - IWL_A000_JF_FW_PRE "-" __stringify(api) ".ucode" + IWL_A000_JF_FW_PRE __stringify(api) ".ucode" #define IWL_A000_HR_F0_QNJ_MODULE_FIRMWARE(api) \ - IWL_A000_HR_F0_FW_PRE "-" __stringify(api) ".ucode" + IWL_A000_HR_F0_FW_PRE __stringify(api) ".ucode" #define IWL_A000_JF_B0_QNJ_MODULE_FIRMWARE(api) \ - IWL_A000_JF_B0_FW_PRE "-" __stringify(api) ".ucode" + IWL_A000_JF_B0_FW_PRE __stringify(api) ".ucode" #define IWL_A000_HR_A0_QNJ_MODULE_FIRMWARE(api) \ - IWL_A000_HR_A0_FW_PRE "-" __stringify(api) ".ucode" + IWL_A000_HR_A0_FW_PRE __stringify(api) ".ucode"
#define NVM_HW_SECTION_NUM_FAMILY_A000 10
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: NeilBrown neilb@suse.com
commit d47c8ad261f787af22a220ffcc2d07afba809223 upstream.
A recent patch aimed to cause md_write_start() to fail (rather than block) when the mddev was suspending, so as to avoid deadlocks. Unfortunately the test in wait_event() was wrong, and it didn't change behaviour at all.
The wait_event() must wait until the metadata is written OR the array is suspending.
Fixes: cc27b0c78c79 ("md: fix deadlock between mddev_suspend() and md_write_start()") Reported-by: Xiao Ni xni@redhat.com Signed-off-by: NeilBrown neilb@suse.com Signed-off-by: Shaohua Li shli@fb.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/md.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -8039,7 +8039,8 @@ bool md_write_start(struct mddev *mddev, if (did_change) sysfs_notify_dirent_safe(mddev->sysfs_state); wait_event(mddev->sb_wait, - !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags) && !mddev->suspended); + !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags) || + mddev->suspended); if (test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)) { percpu_ref_put(&mddev->writes_pending); return false;
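For illustration, a boolean user-space sketch (not the md code) of why the original condition never changed behaviour: wait_event() stops sleeping once its condition is true, and the intent is to stop when the write completes OR the array is suspending, but the '&&' form only stops when both hold:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	bool change_pending = true;	/* metadata write not finished */
	bool suspended = true;		/* array is suspending */

	/* Old condition: false here, so the caller keeps sleeping and
	 * can deadlock against mddev_suspend(). */
	bool old_cond = !change_pending && !suspended;

	/* Fixed condition: true here, so md_write_start() wakes up and
	 * can bail out because the array is suspended. */
	bool new_cond = !change_pending || suspended;

	printf("old=%d new=%d\n", old_cond, new_cond);
	return 0;
}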
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Artur Paszkiewicz artur.paszkiewicz@intel.com
commit b90f6ff080c52e2f05364210733df120e3c4e597 upstream.
Only MD_SB_CHANGE_PENDING should be used to wait for transition from clean to dirty. Checking also MD_SB_CHANGE_CLEAN is unnecessary and can race with e.g. md_do_sync(). This sporadically causes a hang when changing consistency policy during resync:
INFO: task mdadm:6183 blocked for more than 30 seconds.
      Not tainted 4.14.0-rc3+ #391
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
mdadm           D12752  6183   6022 0x00000000
Call Trace:
 __schedule+0x93f/0x990
 schedule+0x6b/0x90
 md_allow_write+0x100/0x130 [md_mod]
 ? do_wait_intr_irq+0x90/0x90
 resize_stripes+0x3a/0x5b0 [raid456]
 ? kernfs_fop_write+0xbe/0x180
 raid5_change_consistency_policy+0xa6/0x200 [raid456]
 consistency_policy_store+0x2e/0x70 [md_mod]
 md_attr_store+0x90/0xc0 [md_mod]
 sysfs_kf_write+0x42/0x50
 kernfs_fop_write+0x119/0x180
 __vfs_write+0x28/0x110
 ? rcu_sync_lockdep_assert+0x12/0x60
 ? __sb_start_write+0x15a/0x1c0
 ? vfs_write+0xa3/0x1a0
 vfs_write+0xb4/0x1a0
 SyS_write+0x49/0xa0
 entry_SYSCALL_64_fastpath+0x18/0xad
Fixes: 2214c260c72b ("md: don't return -EAGAIN in md_allow_write for external metadata arrays") Signed-off-by: Artur Paszkiewicz artur.paszkiewicz@intel.com Signed-off-by: Shaohua Li shli@fb.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/md.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -8111,7 +8111,6 @@ void md_allow_write(struct mddev *mddev) sysfs_notify_dirent_safe(mddev->sysfs_state); /* wait for the dirty state to be recorded in the metadata */ wait_event(mddev->sb_wait, - !test_bit(MD_SB_CHANGE_CLEAN, &mddev->sb_flags) && !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)); } else spin_unlock(&mddev->lock);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Loic Poulain loic.poulain@linaro.org
commit 6e518111060c2290427d79c43d4add9600ad852b upstream.
This patch implements the hdev setup function since wcnss-bt does not have persistent memory to store an allocated BD address. The device is therefore marked as unconfigured if no BD address has been previously retrieved.
Signed-off-by: Loic Poulain loic.poulain@linaro.org Signed-off-by: Marcel Holtmann marcel@holtmann.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/bluetooth/btqcomsmd.c | 34 ++++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+)
--- a/drivers/bluetooth/btqcomsmd.c +++ b/drivers/bluetooth/btqcomsmd.c @@ -26,6 +26,7 @@ struct btqcomsmd { struct hci_dev *hdev;
+ bdaddr_t bdaddr; struct rpmsg_endpoint *acl_channel; struct rpmsg_endpoint *cmd_channel; }; @@ -100,6 +101,38 @@ static int btqcomsmd_close(struct hci_de return 0; }
+static int btqcomsmd_setup(struct hci_dev *hdev) +{ + struct btqcomsmd *btq = hci_get_drvdata(hdev); + struct sk_buff *skb; + int err; + + skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT); + if (IS_ERR(skb)) + return PTR_ERR(skb); + kfree_skb(skb); + + /* Devices do not have persistent storage for BD address. If no + * BD address has been retrieved during probe, mark the device + * as having an invalid BD address. + */ + if (!bacmp(&btq->bdaddr, BDADDR_ANY)) { + set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks); + return 0; + } + + /* When setting a configured BD address fails, mark the device + * as having an invalid BD address. + */ + err = qca_set_bdaddr_rome(hdev, &btq->bdaddr); + if (err) { + set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks); + return 0; + } + + return 0; +} + static int btqcomsmd_probe(struct platform_device *pdev) { struct btqcomsmd *btq; @@ -135,6 +168,7 @@ static int btqcomsmd_probe(struct platfo hdev->open = btqcomsmd_open; hdev->close = btqcomsmd_close; hdev->send = btqcomsmd_send; + hdev->setup = btqcomsmd_setup; hdev->set_bdaddr = qca_set_bdaddr_rome;
ret = hci_register_dev(hdev);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Shaohua Li shli@fb.com
commit 938b533d479e7428b7fa1b8179283646d2e2c53d upstream.
This reverts commit 8031c3ddc70a. That patch doesn't work well if PAGE_SIZE > 4k. We will fix the original problem with a different approach.
Fixes: 8031c3ddc70a ("md/bitmap: copy correct data for bitmap super") Reported-by: Joshua Kinard kumba@gentoo.org Suggested-by: Neil Brown neilb@suse.com Signed-off-by: Shaohua Li shli@fb.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/bitmap.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/md/bitmap.c +++ b/drivers/md/bitmap.c @@ -625,7 +625,7 @@ re_read: err = read_sb_page(bitmap->mddev, offset, sb_page, - 0, PAGE_SIZE); + 0, sizeof(bitmap_super_t)); } if (err) return err; @@ -2123,7 +2123,7 @@ int bitmap_resize(struct bitmap *bitmap, if (store.sb_page && bitmap->storage.sb_page) memcpy(page_address(store.sb_page), page_address(bitmap->storage.sb_page), - PAGE_SIZE); + sizeof(bitmap_super_t)); bitmap_file_unmap(&bitmap->storage); bitmap->storage = store;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miklos Szeredi mszeredi@redhat.com
commit 24c20305c7fc8959836211cb8c50aab93ae0e54f upstream.
This patch doesn't actually fix any bug, just paves the way for fixing mark and group pinning.
Reviewed-by: Amir Goldstein amir73il@gmail.com Signed-off-by: Miklos Szeredi mszeredi@redhat.com Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/notify/mark.c | 98 +++++++++++++++++++++++++++---------------------------- 1 file changed, 49 insertions(+), 49 deletions(-)
--- a/fs/notify/mark.c +++ b/fs/notify/mark.c @@ -109,16 +109,6 @@ void fsnotify_get_mark(struct fsnotify_m atomic_inc(&mark->refcnt); }
-/* - * Get mark reference when we found the mark via lockless traversal of object - * list. Mark can be already removed from the list by now and on its way to be - * destroyed once SRCU period ends. - */ -static bool fsnotify_get_mark_safe(struct fsnotify_mark *mark) -{ - return atomic_inc_not_zero(&mark->refcnt); -} - static void __fsnotify_recalc_mask(struct fsnotify_mark_connector *conn) { u32 new_mask = 0; @@ -256,32 +246,63 @@ void fsnotify_put_mark(struct fsnotify_m FSNOTIFY_REAPER_DELAY); }
-bool fsnotify_prepare_user_wait(struct fsnotify_iter_info *iter_info) +/* + * Get mark reference when we found the mark via lockless traversal of object + * list. Mark can be already removed from the list by now and on its way to be + * destroyed once SRCU period ends. + * + * Also pin the group so it doesn't disappear under us. + */ +static bool fsnotify_get_mark_safe(struct fsnotify_mark *mark) { struct fsnotify_group *group;
- if (WARN_ON_ONCE(!iter_info->inode_mark && !iter_info->vfsmount_mark)) - return false; - - if (iter_info->inode_mark) - group = iter_info->inode_mark->group; - else - group = iter_info->vfsmount_mark->group; + if (!mark) + return true;
+ group = mark->group; /* * Since acquisition of mark reference is an atomic op as well, we can * be sure this inc is seen before any effect of refcount increment. */ atomic_inc(&group->user_waits); + if (atomic_inc_not_zero(&mark->refcnt)) + return true; + + if (atomic_dec_and_test(&group->user_waits) && group->shutdown) + wake_up(&group->notification_waitq); + + return false; +} + +/* + * Puts marks and wakes up group destruction if necessary. + * + * Pairs with fsnotify_get_mark_safe() + */ +static void fsnotify_put_mark_wake(struct fsnotify_mark *mark) +{ + if (mark) { + struct fsnotify_group *group = mark->group;
- if (iter_info->inode_mark) { - /* This can fail if mark is being removed */ - if (!fsnotify_get_mark_safe(iter_info->inode_mark)) - goto out_wait; - } - if (iter_info->vfsmount_mark) { - if (!fsnotify_get_mark_safe(iter_info->vfsmount_mark)) - goto out_inode; + fsnotify_put_mark(mark); + /* + * We abuse notification_waitq on group shutdown for waiting for + * all marks pinned when waiting for userspace. + */ + if (atomic_dec_and_test(&group->user_waits) && group->shutdown) + wake_up(&group->notification_waitq); + } +} + +bool fsnotify_prepare_user_wait(struct fsnotify_iter_info *iter_info) +{ + /* This can fail if mark is being removed */ + if (!fsnotify_get_mark_safe(iter_info->inode_mark)) + return false; + if (!fsnotify_get_mark_safe(iter_info->vfsmount_mark)) { + fsnotify_put_mark_wake(iter_info->inode_mark); + return false; }
/* @@ -292,34 +313,13 @@ bool fsnotify_prepare_user_wait(struct f srcu_read_unlock(&fsnotify_mark_srcu, iter_info->srcu_idx);
return true; -out_inode: - if (iter_info->inode_mark) - fsnotify_put_mark(iter_info->inode_mark); -out_wait: - if (atomic_dec_and_test(&group->user_waits) && group->shutdown) - wake_up(&group->notification_waitq); - return false; }
void fsnotify_finish_user_wait(struct fsnotify_iter_info *iter_info) { - struct fsnotify_group *group = NULL; - iter_info->srcu_idx = srcu_read_lock(&fsnotify_mark_srcu); - if (iter_info->inode_mark) { - group = iter_info->inode_mark->group; - fsnotify_put_mark(iter_info->inode_mark); - } - if (iter_info->vfsmount_mark) { - group = iter_info->vfsmount_mark->group; - fsnotify_put_mark(iter_info->vfsmount_mark); - } - /* - * We abuse notification_waitq on group shutdown for waiting for all - * marks pinned when waiting for userspace. - */ - if (atomic_dec_and_test(&group->user_waits) && group->shutdown) - wake_up(&group->notification_waitq); + fsnotify_put_mark_wake(iter_info->inode_mark); + fsnotify_put_mark_wake(iter_info->vfsmount_mark); }
/*
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miklos Szeredi mszeredi@redhat.com
commit 0d6ec079d6aaa098b978d6395973bb027c752a03 upstream.
We may fail to pin one of the marks in fsnotify_prepare_user_wait() when dropping the srcu read lock, resulting in use after free at the next iteration.
Solution is to store both marks in iter_info instead of just the one we'll be sending the event for.
Reviewed-by: Amir Goldstein amir73il@gmail.com Signed-off-by: Miklos Szeredi mszeredi@redhat.com Fixes: 9385a84d7e1f ("fsnotify: Pass fsnotify_iter_info into handle_event handler") Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/notify/fsnotify.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-)
--- a/fs/notify/fsnotify.c +++ b/fs/notify/fsnotify.c @@ -335,6 +335,13 @@ int fsnotify(struct inode *to_tell, __u3 struct fsnotify_mark, obj_list); vfsmount_group = vfsmount_mark->group; } + /* + * Need to protect both marks against freeing so that we can + * continue iteration from this place, regardless of which mark + * we actually happen to send an event for. + */ + iter_info.inode_mark = inode_mark; + iter_info.vfsmount_mark = vfsmount_mark;
if (inode_group && vfsmount_group) { int cmp = fsnotify_compare_groups(inode_group, @@ -348,9 +355,6 @@ int fsnotify(struct inode *to_tell, __u3 } }
- iter_info.inode_mark = inode_mark; - iter_info.vfsmount_mark = vfsmount_mark; - ret = send_to_group(to_tell, inode_mark, vfsmount_mark, mask, data, data_is, cookie, file_name, &iter_info);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Miklos Szeredi mszeredi@redhat.com
commit 9a31d7ad997f55768c687974ce36b759065b49e5 upstream.
Blind increment of the group's user_waits is not enough; we could be far enough in the group's destruction that it isn't taken into account (i.e. grabbing the mark ref afterwards doesn't guarantee that it was the ref coming from the _group_ that was grabbed).
Instead we need to check (under lock) that the mark is still attached to the group after having obtained a ref to the mark. If not, skip it.
Reviewed-by: Amir Goldstein amir73il@gmail.com Signed-off-by: Miklos Szeredi mszeredi@redhat.com Fixes: 9385a84d7e1f ("fsnotify: Pass fsnotify_iter_info into handle_event handler") Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/notify/mark.c | 25 +++++++++++-------------- 1 file changed, 11 insertions(+), 14 deletions(-)
--- a/fs/notify/mark.c +++ b/fs/notify/mark.c @@ -255,23 +255,20 @@ void fsnotify_put_mark(struct fsnotify_m */ static bool fsnotify_get_mark_safe(struct fsnotify_mark *mark) { - struct fsnotify_group *group; - if (!mark) return true;
- group = mark->group; - /* - * Since acquisition of mark reference is an atomic op as well, we can - * be sure this inc is seen before any effect of refcount increment. - */ - atomic_inc(&group->user_waits); - if (atomic_inc_not_zero(&mark->refcnt)) - return true; - - if (atomic_dec_and_test(&group->user_waits) && group->shutdown) - wake_up(&group->notification_waitq); - + if (atomic_inc_not_zero(&mark->refcnt)) { + spin_lock(&mark->lock); + if (mark->flags & FSNOTIFY_MARK_FLAG_ATTACHED) { + /* mark is attached, group is still alive then */ + atomic_inc(&mark->group->user_waits); + spin_unlock(&mark->lock); + return true; + } + spin_unlock(&mark->lock); + fsnotify_put_mark(mark); + } return false; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Rameshwar Prasad Sahu rsahu@apm.com
commit f1601113ddc0339a745e702f4fb1ca37d4875e65 upstream.
When tracing ata link error event, the kernel crashes when the disk is removed due to NULL pointer access by trace_ata_eh_link_autopsy API. This occurs as the dev is NULL when the disk disappeared. This patch fixes this crash by calling trace_ata_eh_link_autopsy only if "dev" is not NULL.
v2 changes: Removed the direct passing of the "link" pointer instead of "dev" in the trace API.
Signed-off-by: Rameshwar Prasad Sahu rsahu@apm.com Signed-off-by: Tejun Heo tj@kernel.org Fixes: 255c03d15a29 ("libata: Add tracepoints") Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/ata/libata-eh.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/ata/libata-eh.c +++ b/drivers/ata/libata-eh.c @@ -2264,8 +2264,8 @@ static void ata_eh_link_autopsy(struct a if (dev->flags & ATA_DFLAG_DUBIOUS_XFER) eflags |= ATA_EFLAG_DUBIOUS_XFER; ehc->i.action |= ata_eh_speed_down(dev, eflags, all_err_mask); + trace_ata_eh_link_autopsy(dev, ehc->i.action, all_err_mask); } - trace_ata_eh_link_autopsy(dev, ehc->i.action, all_err_mask); DPRINTK("EXIT\n"); }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Theodore Ts'o tytso@mit.edu
commit 51e3ae81ec58e95f10a98ef3dd6d7bce5d8e35a2 upstream.
If there are pending writes subject to delayed allocation, then i_size will show size after the writes have completed, while i_disksize contains the value of i_size on the disk (since the writes have not been persisted to disk).
If fallocate(2) is called with the FALLOC_FL_KEEP_SIZE flag, either with or without the FALLOC_FL_ZERO_RANGE flag set, and the new size after the fallocate(2) is between i_size and i_disksize, then after a crash, if a journal commit has resulted in the changes made by the fallocate() call to be persisted after a crash, but the delayed allocation write has not resolved itself, i_size would not be updated, and this would cause the following e2fsck complaint:
Inode 12, end of extent exceeds allowed value (logical block 33, physical block 33441, len 7)
This can only take place on a sparse file, where the fallocate(2) call is allocating blocks in a range which is before a pending delayed allocation write which is extending i_size. Since this situation is quite rare, and the window in which the crash must take place is typically < 30 seconds, in practice this condition will rarely happen.
Nevertheless, it can be triggered in testing, and in particular by xfstests generic/456.
Signed-off-by: Theodore Ts'o tytso@mit.edu Reported-by: Amir Goldstein amir73il@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/extents.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -4794,7 +4794,8 @@ static long ext4_zero_range(struct file }
if (!(mode & FALLOC_FL_KEEP_SIZE) && - offset + len > i_size_read(inode)) { + (offset + len > i_size_read(inode) || + offset + len > EXT4_I(inode)->i_disksize)) { new_size = offset + len; ret = inode_newsize_ok(inode, new_size); if (ret) @@ -4965,7 +4966,8 @@ long ext4_fallocate(struct file *file, i }
if (!(mode & FALLOC_FL_KEEP_SIZE) && - offset + len > i_size_read(inode)) { + (offset + len > i_size_read(inode) || + offset + len > EXT4_I(inode)->i_disksize)) { new_size = offset + len; ret = inode_newsize_ok(inode, new_size); if (ret)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ross Zwisler ross.zwisler@linux.intel.com
commit 559db4c6d784ceedc2a5418ced4d357cb843e221 upstream.
If an inode has inline data it is currently prevented from using DAX by a check in ext4_set_inode_flags(). When the inode grows inline data via ext4_create_inline_data() or removes its inline data via ext4_destroy_inline_data_nolock(), the value of S_DAX can change.
Currently these changes are unsafe because we don't hold off page faults and I/O, write back dirty radix tree entries and invalidate all mappings. There are also issues with mm-level races when changing the value of S_DAX, as well as issues with the VM_MIXEDMAP flag:
https://www.spinics.net/lists/linux-xfs/msg09859.html
The unsafe transition of S_DAX can reliably cause data corruption, as shown by the following fstest:
https://patchwork.kernel.org/patch/9948381/
Fix this issue by preventing the DAX mount option from being used on filesystems that were created to support inline data. Inline data is an option given to mkfs.ext4.
Signed-off-by: Ross Zwisler ross.zwisler@linux.intel.com Signed-off-by: Theodore Ts'o tytso@mit.edu Reviewed-by: Jan Kara jack@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/inline.c | 10 ---------- fs/ext4/super.c | 5 +++++ 2 files changed, 5 insertions(+), 10 deletions(-)
--- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -302,11 +302,6 @@ static int ext4_create_inline_data(handl EXT4_I(inode)->i_inline_size = len + EXT4_MIN_INLINE_DATA_SIZE; ext4_clear_inode_flag(inode, EXT4_INODE_EXTENTS); ext4_set_inode_flag(inode, EXT4_INODE_INLINE_DATA); - /* - * Propagate changes to inode->i_flags as well - e.g. S_DAX may - * get cleared - */ - ext4_set_inode_flags(inode); get_bh(is.iloc.bh); error = ext4_mark_iloc_dirty(handle, inode, &is.iloc);
@@ -451,11 +446,6 @@ static int ext4_destroy_inline_data_nolo } } ext4_clear_inode_flag(inode, EXT4_INODE_INLINE_DATA); - /* - * Propagate changes to inode->i_flags as well - e.g. S_DAX may - * get set. - */ - ext4_set_inode_flags(inode);
get_bh(is.iloc.bh); error = ext4_mark_iloc_dirty(handle, inode, &is.iloc); --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -3708,6 +3708,11 @@ static int ext4_fill_super(struct super_ }
if (sbi->s_mount_opt & EXT4_MOUNT_DAX) { + if (ext4_has_feature_inline_data(sb)) { + ext4_msg(sb, KERN_ERR, "Cannot use DAX on a filesystem" + " that may contain inline data"); + goto failed_mount; + } err = bdev_dax_supported(sb, blocksize); if (err) goto failed_mount;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ross Zwisler ross.zwisler@linux.intel.com
commit e9072d859df3e0f2c3ba450f0d1739595c2d5d13 upstream.
The current code has the potential for data corruption when changing an inode's journaling mode, as that can result in a subsequent unsafe change in S_DAX.
I've captured an instance of this data corruption in the following fstest:
https://patchwork.kernel.org/patch/9948377/
Prevent this data corruption from happening by disallowing changes to the journaling mode if the '-o dax' mount option was used. This means that for a given filesystem we could have a mix of inodes using either DAX or data journaling, but whatever state the inodes are in will be held for the duration of the mount.
Signed-off-by: Ross Zwisler ross.zwisler@linux.intel.com Signed-off-by: Theodore Ts'o tytso@mit.edu Reviewed-by: Jan Kara jack@suse.cz Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/ext4/inode.c | 5 ----- fs/ext4/ioctl.c | 16 +++++++++++++--- 2 files changed, 13 insertions(+), 8 deletions(-)
--- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -5967,11 +5967,6 @@ int ext4_change_inode_journal_flag(struc ext4_clear_inode_flag(inode, EXT4_INODE_JOURNAL_DATA); } ext4_set_aops(inode); - /* - * Update inode->i_flags after EXT4_INODE_JOURNAL_DATA was updated. - * E.g. S_DAX may get cleared / set. - */ - ext4_set_inode_flags(inode);
jbd2_journal_unlock_updates(journal); percpu_up_write(&sbi->s_journal_flag_rwsem); --- a/fs/ext4/ioctl.c +++ b/fs/ext4/ioctl.c @@ -291,10 +291,20 @@ flags_err: if (err) goto flags_out;
- if ((jflag ^ oldflags) & (EXT4_JOURNAL_DATA_FL)) + if ((jflag ^ oldflags) & (EXT4_JOURNAL_DATA_FL)) { + /* + * Changes to the journaling mode can cause unsafe changes to + * S_DAX if we are using the DAX mount option. + */ + if (test_opt(inode->i_sb, DAX)) { + err = -EBUSY; + goto flags_out; + } + err = ext4_change_inode_journal_flag(inode, jflag); - if (err) - goto flags_out; + if (err) + goto flags_out; + } if (migrate) { if (flags & EXT4_EXTENTS_FL) err = ext4_ext_migrate(inode);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Henrik Eriksson henrik.eriksson@axis.com
commit 20e3f985bb875fea4f86b04eba4b6cc29bfd6b71 upstream.
commit 3179f6200188 ("ALSA: core: add .get_time_info") had a side effect of changing the behaviour of the PCM runtime tstamp. Prior to this change tstamp was not updated by snd_pcm_update_hw_ptr0() unless the hw_ptr had moved; after this change tstamp was always updated.
For an application using alsa-lib, doing snd_pcm_readi() followed by snd_pcm_status() to estimate the age of the read samples by subtracting status->avail * [sample rate] from status->tstamp this change degraded the accuracy of the estimate on devices where the pcm hw does not provide a granular hw_ptr, e.g., devices using soc-generic-dmaengine-pcm.c and a dma-engine with residue_granularity DMA_RESIDUE_GRANULARITY_DESCRIPTOR. The accuracy of the estimate depended on the latency between the PCM hw completing a period and the driver called snd_pcm_period_elapsed() to notify ALSA core, typically determined by interrupt handling latency. After the change the accuracy of the estimate depended on the latency between the PCM hw completing a period and the application calling snd_pcm_status(), determined by the scheduling of the application process. The maximum error of the estimate is one period length in both cases, but the error average and variance is smaller when it depends on interrupt latency.
Instead of always updating tstamp, update it only if audio_tstamp changed.
Fixes: 3179f6200188 ("ALSA: core: add .get_time_info") Suggested-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Signed-off-by: Henrik Eriksson henrik.eriksson@axis.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/core/pcm_lib.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/sound/core/pcm_lib.c +++ b/sound/core/pcm_lib.c @@ -248,8 +248,10 @@ static void update_audio_tstamp(struct s runtime->rate); *audio_tstamp = ns_to_timespec(audio_nsecs); } - runtime->status->audio_tstamp = *audio_tstamp; - runtime->status->tstamp = *curr_tstamp; + if (!timespec_equal(&runtime->status->audio_tstamp, audio_tstamp)) { + runtime->status->audio_tstamp = *audio_tstamp; + runtime->status->tstamp = *curr_tstamp; + }
/* * re-take a driver timestamp to let apps detect if the reference tstamp
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
commit d937cd6790a2bef2d07b500487646bd794c039bb upstream.
When the usb-audio descriptor contains the malformed feature unit description with a too short length, the driver may access out-of-bounds. Add a sanity check of the header size at the beginning of parse_audio_feature_unit().
Fixes: 23caaf19b11e ("ALSA: usb-mixer: Add support for Audio Class v2.0") Reported-by: Andrey Konovalov andreyknvl@google.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/usb/mixer.c | 12 ++++++++++++ 1 file changed, 12 insertions(+)
--- a/sound/usb/mixer.c +++ b/sound/usb/mixer.c @@ -1469,6 +1469,12 @@ static int parse_audio_feature_unit(stru __u8 *bmaControls;
if (state->mixer->protocol == UAC_VERSION_1) { + if (hdr->bLength < 7) { + usb_audio_err(state->chip, + "unit %u: invalid UAC_FEATURE_UNIT descriptor\n", + unitid); + return -EINVAL; + } csize = hdr->bControlSize; if (!csize) { usb_audio_dbg(state->chip, @@ -1486,6 +1492,12 @@ static int parse_audio_feature_unit(stru } } else { struct uac2_feature_unit_descriptor *ftr = _ftr; + if (hdr->bLength < 6) { + usb_audio_err(state->chip, + "unit %u: invalid UAC_FEATURE_UNIT descriptor\n", + unitid); + return -EINVAL; + } csize = 4; channels = (hdr->bLength - 6) / 4 - 1; bmaControls = ftr->bmaControls;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
commit f658f17b5e0e339935dca23e77e0f3cad591926b upstream.
The usb-audio driver may trigger an out-of-bounds access when parsing a malformed selector unit, as it checks the header length only after evaluating the bNrInPins field, which can already be beyond the given length. Fix it by adding the length check beforehand.
Fixes: 99fc86450c43 ("ALSA: usb-mixer: parse descriptors with structs") Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/usb/mixer.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/sound/usb/mixer.c +++ b/sound/usb/mixer.c @@ -2098,7 +2098,8 @@ static int parse_audio_selector_unit(str const struct usbmix_name_map *map; char **namelist;
- if (!desc->bNrInPins || desc->bLength < 5 + desc->bNrInPins) { + if (desc->bLength < 5 || !desc->bNrInPins || + desc->bLength < 5 + desc->bNrInPins) { usb_audio_err(state->chip, "invalid SELECTOR UNIT descriptor %d\n", unitid); return -EINVAL;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
commit 0a62d6c966956d77397c32836a5bbfe3af786fc1 upstream.
The helper functions that parse and look up the clock source, selector and multiplier units may return a descriptor shorter than required, while there is no sanity check on the caller side. Add some sanity checks in the parsers, at least to guarantee the given descriptor size, to avoid potential crashes.
Fixes: 79f920fbff56 ("ALSA: usb-audio: parse clock topology of UAC2 devices") Reported-by: Andrey Konovalov andreyknvl@google.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/usb/clock.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/sound/usb/clock.c +++ b/sound/usb/clock.c @@ -43,7 +43,7 @@ static struct uac_clock_source_descripto while ((cs = snd_usb_find_csint_desc(ctrl_iface->extra, ctrl_iface->extralen, cs, UAC2_CLOCK_SOURCE))) { - if (cs->bClockID == clock_id) + if (cs->bLength >= sizeof(*cs) && cs->bClockID == clock_id) return cs; }
@@ -59,8 +59,11 @@ static struct uac_clock_selector_descrip while ((cs = snd_usb_find_csint_desc(ctrl_iface->extra, ctrl_iface->extralen, cs, UAC2_CLOCK_SELECTOR))) { - if (cs->bClockID == clock_id) + if (cs->bLength >= sizeof(*cs) && cs->bClockID == clock_id) { + if (cs->bLength < 5 + cs->bNrInPins) + return NULL; return cs; + } }
return NULL; @@ -75,7 +78,7 @@ static struct uac_clock_multiplier_descr while ((cs = snd_usb_find_csint_desc(ctrl_iface->extra, ctrl_iface->extralen, cs, UAC2_CLOCK_MULTIPLIER))) { - if (cs->bClockID == clock_id) + if (cs->bLength >= sizeof(*cs) && cs->bClockID == clock_id) return cs; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
commit 3d4e8303f2c747c8540a0a0126d0151514f6468b upstream.
Some timer compat ioctls have NULL checks of the timer instance with snd_BUG_ON() that trigger WARN_ON() when the debug option is set. The condition can actually be met in normal situations, and it's confusing and bad to spew kernel warnings with a stack trace there. Let's remove the snd_BUG_ON() invocations and replace them with simple checks. Also, correct the error code to EBADFD to follow the native ioctl error handling.
Reported-by: syzbot syzkaller@googlegroups.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/core/timer_compat.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-)
--- a/sound/core/timer_compat.c +++ b/sound/core/timer_compat.c @@ -66,11 +66,11 @@ static int snd_timer_user_info_compat(st struct snd_timer *t;
tu = file->private_data; - if (snd_BUG_ON(!tu->timeri)) - return -ENXIO; + if (!tu->timeri) + return -EBADFD; t = tu->timeri->timer; - if (snd_BUG_ON(!t)) - return -ENXIO; + if (!t) + return -EBADFD; memset(&info, 0, sizeof(info)); info.card = t->card ? t->card->number : -1; if (t->hw.flags & SNDRV_TIMER_HW_SLAVE) @@ -99,8 +99,8 @@ static int snd_timer_user_status_compat( struct snd_timer_status32 status; tu = file->private_data; - if (snd_BUG_ON(!tu->timeri)) - return -ENXIO; + if (!tu->timeri) + return -EBADFD; memset(&status, 0, sizeof(status)); status.tstamp.tv_sec = tu->tstamp.tv_sec; status.tstamp.tv_nsec = tu->tstamp.tv_nsec;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kailang Yang kailang@realtek.com
commit 3aabf94c2d95fe465d5fa8590113d1c1f7d8333d upstream.
Sound works after a cold boot but not after a reboot from Windows. This patch solves that issue, which is related to Class-D power control.
[ The bug was reported in Bugzilla below for Sony VAIO SVS13A1C5E -- tiwai]
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=197737 Signed-off-by: Kailang Yang kailang@realtek.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/patch_realtek.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -341,6 +341,9 @@ static void alc_fill_eapd_coef(struct hd case 0x10ec0299: alc_update_coef_idx(codec, 0x10, 1<<9, 0); break; + case 0x10ec0275: + alc_update_coef_idx(codec, 0xe, 0, 1<<0); + break; case 0x10ec0293: alc_update_coef_idx(codec, 0xa, 1<<13, 0); break;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
commit c2432466f583cb719b35a41e757da587d9ab1d00 upstream.
We got a regression report about the HD-audio HDMI chmap, where some surround channels are reported as UNKNOWN. Git bisection pointed to commit 9b3dc8aa3fb1 ("ALSA: hda - Register chmap obj as priv data instead of codec") as the culprit. The story behind the scenes is like this:
- While moving the code out of the legacy HDA to the HDA common place, the patch modifies the code to obtain the chmap array indirectly in a byte array, and it expands it to the kctl value array.
- At the latter operation, the size of the array is wrongly taken by applying sizeof() to the pointer.
- It can be 4 on a 32-bit arch, thus too short for 6+ channels. (And that's the reason why it didn't hit other people; it's 8 on a 64-bit arch, thus usually enough.)
The code was further changed meanwhile, but the problem persisted. Let's fix it by correctly evaluating the array size.
Fixes: 9b3dc8aa3fb1 ("ALSA: hda - Register chmap obj as priv data instead of codec") Reported-by: VDR User user.vdr@gmail.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/hda/hdmi_chmap.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/sound/hda/hdmi_chmap.c +++ b/sound/hda/hdmi_chmap.c @@ -746,7 +746,7 @@ static int hdmi_chmap_ctl_get(struct snd memset(pcm_chmap, 0, sizeof(pcm_chmap)); chmap->ops.get_chmap(chmap->hdac, pcm_idx, pcm_chmap);
- for (i = 0; i < sizeof(chmap); i++) + for (i = 0; i < ARRAY_SIZE(pcm_chmap); i++) ucontrol->value.integer.value[i] = pcm_chmap[i];
return 0;
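For illustration, a standalone user-space sketch of the bug class: when the variable is a pointer, sizeof() yields the pointer size (4 on 32-bit, 8 on 64-bit), not the number of entries, which is why the loop must use the destination array's real length:

#include <stdio.h>

#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

int main(void)
{
	unsigned char pcm_chmap[8] = { 0 };
	unsigned char *chmap = pcm_chmap;	/* stands in for the priv pointer */

	printf("sizeof(chmap)         = %zu\n", sizeof(chmap));	/* 4 or 8 */
	printf("ARRAY_SIZE(pcm_chmap) = %zu\n", ARRAY_SIZE(pcm_chmap));	/* 8 */
	return 0;
}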
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Takashi Iwai tiwai@suse.de
commit d6c0615f510bc1ee26cfb2b9a3343ac99b9c46fb upstream.
The previous fix for addressing the breakage in vmaster slave initialization, commit a91d66129fb9 ("ALSA: hda - Fix incorrect TLV callback check introduced during set_fs() removal"), introduced a new helper to process over each slave kctl. However, this helper passes only the original kctl, not the virtual slave kctl. As a result, HD-audio driver (which is the only user so far) couldn't initialize the slave correctly because it's trying to update the value directly with the original kctl, not with the mapped kctl.
This patch fixes the situation again by passing both the mapped slave and original slave kctls to the function. Luckily there is a single caller as of now, so changing the call signature is no big matter.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=197959 Fixes: a91d66129fb9 ("ALSA: hda - Fix incorrect TLV callback check introduced during set_fs() removal") Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- include/sound/control.h | 4 +++- sound/core/vmaster.c | 6 ++++-- sound/pci/hda/hda_codec.c | 10 +++++++--- 3 files changed, 14 insertions(+), 6 deletions(-)
--- a/include/sound/control.h +++ b/include/sound/control.h @@ -249,7 +249,9 @@ int snd_ctl_add_vmaster_hook(struct snd_ void snd_ctl_sync_vmaster(struct snd_kcontrol *kctl, bool hook_only); #define snd_ctl_sync_vmaster_hook(kctl) snd_ctl_sync_vmaster(kctl, true) int snd_ctl_apply_vmaster_slaves(struct snd_kcontrol *kctl, - int (*func)(struct snd_kcontrol *, void *), + int (*func)(struct snd_kcontrol *vslave, + struct snd_kcontrol *slave, + void *arg), void *arg);
/* --- a/sound/core/vmaster.c +++ b/sound/core/vmaster.c @@ -495,7 +495,9 @@ EXPORT_SYMBOL_GPL(snd_ctl_sync_vmaster); * Returns 0 if successful, or a negative error code. */ int snd_ctl_apply_vmaster_slaves(struct snd_kcontrol *kctl, - int (*func)(struct snd_kcontrol *, void *), + int (*func)(struct snd_kcontrol *vslave, + struct snd_kcontrol *slave, + void *arg), void *arg) { struct link_master *master; @@ -507,7 +509,7 @@ int snd_ctl_apply_vmaster_slaves(struct if (err < 0) return err; list_for_each_entry(slave, &master->slaves, list) { - err = func(&slave->slave, arg); + err = func(slave->kctl, &slave->slave, arg); if (err < 0) return err; } --- a/sound/pci/hda/hda_codec.c +++ b/sound/pci/hda/hda_codec.c @@ -1823,7 +1823,9 @@ struct slave_init_arg { };
/* initialize the slave volume with 0dB via snd_ctl_apply_vmaster_slaves() */ -static int init_slave_0dB(struct snd_kcontrol *kctl, void *_arg) +static int init_slave_0dB(struct snd_kcontrol *slave, + struct snd_kcontrol *kctl, + void *_arg) { struct slave_init_arg *arg = _arg; int _tlv[4]; @@ -1860,7 +1862,7 @@ static int init_slave_0dB(struct snd_kco arg->step = step; val = -tlv[2] / step; if (val > 0) { - put_kctl_with_value(kctl, val); + put_kctl_with_value(slave, val); return val; }
@@ -1868,7 +1870,9 @@ static int init_slave_0dB(struct snd_kco }
/* unmute the slave via snd_ctl_apply_vmaster_slaves() */ -static int init_slave_unmute(struct snd_kcontrol *slave, void *_arg) +static int init_slave_unmute(struct snd_kcontrol *slave, + struct snd_kcontrol *kctl, + void *_arg) { return put_kctl_with_value(slave, 1); }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Kailang Yang kailang@realtek.com
commit 2d7fe6185722b0817bb345f62ab06b76a7b26542 upstream.
There was apparently a typo in the ALC700 support patch. Fix the bit value in this patch.
Fixes: 6fbae35a3170 ("ALSA: hda/realtek - Add support for new codecs ALC700/ALC701/ALC703") Signed-off-by: Kailang Yang kailang@realtek.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/pci/hda/patch_realtek.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -6866,7 +6866,7 @@ static int patch_alc269(struct hda_codec case 0x10ec0703: spec->codec_variant = ALC269_TYPE_ALC700; spec->gen.mixer_nid = 0; /* ALC700 does not have any loopback mixer path */ - alc_update_coef_idx(codec, 0x4a, 0, 1 << 15); /* Combo jack auto trigger control */ + alc_update_coef_idx(codec, 0x4a, 1 << 15, 0); /* Combo jack auto trigger control */ break;
}
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maxime Ripard maxime.ripard@free-electrons.com
commit 560bfe774f058e97596f30ff71cffdac52b72914 upstream.
The current code had the condition backward when checking if the codec should be running in slave or master mode.
Fix it, and make the comment a bit more readable.
Fixes: 36c684936fae ("ASoC: Add sun8i digital audio codec") Signed-off-by: Maxime Ripard maxime.ripard@free-electrons.com Reviewed-by: Chen-Yu Tsai wens@csie.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/soc/sunxi/sun8i-codec.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/sound/soc/sunxi/sun8i-codec.c +++ b/sound/soc/sunxi/sun8i-codec.c @@ -170,11 +170,11 @@ static int sun8i_set_fmt(struct snd_soc_
/* clock masters */ switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { - case SND_SOC_DAIFMT_CBS_CFS: /* DAI Slave */ - value = 0x0; /* Codec Master */ + case SND_SOC_DAIFMT_CBS_CFS: /* Codec slave, DAI master */ + value = 0x1; break; - case SND_SOC_DAIFMT_CBM_CFM: /* DAI Master */ - value = 0x1; /* Codec Slave */ + case SND_SOC_DAIFMT_CBM_CFM: /* Codec Master, DAI slave */ + value = 0x0; break; default: return -EINVAL;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maxime Ripard maxime.ripard@free-electrons.com
commit 18c1bf35c1c09bca05cf70bc984a4764e0b0372b upstream.
Since its introduction, the codec had an inversion of the left and right channels. It turned out to be pretty simple, as it appears that the codec doesn't have the same polarity on the LRCK signal as the I2S block.
Fix this by inverting our bit value for the LRCK inversion.
Fixes: 36c684936fae ("ASoC: Add sun8i digital audio codec") Signed-off-by: Maxime Ripard maxime.ripard@free-electrons.com Reviewed-by: Chen-Yu Tsai wens@csie.org Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/soc/sunxi/sun8i-codec.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/sound/soc/sunxi/sun8i-codec.c +++ b/sound/soc/sunxi/sun8i-codec.c @@ -199,7 +199,7 @@ static int sun8i_set_fmt(struct snd_soc_ value << SUN8I_AIF1CLK_CTRL_AIF1_BCLK_INV); regmap_update_bits(scodec->regmap, SUN8I_AIF1CLK_CTRL, BIT(SUN8I_AIF1CLK_CTRL_AIF1_LRCK_INV), - value << SUN8I_AIF1CLK_CTRL_AIF1_LRCK_INV); + !value << SUN8I_AIF1CLK_CTRL_AIF1_LRCK_INV);
/* DAI format */ switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Maxime Ripard maxime.ripard@free-electrons.com
commit 316b7758c998fb13371d14bb6c9e45ab129c19a7 upstream.
While the current code was reporting to be able to work in master mode, it failed to do so because the BCLK divider wasn't programmed, meaning that the BCLK would run at the PLL's frequency no matter the sample rate.
It was obviously a bit too fast.
Add support to retrieve the divider to use, and set it. Since our PLL is not always able to generate a perfect multiple of the sample rate, we'll have to choose the closest divider that matches our setup.
Fixes: 36c684936fae ("ASoC: Add sun8i digital audio codec") Reviewed-by: Chen-Yu Tsai wens@csie.org Signed-off-by: Maxime Ripard maxime.ripard@free-electrons.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- sound/soc/sunxi/sun8i-codec.c | 51 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 51 insertions(+)
--- a/sound/soc/sunxi/sun8i-codec.c +++ b/sound/soc/sunxi/sun8i-codec.c @@ -73,6 +73,7 @@ #define SUN8I_SYS_SR_CTRL_AIF2_FS_MASK GENMASK(11, 8) #define SUN8I_AIF1CLK_CTRL_AIF1_WORD_SIZ_MASK GENMASK(5, 4) #define SUN8I_AIF1CLK_CTRL_AIF1_LRCK_DIV_MASK GENMASK(8, 6) +#define SUN8I_AIF1CLK_CTRL_AIF1_BCLK_DIV_MASK GENMASK(12, 9)
struct sun8i_codec { struct device *dev; @@ -226,12 +227,57 @@ static int sun8i_set_fmt(struct snd_soc_ return 0; }
+struct sun8i_codec_clk_div { + u8 div; + u8 val; +}; + +static const struct sun8i_codec_clk_div sun8i_codec_bclk_div[] = { + { .div = 1, .val = 0 }, + { .div = 2, .val = 1 }, + { .div = 4, .val = 2 }, + { .div = 6, .val = 3 }, + { .div = 8, .val = 4 }, + { .div = 12, .val = 5 }, + { .div = 16, .val = 6 }, + { .div = 24, .val = 7 }, + { .div = 32, .val = 8 }, + { .div = 48, .val = 9 }, + { .div = 64, .val = 10 }, + { .div = 96, .val = 11 }, + { .div = 128, .val = 12 }, + { .div = 192, .val = 13 }, +}; + +static u8 sun8i_codec_get_bclk_div(struct sun8i_codec *scodec, + unsigned int rate, + unsigned int word_size) +{ + unsigned long clk_rate = clk_get_rate(scodec->clk_module); + unsigned int div = clk_rate / rate / word_size / 2; + unsigned int best_val = 0, best_diff = ~0; + int i; + + for (i = 0; i < ARRAY_SIZE(sun8i_codec_bclk_div); i++) { + const struct sun8i_codec_clk_div *bdiv = &sun8i_codec_bclk_div[i]; + unsigned int diff = abs(bdiv->div - div); + + if (diff < best_diff) { + best_diff = diff; + best_val = bdiv->val; + } + } + + return best_val; +} + static int sun8i_codec_hw_params(struct snd_pcm_substream *substream, struct snd_pcm_hw_params *params, struct snd_soc_dai *dai) { struct sun8i_codec *scodec = snd_soc_codec_get_drvdata(dai->codec); int sample_rate; + u8 bclk_div;
/* * The CPU DAI handles only a sample of 16 bits. Configure the @@ -241,6 +287,11 @@ static int sun8i_codec_hw_params(struct SUN8I_AIF1CLK_CTRL_AIF1_WORD_SIZ_MASK, SUN8I_AIF1CLK_CTRL_AIF1_WORD_SIZ_16);
+ bclk_div = sun8i_codec_get_bclk_div(scodec, params_rate(params), 16); + regmap_update_bits(scodec->regmap, SUN8I_AIF1CLK_CTRL, + SUN8I_AIF1CLK_CTRL_AIF1_BCLK_DIV_MASK, + bclk_div << SUN8I_AIF1CLK_CTRL_AIF1_BCLK_DIV); + regmap_update_bits(scodec->regmap, SUN8I_AIF1CLK_CTRL, SUN8I_AIF1CLK_CTRL_AIF1_LRCK_DIV_MASK, SUN8I_AIF1CLK_CTRL_AIF1_LRCK_DIV_16);
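As a rough standalone sketch of the divider selection above (the 24.576 MHz module clock and 48 kHz rate are assumed example numbers, not taken from the patch):

#include <stdio.h>

/* Divider table re-typed from the patch for a self-contained illustration. */
static const struct { unsigned int div, val; } bclk_div[] = {
        { 1, 0 },   { 2, 1 },   { 4, 2 },   { 6, 3 },   { 8, 4 },   { 12, 5 },
        { 16, 6 },  { 24, 7 },  { 32, 8 },  { 48, 9 },  { 64, 10 }, { 96, 11 },
        { 128, 12 }, { 192, 13 },
};

int main(void)
{
        unsigned long clk_rate = 24576000;      /* assumed module clock (24.576 MHz) */
        unsigned int rate = 48000;              /* sample rate */
        unsigned int word_size = 16;            /* the CPU DAI always uses 16 bits */
        /* BCLK must carry word_size bits for each of the 2 channels per frame */
        unsigned int ideal = clk_rate / rate / word_size / 2;  /* 512 / 16 / 2 = 16 */
        unsigned int best_val = 0, best_diff = ~0u, i;

        for (i = 0; i < sizeof(bclk_div) / sizeof(bclk_div[0]); i++) {
                unsigned int diff = bclk_div[i].div > ideal ?
                                    bclk_div[i].div - ideal : ideal - bclk_div[i].div;
                if (diff < best_diff) {
                        best_diff = diff;
                        best_val = bclk_div[i].val;
                }
        }
        printf("ideal divider %u -> register value %u\n", ideal, best_val);
        return 0;
}

With these numbers BCLK has to clock 16 bits x 2 channels = 32 bits per frame, i.e. 48000 * 32 = 1.536 MHz, which is the module clock divided by 16 and maps to register value 6 in the table.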
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Joakim Tjernlund joakim.tjernlund@infinera.com
commit 07d70913dce59f3c8e5d0ca76250861158a9ca6c upstream.
Avoton/Rangeley are based on the Silvermont micro-architecture, like Bay Trail, and use the INTEL_SPI_BYT method to drive SPI.
Signed-off-by: Joakim Tjernlund joakim.tjernlund@infinera.com Acked-by: Mika Westerberg mika.westerberg@linux.intel.com Signed-off-by: Lee Jones lee.jones@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mfd/lpc_ich.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/mfd/lpc_ich.c +++ b/drivers/mfd/lpc_ich.c @@ -522,6 +522,7 @@ static struct lpc_ich_info lpc_chipset_i .name = "Avoton SoC", .iTCO_version = 3, .gpio_version = AVOTON_GPIO, + .spi_type = INTEL_SPI_BYT, }, [LPC_BAYTRAIL] = { .name = "Bay Trail SoC",
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Al Viro viro@zeniv.linux.org.uk
commit 11d49e9d089ccec81be87c2386dfdd010d7f7f6e upstream.
We are advancing sg as we go, so the pages we need to drop in case of an error are *before* the current sg.
Signed-off-by: Al Viro viro@zeniv.linux.org.uk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/vhost/scsi.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -688,6 +688,7 @@ vhost_scsi_iov_to_sgl(struct vhost_scsi_ struct scatterlist *sg, int sg_count) { size_t off = iter->iov_offset; + struct scatterlist *p = sg; int i, ret;
for (i = 0; i < iter->nr_segs; i++) { @@ -696,8 +697,8 @@ vhost_scsi_iov_to_sgl(struct vhost_scsi_
ret = vhost_scsi_map_to_sgl(cmd, base, len, sg, write); if (ret < 0) { - for (i = 0; i < sg_count; i++) { - struct page *page = sg_page(&sg[i]); + while (p < sg) { + struct page *page = sg_page(p++); if (page) put_page(page); }
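The unwind pattern the patch switches to can be shown with a small standalone C sketch (toy types, not the vhost code): remember where the mapping started and, on failure, release only the entries *before* the advancing cursor.

#include <stdio.h>

struct entry { int mapped; };   /* toy stand-in for a scatterlist entry */

static int map_one(struct entry *e, int i)
{
        if (i == 3)             /* pretend the 4th mapping fails */
                return -1;
        e->mapped = 1;
        return 0;
}

int main(void)
{
        struct entry table[5] = { { 0 } };
        struct entry *sg = table;       /* cursor, advances as entries are mapped */
        struct entry *p = sg;           /* remembers where we started */
        int i;

        for (i = 0; i < 5; i++) {
                if (map_one(sg, i) < 0) {
                        /* error: drop only the entries before the cursor */
                        while (p < sg) {
                                p->mapped = 0;  /* the put_page() equivalent */
                                p++;
                        }
                        printf("unwound %d entries\n", i);
                        return 1;
                }
                sg++;
        }
        return 0;
}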
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tuomas Tynkkynen tuomas@tuxera.com
commit 61b272c3aa170b3e461b8df636407b29f35f98eb upstream.
Since commit c4fac9100456 ("9p: Implement show_options"), the mount options of 9p filesystems are printed out with some missing commas between the individual options:
p9-scratch on /mnt/scratch type 9p (rw,dirsync,loose,access=clienttrans=virtio)
Add them back.
Fixes: c4fac9100456 ("9p: Implement show_options") Signed-off-by: Tuomas Tynkkynen tuomas@tuxera.com Signed-off-by: Al Viro viro@zeniv.linux.org.uk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- net/9p/client.c | 2 +- net/9p/trans_fd.c | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-)
--- a/net/9p/client.c +++ b/net/9p/client.c @@ -82,7 +82,7 @@ int p9_show_client_options(struct seq_fi { if (clnt->msize != 8192) seq_printf(m, ",msize=%u", clnt->msize); - seq_printf(m, "trans=%s", clnt->trans_mod->name); + seq_printf(m, ",trans=%s", clnt->trans_mod->name);
switch (clnt->proto_version) { case p9_proto_legacy: --- a/net/9p/trans_fd.c +++ b/net/9p/trans_fd.c @@ -724,12 +724,12 @@ static int p9_fd_show_options(struct seq { if (clnt->trans_mod == &p9_tcp_trans) { if (clnt->trans_opts.tcp.port != P9_PORT) - seq_printf(m, "port=%u", clnt->trans_opts.tcp.port); + seq_printf(m, ",port=%u", clnt->trans_opts.tcp.port); } else if (clnt->trans_mod == &p9_fd_trans) { if (clnt->trans_opts.fd.rfd != ~0) - seq_printf(m, "rfd=%u", clnt->trans_opts.fd.rfd); + seq_printf(m, ",rfd=%u", clnt->trans_opts.fd.rfd); if (clnt->trans_opts.fd.wfd != ~0) - seq_printf(m, "wfd=%u", clnt->trans_opts.fd.wfd); + seq_printf(m, ",wfd=%u", clnt->trans_opts.fd.wfd); } return 0; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tuomas Tynkkynen tuomas@tuxera.com
commit 8ee031631546cf2f7859cc69593bd60bbdd70b46 upstream.
Commit fd2421f54423 ("fs/9p: When doing inode lookup compare qid details and inode mode bits.") transformed v9fs_qid_iget() to use iget5_locked() instead of iget_locked(). However, the test() callback is not checking fid.path at all, which means that a lookup in the inode cache can now accidentally locate a completely wrong inode from the same inode hash bucket if the other fields (qid.type and qid.version) match.
Fixes: fd2421f54423 ("fs/9p: When doing inode lookup compare qid details and inode mode bits.") Reviewed-by: Latchesar Ionkov lucho@ionkov.net Signed-off-by: Tuomas Tynkkynen tuomas@tuxera.com Signed-off-by: Al Viro viro@zeniv.linux.org.uk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/9p/vfs_inode.c | 3 +++ fs/9p/vfs_inode_dotl.c | 3 +++ 2 files changed, 6 insertions(+)
--- a/fs/9p/vfs_inode.c +++ b/fs/9p/vfs_inode.c @@ -483,6 +483,9 @@ static int v9fs_test_inode(struct inode
if (v9inode->qid.type != st->qid.type) return 0; + + if (v9inode->qid.path != st->qid.path) + return 0; return 1; }
--- a/fs/9p/vfs_inode_dotl.c +++ b/fs/9p/vfs_inode_dotl.c @@ -87,6 +87,9 @@ static int v9fs_test_inode_dotl(struct i
if (v9inode->qid.type != st->qid.type) return 0; + + if (v9inode->qid.path != st->qid.path) + return 0; return 1; }
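For context, an iget5_locked() test() callback is expected to compare every field that identifies the object, since the hash value alone is not unique. A hedged sketch with made-up types modeled on the 9p qid (the matching set() callback is omitted and only named in the comment):

#include <linux/fs.h>
#include <linux/kernel.h>

struct demo_qid {
        unsigned char type;
        unsigned int version;
        unsigned long long path;
};

struct demo_inode_info {
        struct demo_qid qid;
        struct inode vfs_inode;
};

/* Return non-zero only when *all* identifying fields match. */
static int demo_test_inode(struct inode *inode, void *data)
{
        struct demo_inode_info *info =
                container_of(inode, struct demo_inode_info, vfs_inode);
        struct demo_qid *qid = data;

        return info->qid.type == qid->type &&
               info->qid.version == qid->version &&
               info->qid.path == qid->path;     /* the comparison the fix adds */
}

/* Lookup side, roughly:
 *      inode = iget5_locked(sb, (unsigned long)qid->path,
 *                           demo_test_inode, demo_set_inode, qid);
 */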
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Tuomas Tynkkynen tuomas@tuxera.com
commit 9523feac272ccad2ad8186ba4fcc89103754de52 upstream.
Because userspace gets Very Unhappy when calls like stat() and execve() return -EINTR on 9p filesystem mounts. For instance, when bash is looking in PATH for things to execute and some SIGCHLD interrupts stat(), bash can throw a spurious 'command not found' since it doesn't retry the stat().
In practice, hitting the problem is rare and needs a really slow/bogged down 9p server.
Signed-off-by: Tuomas Tynkkynen tuomas@tuxera.com Signed-off-by: Al Viro viro@zeniv.linux.org.uk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- net/9p/client.c | 3 +-- net/9p/trans_virtio.c | 13 ++++++------- net/9p/trans_xen.c | 4 ++-- 3 files changed, 9 insertions(+), 11 deletions(-)
--- a/net/9p/client.c +++ b/net/9p/client.c @@ -773,8 +773,7 @@ p9_client_rpc(struct p9_client *c, int8_ } again: /* Wait for the response */ - err = wait_event_interruptible(*req->wq, - req->status >= REQ_STATUS_RCVD); + err = wait_event_killable(*req->wq, req->status >= REQ_STATUS_RCVD);
/* * Make sure our req is coherent with regard to updates in other --- a/net/9p/trans_virtio.c +++ b/net/9p/trans_virtio.c @@ -286,8 +286,8 @@ req_retry: if (err == -ENOSPC) { chan->ring_bufs_avail = 0; spin_unlock_irqrestore(&chan->lock, flags); - err = wait_event_interruptible(*chan->vc_wq, - chan->ring_bufs_avail); + err = wait_event_killable(*chan->vc_wq, + chan->ring_bufs_avail); if (err == -ERESTARTSYS) return err;
@@ -327,7 +327,7 @@ static int p9_get_mapped_pages(struct vi * Other zc request to finish here */ if (atomic_read(&vp_pinned) >= chan->p9_max_pages) { - err = wait_event_interruptible(vp_wq, + err = wait_event_killable(vp_wq, (atomic_read(&vp_pinned) < chan->p9_max_pages)); if (err == -ERESTARTSYS) return err; @@ -471,8 +471,8 @@ req_retry_pinned: if (err == -ENOSPC) { chan->ring_bufs_avail = 0; spin_unlock_irqrestore(&chan->lock, flags); - err = wait_event_interruptible(*chan->vc_wq, - chan->ring_bufs_avail); + err = wait_event_killable(*chan->vc_wq, + chan->ring_bufs_avail); if (err == -ERESTARTSYS) goto err_out;
@@ -489,8 +489,7 @@ req_retry_pinned: virtqueue_kick(chan->vq); spin_unlock_irqrestore(&chan->lock, flags); p9_debug(P9_DEBUG_TRANS, "virtio request kicked\n"); - err = wait_event_interruptible(*req->wq, - req->status >= REQ_STATUS_RCVD); + err = wait_event_killable(*req->wq, req->status >= REQ_STATUS_RCVD); /* * Non kernel buffers are pinned, unpin them */ --- a/net/9p/trans_xen.c +++ b/net/9p/trans_xen.c @@ -156,8 +156,8 @@ static int p9_xen_request(struct p9_clie ring = &priv->rings[num];
again: - while (wait_event_interruptible(ring->wq, - p9_xen_write_todo(ring, size)) != 0) + while (wait_event_killable(ring->wq, + p9_xen_write_todo(ring, size)) != 0) ;
spin_lock_irqsave(&ring->lock, flags);
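The behavioural difference being relied on, as a short hedged sketch (not the 9p code): wait_event_interruptible() returns -ERESTARTSYS for any pending signal, which can surface to userspace as -EINTR, while wait_event_killable() only wakes early for fatal signals such as SIGKILL.

#include <linux/wait.h>

/* Sketch with made-up names: wait until a reply status crosses a threshold. */
static int demo_wait_for_reply(wait_queue_head_t *wq, int *status, int threshold)
{
        /* before: any signal (e.g. SIGCHLD) aborts the wait */
        /* return wait_event_interruptible(*wq, *status >= threshold); */

        /* after: only a fatal signal interrupts the wait */
        return wait_event_killable(*wq, *status >= threshold);
}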
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche bart.vanassche@wdc.com
commit 8653188763b56e0bcbdcab30cc7b059672c900ac upstream.
Avoid that the following is reported while loading the qla2xxx kernel module:
BUG: using smp_processor_id() in preemptible [00000000] code: modprobe/783
caller is debug_smp_processor_id+0x17/0x20
CPU: 7 PID: 783 Comm: modprobe Not tainted 4.14.0-rc8-dbg+ #2
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
Call Trace:
 dump_stack+0x8e/0xce
 check_preemption_disabled+0xe3/0xf0
 debug_smp_processor_id+0x17/0x20
 qla2x00_probe_one+0xf43/0x26c0 [qla2xxx]
 pci_device_probe+0xca/0x140
 driver_probe_device+0x2e2/0x440
 __driver_attach+0xa3/0xe0
 bus_for_each_dev+0x5f/0x90
 driver_attach+0x19/0x20
 bus_add_driver+0x1c0/0x260
 driver_register+0x5b/0xd0
 __pci_register_driver+0x63/0x70
 qla2x00_module_init+0x1d6/0x222 [qla2xxx]
 do_one_initcall+0x3c/0x163
 do_init_module+0x55/0x1eb
 load_module+0x20a2/0x2890
 SYSC_finit_module+0xd7/0xf0
 SyS_finit_module+0x9/0x10
 entry_SYSCALL_64_fastpath+0x23/0xc2
Fixes: commit 8abfa9e22683 ("scsi: qla2xxx: Add function call to qpair for door bell") Signed-off-by: Bart Van Assche bart.vanassche@wdc.com Cc: Quinn Tran quinn.tran@cavium.com Cc: Himanshu Madhani himanshu.madhani@cavium.com Acked-by: Himanshu Madhani himanshu.madhani@cavium.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/qla2xxx/qla_os.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c @@ -388,7 +388,7 @@ static void qla_init_base_qpair(struct s INIT_LIST_HEAD(&ha->base_qpair->nvme_done_list); ha->base_qpair->enable_class_2 = ql2xenableclass2; /* init qpair to this cpu. Will adjust at run time. */ - qla_cpu_update(rsp->qpair, smp_processor_id()); + qla_cpu_update(rsp->qpair, raw_smp_processor_id()); ha->base_qpair->pdev = ha->pdev;
if (IS_QLA27XX(ha) || IS_QLA83XX(ha))
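For reference, a hedged sketch (not the qla2xxx code) of the two conventional ways to handle the CONFIG_DEBUG_PREEMPT warning, depending on whether the CPU number is only a hint or must stay valid while it is used:

#include <linux/smp.h>

/* The result is only advisory (as in the probe path fixed here). */
static int demo_pick_cpu_hint(void)
{
        return raw_smp_processor_id();  /* no preemption check, value may go stale */
}

/* The result must remain valid while it is used. */
static void demo_use_cpu_briefly(void)
{
        int cpu = get_cpu();            /* disables preemption */

        /* ... do per-CPU work with 'cpu' ... */
        (void)cpu;

        put_cpu();                      /* re-enables preemption */
}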
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Damien Le Moal damien.lemoal@wdc.com
commit 4a109032e3941413d8a029f619543fc5aec1d26d upstream.
The three values starting at byte 8 of the Zoned Block Device Characteristics VPD page B6h are 32-bit values, not 64-bit. So use get_unaligned_be32() to retrieve the values, not get_unaligned_be64().
Fixes: 89d947561077 ("sd: Implement support for ZBC devices") Signed-off-by: Damien Le Moal damien.lemoal@wdc.com Reviewed-by: Bart Van Assche Bart.VanAssche@wdc.com Reviewed-by: Johannes Thumshirn jthumshirn@suse.de Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/sd_zbc.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/scsi/sd_zbc.c +++ b/drivers/scsi/sd_zbc.c @@ -375,15 +375,15 @@ static int sd_zbc_read_zoned_characteris if (sdkp->device->type != TYPE_ZBC) { /* Host-aware */ sdkp->urswrz = 1; - sdkp->zones_optimal_open = get_unaligned_be64(&buf[8]); - sdkp->zones_optimal_nonseq = get_unaligned_be64(&buf[12]); + sdkp->zones_optimal_open = get_unaligned_be32(&buf[8]); + sdkp->zones_optimal_nonseq = get_unaligned_be32(&buf[12]); sdkp->zones_max_open = 0; } else { /* Host-managed */ sdkp->urswrz = buf[4] & 1; sdkp->zones_optimal_open = 0; sdkp->zones_optimal_nonseq = 0; - sdkp->zones_max_open = get_unaligned_be64(&buf[16]); + sdkp->zones_max_open = get_unaligned_be32(&buf[16]); }
return 0;
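A short sketch of the parsing (it mirrors the diff above and simplifies away the host-aware/host-managed split): three consecutive 4-byte big-endian fields start at byte 8, so get_unaligned_be64() would read 8 bytes and fold the following field into the result.

#include <linux/kernel.h>
#include <asm/unaligned.h>

static void demo_parse_zbc_vpd(const unsigned char *buf,
                               u32 *opt_open, u32 *opt_nonseq, u32 *max_open)
{
        *opt_open   = get_unaligned_be32(&buf[8]);      /* bytes 8..11  */
        *opt_nonseq = get_unaligned_be32(&buf[12]);     /* bytes 12..15 */
        *max_open   = get_unaligned_be32(&buf[16]);     /* bytes 16..19 */
}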
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dick Kennedy dick.kennedy@broadcom.com
commit 1901762f2ca2747ed269239ca5332a8023ce4e3d upstream.
During pci hot plug, the kernel crashes in timer management code.
The sli4 remove_one handler is not stopping the timers as it starts to remove the port so that it can be swapped.
Fix: Stop the timers early in the handler routine.
Note: Fix in SLI-4 only. SLI-3 already stopped the timers properly.
Signed-off-by: Dick Kennedy dick.kennedy@broadcom.com Signed-off-by: James Smart james.smart@broadcom.com Reviewed-by: Johannes Thumshirn jthumshirn@suse.de Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/lpfc/lpfc_init.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/scsi/lpfc/lpfc_init.c +++ b/drivers/scsi/lpfc/lpfc_init.c @@ -11420,6 +11420,7 @@ lpfc_pci_remove_one_s4(struct pci_dev *p lpfc_debugfs_terminate(vport); lpfc_sli4_hba_unset(phba);
+ lpfc_stop_hba_timers(phba); spin_lock_irq(&phba->hbalock); list_del_init(&vport->listentry); spin_unlock_irq(&phba->hbalock);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dick Kennedy dick.kennedy@broadcom.com
commit 401bb4169da655f3e5d28d0b208182e1ab60bf2a upstream.
During pci hot plug, the kernel crashes in a list_add call.
The lookup-by-tag function will return NULL if the IOCB is out of range or does not have the on-txcmplq flag set.
Fix: Check for a NULL return from the lookup-by-tag function.
Signed-off-by: Dick Kennedy dick.kennedy@broadcom.com Signed-off-by: James Smart james.smart@broadcom.com Reviewed-by: Johannes Thumshirn jthumshirn@suse.de Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/lpfc/lpfc_sli.c | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-)
--- a/drivers/scsi/lpfc/lpfc_sli.c +++ b/drivers/scsi/lpfc/lpfc_sli.c @@ -12507,19 +12507,21 @@ lpfc_sli4_els_wcqe_to_rspiocbq(struct lp /* Look up the ELS command IOCB and create pseudo response IOCB */ cmdiocbq = lpfc_sli_iocbq_lookup_by_tag(phba, pring, bf_get(lpfc_wcqe_c_request_tag, wcqe)); - /* Put the iocb back on the txcmplq */ - lpfc_sli_ringtxcmpl_put(phba, pring, cmdiocbq); - spin_unlock_irqrestore(&pring->ring_lock, iflags); - if (unlikely(!cmdiocbq)) { + spin_unlock_irqrestore(&pring->ring_lock, iflags); lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, "0386 ELS complete with no corresponding " - "cmdiocb: iotag (%d)\n", - bf_get(lpfc_wcqe_c_request_tag, wcqe)); + "cmdiocb: 0x%x 0x%x 0x%x 0x%x\n", + wcqe->word0, wcqe->total_data_placed, + wcqe->parameter, wcqe->word3); lpfc_sli_release_iocbq(phba, irspiocbq); return NULL; }
+ /* Put the iocb back on the txcmplq */ + lpfc_sli_ringtxcmpl_put(phba, pring, cmdiocbq); + spin_unlock_irqrestore(&pring->ring_lock, iflags); + /* Fake the irspiocbq and copy necessary response information */ lpfc_sli4_iocb_param_transfer(phba, irspiocbq, cmdiocbq, wcqe);
@@ -17137,7 +17139,8 @@ exit: if (pcmd && pcmd->virt) dma_pool_free(phba->lpfc_drb_pool, pcmd->virt, pcmd->phys); kfree(pcmd); - lpfc_sli_release_iocbq(phba, iocbq); + if (iocbq) + lpfc_sli_release_iocbq(phba, iocbq); lpfc_in_buf_free(phba, &dmabuf->dbuf); }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dick Kennedy dick.kennedy@broadcom.com
commit 1234a6d54fed8a00091968c4eb2fb52e1cbb8e2e upstream.
The driver crashes when attempting to use a freed ndlp pointer.
The pci_remove_one handler runs on a separate kernel thread. The removal starts by freeing all of the ndlps and then disabling interrupts. In between these two events the driver can still receive and process an ELS. When it then tries to use the ndlp, the pointer will be NULL.
Change the order of the pci_remove_one teardown vs disabling interrupts so that interrupts are disabled before the ndlps are freed.
Signed-off-by: Dick Kennedy dick.kennedy@broadcom.com Signed-off-by: James Smart james.smart@broadcom.com Reviewed-by: Johannes Thumshirn jthumshirn@suse.de Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/lpfc/lpfc_attr.c | 6 ++++-- drivers/scsi/lpfc/lpfc_bsg.c | 4 +++- drivers/scsi/lpfc/lpfc_els.c | 7 ++++++- drivers/scsi/lpfc/lpfc_hbadisc.c | 5 ++++- drivers/scsi/lpfc/lpfc_init.c | 14 +++++++------- drivers/scsi/lpfc/lpfc_nportdisc.c | 2 +- drivers/scsi/lpfc/lpfc_sli.c | 12 ++++++++++++ 7 files changed, 37 insertions(+), 13 deletions(-)
--- a/drivers/scsi/lpfc/lpfc_attr.c +++ b/drivers/scsi/lpfc/lpfc_attr.c @@ -3134,7 +3134,8 @@ lpfc_txq_hw_show(struct device *dev, str struct lpfc_hba *phba = ((struct lpfc_vport *) shost->hostdata)->phba; struct lpfc_sli_ring *pring = lpfc_phba_elsring(phba);
- return snprintf(buf, PAGE_SIZE, "%d\n", pring->txq_max); + return snprintf(buf, PAGE_SIZE, "%d\n", + pring ? pring->txq_max : 0); }
static DEVICE_ATTR(txq_hw, S_IRUGO, @@ -3147,7 +3148,8 @@ lpfc_txcmplq_hw_show(struct device *dev, struct lpfc_hba *phba = ((struct lpfc_vport *) shost->hostdata)->phba; struct lpfc_sli_ring *pring = lpfc_phba_elsring(phba);
- return snprintf(buf, PAGE_SIZE, "%d\n", pring->txcmplq_max); + return snprintf(buf, PAGE_SIZE, "%d\n", + pring ? pring->txcmplq_max : 0); }
static DEVICE_ATTR(txcmplq_hw, S_IRUGO, --- a/drivers/scsi/lpfc/lpfc_bsg.c +++ b/drivers/scsi/lpfc/lpfc_bsg.c @@ -2911,7 +2911,7 @@ static int lpfcdiag_loop_post_rxbufs(str } }
- if (!cmdiocbq || !rxbmp || !rxbpl || !rxbuffer) { + if (!cmdiocbq || !rxbmp || !rxbpl || !rxbuffer || !pring) { ret_val = -ENOMEM; goto err_post_rxbufs_exit; } @@ -5421,6 +5421,8 @@ lpfc_bsg_timeout(struct bsg_job *job) struct lpfc_iocbq *check_iocb, *next_iocb;
pring = lpfc_phba_elsring(phba); + if (unlikely(!pring)) + return -EIO;
/* if job's driver data is NULL, the command completed or is in the * the process of completing. In this case, return status to request --- a/drivers/scsi/lpfc/lpfc_els.c +++ b/drivers/scsi/lpfc/lpfc_els.c @@ -7430,6 +7430,8 @@ lpfc_els_timeout_handler(struct lpfc_vpo timeout = (uint32_t)(phba->fc_ratov << 1);
pring = lpfc_phba_elsring(phba); + if (unlikely(!pring)) + return;
if ((phba->pport->load_flag & FC_UNLOADING)) return; @@ -9310,6 +9312,9 @@ void lpfc_fabric_abort_nport(struct lpfc
pring = lpfc_phba_elsring(phba);
+ if (unlikely(!pring)) + return; + spin_lock_irq(&phba->hbalock); list_for_each_entry_safe(piocb, tmp_iocb, &phba->fabric_iocb_list, list) { @@ -9416,7 +9421,7 @@ lpfc_sli4_els_xri_aborted(struct lpfc_hb rxid, 1);
/* Check if TXQ queue needs to be serviced */ - if (!(list_empty(&pring->txq))) + if (pring && !list_empty(&pring->txq)) lpfc_worker_wake_up(phba); return; } --- a/drivers/scsi/lpfc/lpfc_hbadisc.c +++ b/drivers/scsi/lpfc/lpfc_hbadisc.c @@ -3324,7 +3324,8 @@ lpfc_mbx_cmpl_read_topology(struct lpfc_
/* Unblock ELS traffic */ pring = lpfc_phba_elsring(phba); - pring->flag &= ~LPFC_STOP_IOCB_EVENT; + if (pring) + pring->flag &= ~LPFC_STOP_IOCB_EVENT;
/* Check for error */ if (mb->mbxStatus) { @@ -5430,6 +5431,8 @@ lpfc_free_tx(struct lpfc_hba *phba, stru
psli = &phba->sli; pring = lpfc_phba_elsring(phba); + if (unlikely(!pring)) + return;
/* Error matching iocb on txq or txcmplq * First check the txq. --- a/drivers/scsi/lpfc/lpfc_init.c +++ b/drivers/scsi/lpfc/lpfc_init.c @@ -11404,6 +11404,13 @@ lpfc_pci_remove_one_s4(struct pci_dev *p /* Remove FC host and then SCSI host with the physical port */ fc_remove_host(shost); scsi_remove_host(shost); + /* + * Bring down the SLI Layer. This step disables all interrupts, + * clears the rings, discards all mailbox commands, and resets + * the HBA FCoE function. + */ + lpfc_debugfs_terminate(vport); + lpfc_sli4_hba_unset(phba);
/* Perform ndlp cleanup on the physical port. The nvme and nvmet * localports are destroyed after to cleanup all transport memory. @@ -11412,13 +11419,6 @@ lpfc_pci_remove_one_s4(struct pci_dev *p lpfc_nvmet_destroy_targetport(phba); lpfc_nvme_destroy_localport(vport);
- /* - * Bring down the SLI Layer. This step disables all interrupts, - * clears the rings, discards all mailbox commands, and resets - * the HBA FCoE function. - */ - lpfc_debugfs_terminate(vport); - lpfc_sli4_hba_unset(phba);
lpfc_stop_hba_timers(phba); spin_lock_irq(&phba->hbalock); --- a/drivers/scsi/lpfc/lpfc_nportdisc.c +++ b/drivers/scsi/lpfc/lpfc_nportdisc.c @@ -216,7 +216,7 @@ lpfc_els_abort(struct lpfc_hba *phba, st pring = lpfc_phba_elsring(phba);
/* In case of error recovery path, we might have a NULL pring here */ - if (!pring) + if (unlikely(!pring)) return;
/* Abort outstanding I/O on NPort <nlp_DID> */ --- a/drivers/scsi/lpfc/lpfc_sli.c +++ b/drivers/scsi/lpfc/lpfc_sli.c @@ -10632,6 +10632,14 @@ lpfc_sli_issue_abort_iotag(struct lpfc_h (cmdiocb->iocb_flag & LPFC_DRIVER_ABORTED) != 0) return 0;
+ if (!pring) { + if (cmdiocb->iocb_flag & LPFC_IO_FABRIC) + cmdiocb->fabric_iocb_cmpl = lpfc_ignore_els_cmpl; + else + cmdiocb->iocb_cmpl = lpfc_ignore_els_cmpl; + goto abort_iotag_exit; + } + /* * If we're unloading, don't abort iocb on the ELS ring, but change * the callback so that nothing happens when it finishes. @@ -12500,6 +12508,8 @@ lpfc_sli4_els_wcqe_to_rspiocbq(struct lp unsigned long iflags;
pring = lpfc_phba_elsring(phba); + if (unlikely(!pring)) + return NULL;
wcqe = &irspiocbq->cq_event.cqe.wcqe_cmpl; spin_lock_irqsave(&pring->ring_lock, iflags); @@ -18694,6 +18704,8 @@ lpfc_drain_txq(struct lpfc_hba *phba) uint32_t txq_cnt = 0;
pring = lpfc_phba_elsring(phba); + if (unlikely(!pring)) + return 0;
spin_lock_irqsave(&pring->ring_lock, iflags); list_for_each_entry(piocbq, &pring->txq, list) {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dick Kennedy dick.kennedy@broadcom.com
commit 8e036a9497c5d565baafda4c648f2f372999a547 upstream.
The driver is encountering an oops in lpfc_sli_calc_ring.
The driver is setting hba_wqidx for FCP based on the policy in use for NVME. The two may not be the same. Change to set the wqidx based on the FCP policy.
Signed-off-by: Dick Kennedy dick.kennedy@broadcom.com Signed-off-by: James Smart james.smart@broadcom.com Reviewed-by: Johannes Thumshirn jthumshirn@suse.de Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c index 2893d4fb9654..8c37885f4851 100644 --- a/drivers/scsi/lpfc/lpfc_sli.c +++ b/drivers/scsi/lpfc/lpfc_sli.c @@ -9396,10 +9396,13 @@ lpfc_sli4_calc_ring(struct lpfc_hba *phba, struct lpfc_iocbq *piocb) * for abort iocb hba_wqidx should already * be setup based on what work queue we used. */ - if (!(piocb->iocb_flag & LPFC_USE_FCPWQIDX)) + if (!(piocb->iocb_flag & LPFC_USE_FCPWQIDX)) { piocb->hba_wqidx = lpfc_sli4_scmd_to_wqidx_distr(phba, piocb->context1); + piocb->hba_wqidx = piocb->hba_wqidx % + phba->cfg_fcp_io_channel; + } return phba->sli4_hba.fcp_wq[piocb->hba_wqidx]->pring; } else { if (unlikely(!phba->sli4_hba.oas_wq))
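As a small worked illustration of the added modulo (the counts are assumed, not from the patch): if the NVMe-oriented distribution picks index 10 but only 4 FCP work queues (cfg_fcp_io_channel) are configured, 10 % 4 = 2 keeps hba_wqidx inside the fcp_wq[] array instead of indexing past its end.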
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dick Kennedy dick.kennedy@broadcom.com
commit e7981a2c725f8e237f749fa1358997707d57e32c upstream.
If nvmet targetport registration fails, the driver encounters a NULL pointer oops in lpfc_hb_timeout_handler.
To fix: if registration fails, ensure nvmet_support is cleared on the port structure.
Also enhanced the log message on failure.
Signed-off-by: Dick Kennedy dick.kennedy@broadcom.com Signed-off-by: James Smart james.smart@broadcom.com Reviewed-by: Johannes Thumshirn jthumshirn@suse.de Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/scsi/lpfc/lpfc_nvmet.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-)
--- a/drivers/scsi/lpfc/lpfc_nvmet.c +++ b/drivers/scsi/lpfc/lpfc_nvmet.c @@ -1138,9 +1138,14 @@ lpfc_nvmet_create_targetport(struct lpfc #endif if (error) { lpfc_printf_log(phba, KERN_ERR, LOG_NVME_DISC, - "6025 Cannot register NVME targetport " - "x%x\n", error); + "6025 Cannot register NVME targetport x%x: " + "portnm %llx nodenm %llx segs %d qs %d\n", + error, + pinfo.port_name, pinfo.node_name, + lpfc_tgttemplate.max_sgl_segments, + lpfc_tgttemplate.max_hw_queues); phba->targetport = NULL; + phba->nvmet_support = 0;
lpfc_nvmet_cleanup_io_context(phba);
@@ -1152,9 +1157,11 @@ lpfc_nvmet_create_targetport(struct lpfc lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC, "6026 Registered NVME " "targetport: %p, private %p " - "portnm %llx nodenm %llx\n", + "portnm %llx nodenm %llx segs %d qs %d\n", phba->targetport, tgtp, - pinfo.port_name, pinfo.node_name); + pinfo.port_name, pinfo.node_name, + lpfc_tgttemplate.max_sgl_segments, + lpfc_tgttemplate.max_hw_queues);
atomic_set(&tgtp->rcv_ls_req_in, 0); atomic_set(&tgtp->rcv_ls_req_out, 0);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger nab@linux-iscsi.org
commit ae072726f6109bb1c94841d6fb3a82dde298ea85 upstream.
Since commit 59b6986dbf fixed a potential NULL pointer dereference by allocating a se_tmr_req for ISCSI_TM_FUNC_TASK_REASSIGN, the se_tmr_req is currently leaked by iscsit_free_cmd() because no iscsi_cmd->se_cmd.se_tfo was associated.
To address this, treat ISCSI_TM_FUNC_TASK_REASSIGN like any other TMR and call transport_init_se_cmd() + target_get_sess_cmd() to setup iscsi_cmd->se_cmd.se_tfo with se_cmd->cmd_kref of 2.
This will ensure normal release operation: once se_cmd->cmd_kref reaches zero and target_release_cmd_kref() is invoked, se_tmr_req will be released via the existing target_free_cmd_mem() and core_tmr_release_req() code.
Reported-by: Donald White dew@datera.io Cc: Donald White dew@datera.io Cc: Mike Christie mchristi@redhat.com Cc: Hannes Reinecke hare@suse.com Signed-off-by: Nicholas Bellinger nab@linux-iscsi.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/target/iscsi/iscsi_target.c | 22 +++++++++------------- 1 file changed, 9 insertions(+), 13 deletions(-)
--- a/drivers/target/iscsi/iscsi_target.c +++ b/drivers/target/iscsi/iscsi_target.c @@ -1960,7 +1960,6 @@ iscsit_handle_task_mgt_cmd(struct iscsi_ struct iscsi_tmr_req *tmr_req; struct iscsi_tm *hdr; int out_of_order_cmdsn = 0, ret; - bool sess_ref = false; u8 function, tcm_function = TMR_UNKNOWN;
hdr = (struct iscsi_tm *) buf; @@ -1993,22 +1992,23 @@ iscsit_handle_task_mgt_cmd(struct iscsi_
cmd->data_direction = DMA_NONE; cmd->tmr_req = kzalloc(sizeof(*cmd->tmr_req), GFP_KERNEL); - if (!cmd->tmr_req) + if (!cmd->tmr_req) { return iscsit_add_reject_cmd(cmd, ISCSI_REASON_BOOKMARK_NO_RESOURCES, buf); + } + + transport_init_se_cmd(&cmd->se_cmd, &iscsi_ops, + conn->sess->se_sess, 0, DMA_NONE, + TCM_SIMPLE_TAG, cmd->sense_buffer + 2); + + target_get_sess_cmd(&cmd->se_cmd, true);
/* * TASK_REASSIGN for ERL=2 / connection stays inside of * LIO-Target $FABRIC_MOD */ if (function != ISCSI_TM_FUNC_TASK_REASSIGN) { - transport_init_se_cmd(&cmd->se_cmd, &iscsi_ops, - conn->sess->se_sess, 0, DMA_NONE, - TCM_SIMPLE_TAG, cmd->sense_buffer + 2); - - target_get_sess_cmd(&cmd->se_cmd, true); - sess_ref = true; tcm_function = iscsit_convert_tmf(function); if (tcm_function == TMR_UNKNOWN) { pr_err("Unknown iSCSI TMR Function:" @@ -2124,12 +2124,8 @@ attach: * For connection recovery, this is also the default action for * TMR TASK_REASSIGN. */ - if (sess_ref) { - pr_debug("Handle TMR, using sess_ref=true check\n"); - target_put_sess_cmd(&cmd->se_cmd); - } - iscsit_add_cmd_to_response_queue(cmd, conn, cmd->i_state); + target_put_sess_cmd(&cmd->se_cmd); return 0; } EXPORT_SYMBOL(iscsit_handle_task_mgt_cmd);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger nab@linux-iscsi.org
commit 3fc9fb13a4b2576aeab86c62fd64eb29ab68659c upstream.
This patch fixes a se_cmd->cmd_kref reference leak that can occur when a non-immediate TMR is processed out of command sequence number order, and CMDSN_LOWER_THAN_EXP is returned by iscsit_sequence_cmd().
To address this bug, call target_put_sess_cmd() during this special case following what iscsit_process_scsi_cmd() does upon CMDSN_LOWER_THAN_EXP.
Cc: Mike Christie mchristi@redhat.com Cc: Hannes Reinecke hare@suse.com Signed-off-by: Nicholas Bellinger nab@linux-iscsi.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/target/iscsi/iscsi_target.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/drivers/target/iscsi/iscsi_target.c +++ b/drivers/target/iscsi/iscsi_target.c @@ -2099,12 +2099,14 @@ attach:
if (!(hdr->opcode & ISCSI_OP_IMMEDIATE)) { int cmdsn_ret = iscsit_sequence_cmd(conn, cmd, buf, hdr->cmdsn); - if (cmdsn_ret == CMDSN_HIGHER_THAN_EXP) + if (cmdsn_ret == CMDSN_HIGHER_THAN_EXP) { out_of_order_cmdsn = 1; - else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) + } else if (cmdsn_ret == CMDSN_LOWER_THAN_EXP) { + target_put_sess_cmd(&cmd->se_cmd); return 0; - else if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) + } else if (cmdsn_ret == CMDSN_ERROR_CANNOT_RECOVER) { return -1; + } } iscsit_ack_from_expstatsn(conn, be32_to_cpu(hdr->exp_statsn));
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: tangwenji tang.wenji@zte.com.cn
commit 88fb2fa7db7510bf1078226ab48d162d9854f3d4 upstream.
The target system kernel crashes when the initiator executes the sg_persist -A command, because the second argument is set to NULL when core_tmr_lun_reset is called in the core_scsi3_pro_preempt function.
This fixes a regression originally introduced by:
commit 51ec502a32665fed66c7f03799ede4023b212536 Author: Bart Van Assche bart.vanassche@sandisk.com Date: Tue Feb 14 16:25:54 2017 -0800
target: Delete tmr from list before processing
Signed-off-by: tangwenji tang.wenji@zte.com.cn Signed-off-by: Nicholas Bellinger nab@linux-iscsi.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/target/target_core_tmr.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/target/target_core_tmr.c +++ b/drivers/target/target_core_tmr.c @@ -217,7 +217,8 @@ static void core_tmr_drain_tmr_list( * LUN_RESET tmr.. */ spin_lock_irqsave(&dev->se_tmr_lock, flags); - list_del_init(&tmr->tmr_list); + if (tmr) + list_del_init(&tmr->tmr_list); list_for_each_entry_safe(tmr_p, tmr_pp, &dev->dev_tmr_list, tmr_list) { cmd = tmr_p->task_cmd; if (!cmd) {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: tangwenji tang.wenji@zte.com.cn
commit c58a252beb04cf0e02d6a746b2ed7ea89b6deb71 upstream.
When at least two initiators register PR on the same LUN, the target returns bad data because of a buffer offset error, so executing 'sg_persist -s' on the initiator may cause it to segfault. (The ADDITIONAL DESCRIPTOR LENGTH is a 4-byte field; the offset was not advanced past it, so the following data overwrote it.)
This fixes a regression originally introduced by:
commit a85d667e58bddf73be84d1981b41eaac985ed216 Author: Bart Van Assche bart.vanassche@sandisk.com Date: Tue May 23 16:48:27 2017 -0700
target: Use {get,put}_unaligned_be*() instead of open coding these functions
Signed-off-by: tangwenji tang.wenji@zte.com.cn Signed-off-by: Nicholas Bellinger nab@linux-iscsi.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/target/target_core_pr.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/target/target_core_pr.c +++ b/drivers/target/target_core_pr.c @@ -4011,6 +4011,7 @@ core_scsi3_pri_read_full_status(struct s * Set the ADDITIONAL DESCRIPTOR LENGTH */ put_unaligned_be32(desc_len, &buf[off]); + off += 4; /* * Size of full desctipor header minus TransportID * containing $FABRIC_MOD specific) initiator device/port
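The mechanics of the fix as a short sketch (made-up field layout): put_unaligned_be32() stores 4 bytes at the given position, so the running offset has to advance by 4 before the next field is emitted, otherwise the next write lands on top of the length field.

#include <linux/kernel.h>
#include <asm/unaligned.h>

/* Sketch: emit two consecutive 4-byte big-endian fields into a buffer. */
static unsigned int demo_emit_fields(unsigned char *buf, unsigned int off,
                                     u32 desc_len, u32 next_field)
{
        put_unaligned_be32(desc_len, &buf[off]);
        off += 4;                       /* the step the fix adds */
        put_unaligned_be32(next_field, &buf[off]);
        off += 4;
        return off;
}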
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger nab@linux-iscsi.org
commit 1c79df1f349fb6050016cea4ef1dfbc3853a5685 upstream.
This patch fixes a bug during QUEUE_FULL where transport_complete_qf() calls transport_complete_task_attr() after it's already been invoked by target_complete_ok_work() or transport_generic_request_failure() during initial completion, preceding QUEUE_FULL.
This will result in se_device->simple_cmds, se_device->dev_cur_ordered_id and/or se_device->dev_ordered_sync being updated multiple times for a single se_cmd.
To address this bug, clear SCF_TASK_ATTR_SET after the first call to transport_complete_task_attr(), and avoid updating SCSI task attribute related counters for any subsequent calls.
Also, when a se_cmd is deferred due to ordered tags and executed via target_restart_delayed_cmds(), set CMD_T_SENT before execution matching what target_execute_cmd() does.
Cc: Michael Cyr mikecyr@linux.vnet.ibm.com Cc: Bryant G. Ly bryantly@linux.vnet.ibm.com Cc: Mike Christie mchristi@redhat.com Cc: Hannes Reinecke hare@suse.com Signed-off-by: Nicholas Bellinger nab@linux-iscsi.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/target/target_core_transport.c | 4 ++++ 1 file changed, 4 insertions(+)
--- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -2010,6 +2010,8 @@ static void target_restart_delayed_cmds( list_del(&cmd->se_delayed_node); spin_unlock(&dev->delayed_cmd_lock);
+ cmd->transport_state |= CMD_T_SENT; + __target_execute_cmd(cmd, true);
if (cmd->sam_task_attr == TCM_ORDERED_TAG) @@ -2045,6 +2047,8 @@ static void transport_complete_task_attr pr_debug("Incremented dev_cur_ordered_id: %u for ORDERED\n", dev->dev_cur_ordered_id); } + cmd->se_cmd_flags &= ~SCF_TASK_ATTR_SET; + restart: target_restart_delayed_cmds(dev); }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger nab@linux-iscsi.org
commit fd2f928b0ddd2fe8876d4f1344df2ace2b715a4d upstream.
The recent addition of transport_check_aborted_status() within transport_generic_request_failure(), meant to avoid sending a SCSI status exception after CMD_T_ABORTED w/ TAS=1 has occurred, introduced a COMPARE_AND_WRITE early failure regression.
Namely, when COMPARE_AND_WRITE fails and se_device->caw_sem has been taken by sbc_compare_and_write(), if the new check for transport_check_aborted_status() returns true and exits, cmd->transport_complete_callback() -> compare_and_write_post() is skipped, never releasing se_device->caw_sem.
This regression was originally introduced by:
commit e3b88ee95b4e4bf3e9729a4695d695b9c7c296c8 Author: Bart Van Assche bart.vanassche@sandisk.com Date: Tue Feb 14 16:25:45 2017 -0800
target: Fix handling of aborted failed commands
To address this bug, move the transport_check_aborted_status() call after transport_complete_task_attr() and cmd->transport_complete_callback().
Cc: Mike Christie mchristi@redhat.com Cc: Hannes Reinecke hare@suse.com Cc: Bart Van Assche bart.vanassche@sandisk.com Signed-off-by: Nicholas Bellinger nab@linux-iscsi.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/target/target_core_transport.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
--- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -1730,9 +1730,6 @@ void transport_generic_request_failure(s { int ret = 0, post_ret = 0;
- if (transport_check_aborted_status(cmd, 1)) - return; - pr_debug("-----[ Storage Engine Exception; sense_reason %d\n", sense_reason); target_show_cmd("-----[ ", cmd); @@ -1741,6 +1738,7 @@ void transport_generic_request_failure(s * For SAM Task Attribute emulation for failed struct se_cmd */ transport_complete_task_attr(cmd); + /* * Handle special case for COMPARE_AND_WRITE failure, where the * callback is expected to drop the per device ->caw_sem. @@ -1749,6 +1747,9 @@ void transport_generic_request_failure(s cmd->transport_complete_callback) cmd->transport_complete_callback(cmd, false, &post_ret);
+ if (transport_check_aborted_status(cmd, 1)) + return; + switch (sense_reason) { case TCM_NON_EXISTENT_LUN: case TCM_UNSUPPORTED_SCSI_OPCODE:
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger nab@linux-iscsi.org
commit 9574a497df2bbc0a676b609ce0dd24d237cee3a6 upstream.
This patch fixes a potential endless loop during QUEUE_FULL, where the cmd->se_tfo->write_pending() callback fails repeatedly but __transport_wait_for_tasks() has already been invoked to quiesce the outstanding se_cmd descriptor.
To address this bug, this patch adds a CMD_T_STOP|CMD_T_ABORTED check within transport_write_pending_qf() and invokes the existing se_cmd->t_transport_stop_comp to signal quiesce completion back to __transport_wait_for_tasks().
Cc: Mike Christie mchristi@redhat.com Cc: Hannes Reinecke hare@suse.com Cc: Bryant G. Ly bryantly@linux.vnet.ibm.com Cc: Michael Cyr mikecyr@linux.vnet.ibm.com Cc: Potnuri Bharat Teja bharat@chelsio.com Cc: Sagi Grimberg sagi@grimberg.me Signed-off-by: Nicholas Bellinger nab@linux-iscsi.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/target/target_core_transport.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
--- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -2575,7 +2575,20 @@ EXPORT_SYMBOL(transport_generic_new_cmd)
static void transport_write_pending_qf(struct se_cmd *cmd) { + unsigned long flags; int ret; + bool stop; + + spin_lock_irqsave(&cmd->t_state_lock, flags); + stop = (cmd->transport_state & (CMD_T_STOP | CMD_T_ABORTED)); + spin_unlock_irqrestore(&cmd->t_state_lock, flags); + + if (stop) { + pr_debug("%s:%d CMD_T_STOP|CMD_T_ABORTED for ITT: 0x%08llx\n", + __func__, __LINE__, cmd->tag); + complete_all(&cmd->t_transport_stop_comp); + return; + }
ret = cmd->se_tfo->write_pending(cmd); if (ret) {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Bellinger nab@linux-iscsi.org
commit 1c21a48055a67ceb693e9c2587824a8de60a217c upstream.
This patch fixes a bug where early se_cmd exceptions that occur before backend execution can result in use-after-free if/when a subsequent ABORT_TASK occurs for the same tag.
Since an early se_cmd exception will have had se_cmd added to se_session->sess_cmd_list via target_get_sess_cmd(), it will not have CMD_T_COMPLETE set by the usual target_complete_cmd() backend completion path.
This causes a subsequent ABORT_TASK + __target_check_io_state() to signal ABORT_TASK should proceed. As core_tmr_abort_task() executes, it will bring the outstanding se_cmd->cmd_kref count down to zero releasing se_cmd, after se_cmd has already been queued with error status into fabric driver response path code.
To address this bug, introduce a CMD_T_PRE_EXECUTE bit that is set at target_get_sess_cmd() time, and cleared immediately before backend driver dispatch in target_execute_cmd() once CMD_T_ACTIVE is set.
Then, check CMD_T_PRE_EXECUTE within __target_check_io_state() to determine when an early exception has occurred, and avoid aborting this se_cmd since it will have already been queued into fabric driver response path code.
Reported-by: Donald White dew@datera.io Cc: Donald White dew@datera.io Cc: Mike Christie mchristi@redhat.com Cc: Hannes Reinecke hare@suse.com Signed-off-by: Nicholas Bellinger nab@linux-iscsi.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/target/target_core_tmr.c | 9 +++++++++ drivers/target/target_core_transport.c | 2 ++ include/target/target_core_base.h | 1 + 3 files changed, 12 insertions(+)
--- a/drivers/target/target_core_tmr.c +++ b/drivers/target/target_core_tmr.c @@ -133,6 +133,15 @@ static bool __target_check_io_state(stru spin_unlock(&se_cmd->t_state_lock); return false; } + if (se_cmd->transport_state & CMD_T_PRE_EXECUTE) { + if (se_cmd->scsi_status) { + pr_debug("Attempted to abort io tag: %llu early failure" + " status: 0x%02x\n", se_cmd->tag, + se_cmd->scsi_status); + spin_unlock(&se_cmd->t_state_lock); + return false; + } + } if (sess->sess_tearing_down || se_cmd->cmd_wait_set) { pr_debug("Attempted to abort io tag: %llu already shutdown," " skipping\n", se_cmd->tag); --- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -1974,6 +1974,7 @@ void target_execute_cmd(struct se_cmd *c }
cmd->t_state = TRANSPORT_PROCESSING; + cmd->transport_state &= ~CMD_T_PRE_EXECUTE; cmd->transport_state |= CMD_T_ACTIVE | CMD_T_SENT; spin_unlock_irq(&cmd->t_state_lock);
@@ -2682,6 +2683,7 @@ int target_get_sess_cmd(struct se_cmd *s ret = -ESHUTDOWN; goto out; } + se_cmd->transport_state |= CMD_T_PRE_EXECUTE; list_add_tail(&se_cmd->se_cmd_list, &se_sess->sess_cmd_list); out: spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); --- a/include/target/target_core_base.h +++ b/include/target/target_core_base.h @@ -490,6 +490,7 @@ struct se_cmd { #define CMD_T_STOP (1 << 5) #define CMD_T_TAS (1 << 10) #define CMD_T_FABRIC_STOP (1 << 11) +#define CMD_T_PRE_EXECUTE (1 << 12) spinlock_t t_state_lock; struct kref cmd_kref; struct completion t_transport_stop_comp;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Boris Brezillon boris.brezillon@free-electrons.com
commit 1530578abdac4edce9244c7a1962ded3ffdb58ce upstream.
Commit e8e3edb95ce6 ("mtd: create per-device and module-scope debugfs entries") tried to make MTD related debugfs stuff consistent across the MTD framework by creating a root <debugfs>/mtd/ directory containing one directory per MTD device.
The problem is that, by default, the MTD layer only registers the master device if no partitions are defined for this master. This behavior breaks all drivers that expect mtd->dbg.dfs_dir to be filled correctly after calling mtd_device_register() in order to add their own debugfs entries.
The only way we can force all MTD masters to be registered no matter if they expose partitions or not is by enabling the CONFIG_MTD_PARTITIONED_MASTER option.
In such situations, there's no other solution but to accept skipping debugfs initialization when dbg.dfs_dir is invalid, and when this happens, inform the user that he should consider enabling CONFIG_MTD_PARTITIONED_MASTER.
Fixes: e8e3edb95ce6 ("mtd: create per-device and module-scope debugfs entries") Cc: Mario J. Rugiero mrugiero@gmail.com Signed-off-by: Boris Brezillon boris.brezillon@free-electrons.com Reported-by: Richard Weinberger richard@nod.at Signed-off-by: Richard Weinberger richard@nod.at Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/devices/docg3.c | 7 ++++++- drivers/mtd/nand/nandsim.c | 13 +++++++++---- 2 files changed, 15 insertions(+), 5 deletions(-)
--- a/drivers/mtd/devices/docg3.c +++ b/drivers/mtd/devices/docg3.c @@ -1814,8 +1814,13 @@ static void __init doc_dbg_register(stru struct dentry *root = floor->dbg.dfs_dir; struct docg3 *docg3 = floor->priv;
- if (IS_ERR_OR_NULL(root)) + if (IS_ERR_OR_NULL(root)) { + if (IS_ENABLED(CONFIG_DEBUG_FS) && + !IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) + dev_warn(floor->dev.parent, + "CONFIG_MTD_PARTITIONED_MASTER must be enabled to expose debugfs stuff\n"); return; + }
debugfs_create_file("docg3_flashcontrol", S_IRUSR, root, docg3, &flashcontrol_fops); --- a/drivers/mtd/nand/nandsim.c +++ b/drivers/mtd/nand/nandsim.c @@ -520,11 +520,16 @@ static int nandsim_debugfs_create(struct struct dentry *root = nsmtd->dbg.dfs_dir; struct dentry *dent;
- if (!IS_ENABLED(CONFIG_DEBUG_FS)) + /* + * Just skip debugfs initialization when the debugfs directory is + * missing. + */ + if (IS_ERR_OR_NULL(root)) { + if (IS_ENABLED(CONFIG_DEBUG_FS) && + !IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) + NS_WARN("CONFIG_MTD_PARTITIONED_MASTER must be enabled to expose debugfs stuff\n"); return 0; - - if (IS_ERR_OR_NULL(root)) - return -1; + }
dent = debugfs_create_file("nandsim_wear_report", S_IRUSR, root, dev, &dfs_fops);
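The same guard, as a hedged standalone sketch for any driver that adds files under mtd->dbg.dfs_dir (names are made up):

#include <linux/kernel.h>
#include <linux/mtd/mtd.h>
#include <linux/debugfs.h>
#include <linux/err.h>

static void demo_register_debugfs(struct mtd_info *mtd, void *priv,
                                  const struct file_operations *fops)
{
        struct dentry *root = mtd->dbg.dfs_dir;

        /* Only a registered (master) device gets a debugfs directory. */
        if (IS_ERR_OR_NULL(root)) {
                if (IS_ENABLED(CONFIG_DEBUG_FS) &&
                    !IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER))
                        pr_warn("enable CONFIG_MTD_PARTITIONED_MASTER for per-device debugfs\n");
                return;
        }

        debugfs_create_file("demo_stats", 0400, root, priv, fops);
}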
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Boris Brezillon boris.brezillon@free-electrons.com
commit b9bb98424c51437973b854691aa1e9b2bfd348f5 upstream.
Commit 6e532afaca8e ("mtd: nand: atmel: Add PM ops") started to use the nand_reset() function which was not yet exported by the NAND framework (because it was only used internally before that). Export this symbol to avoid build errors when the driver is enabled as a module.
Fixes: 6e532afaca8e ("mtd: nand: atmel: Add PM ops") Signed-off-by: Boris Brezillon boris.brezillon@free-electrons.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/nand/nand_base.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/mtd/nand/nand_base.c +++ b/drivers/mtd/nand/nand_base.c @@ -1246,6 +1246,7 @@ int nand_reset(struct nand_chip *chip, i
return 0; } +EXPORT_SYMBOL_GPL(nand_reset);
/** * nand_check_erased_buf - check if a buffer contains (almost) only 0xff data
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Boris Brezillon boris.brezillon@free-electrons.com
commit 1533bfa6f6b6bcca1ea1f172ef4a1c5ce5e7b335 upstream.
commit 6e532afaca8e ("mtd: nand: atmel: Add PM ops") was defining PM ops but nothing was using/referencing those PM ops.
Fixes: 6e532afaca8e ("mtd: nand: atmel: Add PM ops") Cc: Romain Izard romain.izard.pro@gmail.com Signed-off-by: Boris Brezillon boris.brezillon@free-electrons.com Acked-by: Wenyou Yang wenyou.yang@microchip.com Tested-by: Romain Izard romain.izard.pro@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/nand/atmel/nand-controller.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/mtd/nand/atmel/nand-controller.c +++ b/drivers/mtd/nand/atmel/nand-controller.c @@ -2547,6 +2547,7 @@ static struct platform_driver atmel_nand .driver = { .name = "atmel-nand-controller", .of_match_table = of_match_ptr(atmel_nand_controller_of_ids), + .pm = &atmel_nand_controller_pm_ops, }, .probe = atmel_nand_controller_probe, .remove = atmel_nand_controller_remove,
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Roger Quadros rogerq@ti.com
commit 739c64414f01748a36e7d82c8e0611dea94412bd upstream.
Since v4.12, NAND subpage writes were causing a NULL pointer dereference on OMAP platforms (omap2-nand) using OMAP_ECC_BCH4_CODE_HW, OMAP_ECC_BCH8_CODE_HW and OMAP_ECC_BCH16_CODE_HW.
This is because for those ECC modes, omap_calculate_ecc_bch() generates ECC bytes for the entire (multi-sector) page and this can overflow the ECC buffer provided by nand_write_subpage_hwecc() as it expects ecc.calculate() to return ECC bytes for just one sector.
However, the root cause of the problem is present since v3.9 but was not seen then as NAND buffers were being allocated as one big chunk prior to commit 3deb9979c731 ("mtd: nand: allocate aligned buffers if NAND_OWN_BUFFERS is unset").
Fix the issue by providing an OMAP-optimized write_subpage() implementation.
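As a rough worked example (sizes assumed for a common BCH8 layout, not stated in the patch): with 14 ECC bytes per 512-byte sector, a 4 KiB page has 8 sectors, so the whole-page calculation emits 8 x 14 = 112 ECC bytes per call, while nand_write_subpage_hwecc() only expects the 14 bytes for the one sector it asked about; everything beyond that is written past the space reserved for that step.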
Fixes: 62116e5171e0 ("mtd: nand: omap2: Support for hardware BCH error correction.") Signed-off-by: Roger Quadros rogerq@ti.com Signed-off-by: Boris Brezillon boris.brezillon@free-electrons.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/nand/omap2.c | 339 +++++++++++++++++++++++++++++++---------------- 1 file changed, 224 insertions(+), 115 deletions(-)
--- a/drivers/mtd/nand/omap2.c +++ b/drivers/mtd/nand/omap2.c @@ -1133,129 +1133,172 @@ static u8 bch8_polynomial[] = {0xef, 0x 0x97, 0x79, 0xe5, 0x24, 0xb5};
/** - * omap_calculate_ecc_bch - Generate bytes of ECC bytes + * _omap_calculate_ecc_bch - Generate ECC bytes for one sector * @mtd: MTD device structure * @dat: The pointer to data on which ecc is computed * @ecc_code: The ecc_code buffer + * @i: The sector number (for a multi sector page) * - * Support calculating of BCH4/8 ecc vectors for the page + * Support calculating of BCH4/8/16 ECC vectors for one sector + * within a page. Sector number is in @i. */ -static int __maybe_unused omap_calculate_ecc_bch(struct mtd_info *mtd, - const u_char *dat, u_char *ecc_calc) +static int _omap_calculate_ecc_bch(struct mtd_info *mtd, + const u_char *dat, u_char *ecc_calc, int i) { struct omap_nand_info *info = mtd_to_omap(mtd); int eccbytes = info->nand.ecc.bytes; struct gpmc_nand_regs *gpmc_regs = &info->reg; u8 *ecc_code; - unsigned long nsectors, bch_val1, bch_val2, bch_val3, bch_val4; + unsigned long bch_val1, bch_val2, bch_val3, bch_val4; u32 val; - int i, j; + int j; + + ecc_code = ecc_calc; + switch (info->ecc_opt) { + case OMAP_ECC_BCH8_CODE_HW_DETECTION_SW: + case OMAP_ECC_BCH8_CODE_HW: + bch_val1 = readl(gpmc_regs->gpmc_bch_result0[i]); + bch_val2 = readl(gpmc_regs->gpmc_bch_result1[i]); + bch_val3 = readl(gpmc_regs->gpmc_bch_result2[i]); + bch_val4 = readl(gpmc_regs->gpmc_bch_result3[i]); + *ecc_code++ = (bch_val4 & 0xFF); + *ecc_code++ = ((bch_val3 >> 24) & 0xFF); + *ecc_code++ = ((bch_val3 >> 16) & 0xFF); + *ecc_code++ = ((bch_val3 >> 8) & 0xFF); + *ecc_code++ = (bch_val3 & 0xFF); + *ecc_code++ = ((bch_val2 >> 24) & 0xFF); + *ecc_code++ = ((bch_val2 >> 16) & 0xFF); + *ecc_code++ = ((bch_val2 >> 8) & 0xFF); + *ecc_code++ = (bch_val2 & 0xFF); + *ecc_code++ = ((bch_val1 >> 24) & 0xFF); + *ecc_code++ = ((bch_val1 >> 16) & 0xFF); + *ecc_code++ = ((bch_val1 >> 8) & 0xFF); + *ecc_code++ = (bch_val1 & 0xFF); + break; + case OMAP_ECC_BCH4_CODE_HW_DETECTION_SW: + case OMAP_ECC_BCH4_CODE_HW: + bch_val1 = readl(gpmc_regs->gpmc_bch_result0[i]); + bch_val2 = readl(gpmc_regs->gpmc_bch_result1[i]); + *ecc_code++ = ((bch_val2 >> 12) & 0xFF); + *ecc_code++ = ((bch_val2 >> 4) & 0xFF); + *ecc_code++ = ((bch_val2 & 0xF) << 4) | + ((bch_val1 >> 28) & 0xF); + *ecc_code++ = ((bch_val1 >> 20) & 0xFF); + *ecc_code++ = ((bch_val1 >> 12) & 0xFF); + *ecc_code++ = ((bch_val1 >> 4) & 0xFF); + *ecc_code++ = ((bch_val1 & 0xF) << 4); + break; + case OMAP_ECC_BCH16_CODE_HW: + val = readl(gpmc_regs->gpmc_bch_result6[i]); + ecc_code[0] = ((val >> 8) & 0xFF); + ecc_code[1] = ((val >> 0) & 0xFF); + val = readl(gpmc_regs->gpmc_bch_result5[i]); + ecc_code[2] = ((val >> 24) & 0xFF); + ecc_code[3] = ((val >> 16) & 0xFF); + ecc_code[4] = ((val >> 8) & 0xFF); + ecc_code[5] = ((val >> 0) & 0xFF); + val = readl(gpmc_regs->gpmc_bch_result4[i]); + ecc_code[6] = ((val >> 24) & 0xFF); + ecc_code[7] = ((val >> 16) & 0xFF); + ecc_code[8] = ((val >> 8) & 0xFF); + ecc_code[9] = ((val >> 0) & 0xFF); + val = readl(gpmc_regs->gpmc_bch_result3[i]); + ecc_code[10] = ((val >> 24) & 0xFF); + ecc_code[11] = ((val >> 16) & 0xFF); + ecc_code[12] = ((val >> 8) & 0xFF); + ecc_code[13] = ((val >> 0) & 0xFF); + val = readl(gpmc_regs->gpmc_bch_result2[i]); + ecc_code[14] = ((val >> 24) & 0xFF); + ecc_code[15] = ((val >> 16) & 0xFF); + ecc_code[16] = ((val >> 8) & 0xFF); + ecc_code[17] = ((val >> 0) & 0xFF); + val = readl(gpmc_regs->gpmc_bch_result1[i]); + ecc_code[18] = ((val >> 24) & 0xFF); + ecc_code[19] = ((val >> 16) & 0xFF); + ecc_code[20] = ((val >> 8) & 0xFF); + ecc_code[21] = ((val >> 0) & 0xFF); + val = readl(gpmc_regs->gpmc_bch_result0[i]); + 
ecc_code[22] = ((val >> 24) & 0xFF); + ecc_code[23] = ((val >> 16) & 0xFF); + ecc_code[24] = ((val >> 8) & 0xFF); + ecc_code[25] = ((val >> 0) & 0xFF); + break; + default: + return -EINVAL; + } + + /* ECC scheme specific syndrome customizations */ + switch (info->ecc_opt) { + case OMAP_ECC_BCH4_CODE_HW_DETECTION_SW: + /* Add constant polynomial to remainder, so that + * ECC of blank pages results in 0x0 on reading back + */ + for (j = 0; j < eccbytes; j++) + ecc_calc[j] ^= bch4_polynomial[j]; + break; + case OMAP_ECC_BCH4_CODE_HW: + /* Set 8th ECC byte as 0x0 for ROM compatibility */ + ecc_calc[eccbytes - 1] = 0x0; + break; + case OMAP_ECC_BCH8_CODE_HW_DETECTION_SW: + /* Add constant polynomial to remainder, so that + * ECC of blank pages results in 0x0 on reading back + */ + for (j = 0; j < eccbytes; j++) + ecc_calc[j] ^= bch8_polynomial[j]; + break; + case OMAP_ECC_BCH8_CODE_HW: + /* Set 14th ECC byte as 0x0 for ROM compatibility */ + ecc_calc[eccbytes - 1] = 0x0; + break; + case OMAP_ECC_BCH16_CODE_HW: + break; + default: + return -EINVAL; + } + + return 0; +} + +/** + * omap_calculate_ecc_bch_sw - ECC generator for sector for SW based correction + * @mtd: MTD device structure + * @dat: The pointer to data on which ecc is computed + * @ecc_code: The ecc_code buffer + * + * Support calculating of BCH4/8/16 ECC vectors for one sector. This is used + * when SW based correction is required as ECC is required for one sector + * at a time. + */ +static int omap_calculate_ecc_bch_sw(struct mtd_info *mtd, + const u_char *dat, u_char *ecc_calc) +{ + return _omap_calculate_ecc_bch(mtd, dat, ecc_calc, 0); +} + +/** + * omap_calculate_ecc_bch_multi - Generate ECC for multiple sectors + * @mtd: MTD device structure + * @dat: The pointer to data on which ecc is computed + * @ecc_code: The ecc_code buffer + * + * Support calculating of BCH4/8/16 ecc vectors for the entire page in one go. + */ +static int omap_calculate_ecc_bch_multi(struct mtd_info *mtd, + const u_char *dat, u_char *ecc_calc) +{ + struct omap_nand_info *info = mtd_to_omap(mtd); + int eccbytes = info->nand.ecc.bytes; + unsigned long nsectors; + int i, ret;
nsectors = ((readl(info->reg.gpmc_ecc_config) >> 4) & 0x7) + 1; for (i = 0; i < nsectors; i++) { - ecc_code = ecc_calc; - switch (info->ecc_opt) { - case OMAP_ECC_BCH8_CODE_HW_DETECTION_SW: - case OMAP_ECC_BCH8_CODE_HW: - bch_val1 = readl(gpmc_regs->gpmc_bch_result0[i]); - bch_val2 = readl(gpmc_regs->gpmc_bch_result1[i]); - bch_val3 = readl(gpmc_regs->gpmc_bch_result2[i]); - bch_val4 = readl(gpmc_regs->gpmc_bch_result3[i]); - *ecc_code++ = (bch_val4 & 0xFF); - *ecc_code++ = ((bch_val3 >> 24) & 0xFF); - *ecc_code++ = ((bch_val3 >> 16) & 0xFF); - *ecc_code++ = ((bch_val3 >> 8) & 0xFF); - *ecc_code++ = (bch_val3 & 0xFF); - *ecc_code++ = ((bch_val2 >> 24) & 0xFF); - *ecc_code++ = ((bch_val2 >> 16) & 0xFF); - *ecc_code++ = ((bch_val2 >> 8) & 0xFF); - *ecc_code++ = (bch_val2 & 0xFF); - *ecc_code++ = ((bch_val1 >> 24) & 0xFF); - *ecc_code++ = ((bch_val1 >> 16) & 0xFF); - *ecc_code++ = ((bch_val1 >> 8) & 0xFF); - *ecc_code++ = (bch_val1 & 0xFF); - break; - case OMAP_ECC_BCH4_CODE_HW_DETECTION_SW: - case OMAP_ECC_BCH4_CODE_HW: - bch_val1 = readl(gpmc_regs->gpmc_bch_result0[i]); - bch_val2 = readl(gpmc_regs->gpmc_bch_result1[i]); - *ecc_code++ = ((bch_val2 >> 12) & 0xFF); - *ecc_code++ = ((bch_val2 >> 4) & 0xFF); - *ecc_code++ = ((bch_val2 & 0xF) << 4) | - ((bch_val1 >> 28) & 0xF); - *ecc_code++ = ((bch_val1 >> 20) & 0xFF); - *ecc_code++ = ((bch_val1 >> 12) & 0xFF); - *ecc_code++ = ((bch_val1 >> 4) & 0xFF); - *ecc_code++ = ((bch_val1 & 0xF) << 4); - break; - case OMAP_ECC_BCH16_CODE_HW: - val = readl(gpmc_regs->gpmc_bch_result6[i]); - ecc_code[0] = ((val >> 8) & 0xFF); - ecc_code[1] = ((val >> 0) & 0xFF); - val = readl(gpmc_regs->gpmc_bch_result5[i]); - ecc_code[2] = ((val >> 24) & 0xFF); - ecc_code[3] = ((val >> 16) & 0xFF); - ecc_code[4] = ((val >> 8) & 0xFF); - ecc_code[5] = ((val >> 0) & 0xFF); - val = readl(gpmc_regs->gpmc_bch_result4[i]); - ecc_code[6] = ((val >> 24) & 0xFF); - ecc_code[7] = ((val >> 16) & 0xFF); - ecc_code[8] = ((val >> 8) & 0xFF); - ecc_code[9] = ((val >> 0) & 0xFF); - val = readl(gpmc_regs->gpmc_bch_result3[i]); - ecc_code[10] = ((val >> 24) & 0xFF); - ecc_code[11] = ((val >> 16) & 0xFF); - ecc_code[12] = ((val >> 8) & 0xFF); - ecc_code[13] = ((val >> 0) & 0xFF); - val = readl(gpmc_regs->gpmc_bch_result2[i]); - ecc_code[14] = ((val >> 24) & 0xFF); - ecc_code[15] = ((val >> 16) & 0xFF); - ecc_code[16] = ((val >> 8) & 0xFF); - ecc_code[17] = ((val >> 0) & 0xFF); - val = readl(gpmc_regs->gpmc_bch_result1[i]); - ecc_code[18] = ((val >> 24) & 0xFF); - ecc_code[19] = ((val >> 16) & 0xFF); - ecc_code[20] = ((val >> 8) & 0xFF); - ecc_code[21] = ((val >> 0) & 0xFF); - val = readl(gpmc_regs->gpmc_bch_result0[i]); - ecc_code[22] = ((val >> 24) & 0xFF); - ecc_code[23] = ((val >> 16) & 0xFF); - ecc_code[24] = ((val >> 8) & 0xFF); - ecc_code[25] = ((val >> 0) & 0xFF); - break; - default: - return -EINVAL; - } - - /* ECC scheme specific syndrome customizations */ - switch (info->ecc_opt) { - case OMAP_ECC_BCH4_CODE_HW_DETECTION_SW: - /* Add constant polynomial to remainder, so that - * ECC of blank pages results in 0x0 on reading back */ - for (j = 0; j < eccbytes; j++) - ecc_calc[j] ^= bch4_polynomial[j]; - break; - case OMAP_ECC_BCH4_CODE_HW: - /* Set 8th ECC byte as 0x0 for ROM compatibility */ - ecc_calc[eccbytes - 1] = 0x0; - break; - case OMAP_ECC_BCH8_CODE_HW_DETECTION_SW: - /* Add constant polynomial to remainder, so that - * ECC of blank pages results in 0x0 on reading back */ - for (j = 0; j < eccbytes; j++) - ecc_calc[j] ^= bch8_polynomial[j]; - break; - case 
OMAP_ECC_BCH8_CODE_HW: - /* Set 14th ECC byte as 0x0 for ROM compatibility */ - ecc_calc[eccbytes - 1] = 0x0; - break; - case OMAP_ECC_BCH16_CODE_HW: - break; - default: - return -EINVAL; - } + ret = _omap_calculate_ecc_bch(mtd, dat, ecc_calc, i); + if (ret) + return ret;
- ecc_calc += eccbytes; + ecc_calc += eccbytes; }
return 0; @@ -1496,7 +1539,7 @@ static int omap_write_page_bch(struct mt chip->write_buf(mtd, buf, mtd->writesize);
/* Update ecc vector from GPMC result registers */ - chip->ecc.calculate(mtd, buf, &ecc_calc[0]); + omap_calculate_ecc_bch_multi(mtd, buf, &ecc_calc[0]);
ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 0, chip->ecc.total); @@ -1509,6 +1552,72 @@ static int omap_write_page_bch(struct mt }
/** + * omap_write_subpage_bch - BCH hardware ECC based subpage write + * @mtd: mtd info structure + * @chip: nand chip info structure + * @offset: column address of subpage within the page + * @data_len: data length + * @buf: data buffer + * @oob_required: must write chip->oob_poi to OOB + * @page: page number to write + * + * OMAP optimized subpage write method. + */ +static int omap_write_subpage_bch(struct mtd_info *mtd, + struct nand_chip *chip, u32 offset, + u32 data_len, const u8 *buf, + int oob_required, int page) +{ + u8 *ecc_calc = chip->buffers->ecccalc; + int ecc_size = chip->ecc.size; + int ecc_bytes = chip->ecc.bytes; + int ecc_steps = chip->ecc.steps; + u32 start_step = offset / ecc_size; + u32 end_step = (offset + data_len - 1) / ecc_size; + int step, ret = 0; + + /* + * Write entire page at one go as it would be optimal + * as ECC is calculated by hardware. + * ECC is calculated for all subpages but we choose + * only what we want. + */ + + /* Enable GPMC ECC engine */ + chip->ecc.hwctl(mtd, NAND_ECC_WRITE); + + /* Write data */ + chip->write_buf(mtd, buf, mtd->writesize); + + for (step = 0; step < ecc_steps; step++) { + /* mask ECC of un-touched subpages by padding 0xFF */ + if (step < start_step || step > end_step) + memset(ecc_calc, 0xff, ecc_bytes); + else + ret = _omap_calculate_ecc_bch(mtd, buf, ecc_calc, step); + + if (ret) + return ret; + + buf += ecc_size; + ecc_calc += ecc_bytes; + } + + /* copy calculated ECC for whole page to chip->buffer->oob */ + /* this include masked-value(0xFF) for unwritten subpages */ + ecc_calc = chip->buffers->ecccalc; + ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 0, + chip->ecc.total); + if (ret) + return ret; + + /* write OOB buffer to NAND device */ + chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); + + return 0; +} + +/** * omap_read_page_bch - BCH ecc based page read function for entire page * @mtd: mtd info structure * @chip: nand chip info structure @@ -1544,7 +1653,7 @@ static int omap_read_page_bch(struct mtd chip->ecc.total);
/* Calculate ecc bytes */ - chip->ecc.calculate(mtd, buf, ecc_calc); + omap_calculate_ecc_bch_multi(mtd, buf, ecc_calc);
ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0, chip->ecc.total); @@ -2044,7 +2153,7 @@ static int omap_nand_probe(struct platfo nand_chip->ecc.strength = 4; nand_chip->ecc.hwctl = omap_enable_hwecc_bch; nand_chip->ecc.correct = nand_bch_correct_data; - nand_chip->ecc.calculate = omap_calculate_ecc_bch; + nand_chip->ecc.calculate = omap_calculate_ecc_bch_sw; mtd_set_ooblayout(mtd, &omap_sw_ooblayout_ops); /* Reserve one byte for the OMAP marker */ oobbytes_per_step = nand_chip->ecc.bytes + 1; @@ -2066,9 +2175,9 @@ static int omap_nand_probe(struct platfo nand_chip->ecc.strength = 4; nand_chip->ecc.hwctl = omap_enable_hwecc_bch; nand_chip->ecc.correct = omap_elm_correct_data; - nand_chip->ecc.calculate = omap_calculate_ecc_bch; nand_chip->ecc.read_page = omap_read_page_bch; nand_chip->ecc.write_page = omap_write_page_bch; + nand_chip->ecc.write_subpage = omap_write_subpage_bch; mtd_set_ooblayout(mtd, &omap_ooblayout_ops); oobbytes_per_step = nand_chip->ecc.bytes;
@@ -2087,7 +2196,7 @@ static int omap_nand_probe(struct platfo nand_chip->ecc.strength = 8; nand_chip->ecc.hwctl = omap_enable_hwecc_bch; nand_chip->ecc.correct = nand_bch_correct_data; - nand_chip->ecc.calculate = omap_calculate_ecc_bch; + nand_chip->ecc.calculate = omap_calculate_ecc_bch_sw; mtd_set_ooblayout(mtd, &omap_sw_ooblayout_ops); /* Reserve one byte for the OMAP marker */ oobbytes_per_step = nand_chip->ecc.bytes + 1; @@ -2109,9 +2218,9 @@ static int omap_nand_probe(struct platfo nand_chip->ecc.strength = 8; nand_chip->ecc.hwctl = omap_enable_hwecc_bch; nand_chip->ecc.correct = omap_elm_correct_data; - nand_chip->ecc.calculate = omap_calculate_ecc_bch; nand_chip->ecc.read_page = omap_read_page_bch; nand_chip->ecc.write_page = omap_write_page_bch; + nand_chip->ecc.write_subpage = omap_write_subpage_bch; mtd_set_ooblayout(mtd, &omap_ooblayout_ops); oobbytes_per_step = nand_chip->ecc.bytes;
@@ -2131,9 +2240,9 @@ static int omap_nand_probe(struct platfo nand_chip->ecc.strength = 16; nand_chip->ecc.hwctl = omap_enable_hwecc_bch; nand_chip->ecc.correct = omap_elm_correct_data; - nand_chip->ecc.calculate = omap_calculate_ecc_bch; nand_chip->ecc.read_page = omap_read_page_bch; nand_chip->ecc.write_page = omap_write_page_bch; + nand_chip->ecc.write_subpage = omap_write_subpage_bch; mtd_set_ooblayout(mtd, &omap_ooblayout_ops); oobbytes_per_step = nand_chip->ecc.bytes;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brent Taylor motobud@gmail.com
commit 30863e38ebeb500a31cecee8096fb5002677dd9b upstream.
When mtdoops calls mtd_panic_write(), it eventually calls panic_nand_write() in nand_base.c. In order to properly wait for the nand chip to be ready in panic_nand_wait(), the chip must first be selected.
When using the Atmel NAND flash controller, a panic would occur due to a NULL pointer exception.
Fixes: 2af7c6539931 ("mtd: Add panic_write for NAND flashes") Signed-off-by: Brent Taylor motobud@gmail.com Signed-off-by: Boris Brezillon boris.brezillon@free-electrons.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/nand/nand_base.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/drivers/mtd/nand/nand_base.c +++ b/drivers/mtd/nand/nand_base.c @@ -2800,15 +2800,18 @@ static int panic_nand_write(struct mtd_i size_t *retlen, const uint8_t *buf) { struct nand_chip *chip = mtd_to_nand(mtd); + int chipnr = (int)(to >> chip->chip_shift); struct mtd_oob_ops ops; int ret;
- /* Wait for the device to get ready */ - panic_nand_wait(mtd, chip, 400); - /* Grab the device */ panic_nand_get_device(chip, mtd, FL_WRITING);
+ chip->select_chip(mtd, chipnr); + + /* Wait for the device to get ready */ + panic_nand_wait(mtd, chip, 400); + memset(&ops, 0, sizeof(ops)); ops.len = len; ops.datbuf = (uint8_t *)buf;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xiaolei Li xiaolei.li@mediatek.com
commit 1d2fcdcf33339c7c8016243de0f7f31cf6845e8d upstream.
On the MT2701 NAND Controller, an endless stream of ECC decode IRQs may be generated during long burn tests on some platforms. Once this issue occurs, the ECC decode IRQ status can no longer be cleared in the IRQ handler function, and threads cannot be scheduled.
The ECC HW generates a decode IRQ for each sector, so more than one decode IRQ is raised when reading one page of a large-page NAND.
Currently, the ECC IRQ handling flow is as follows. First, we check whether the IRQ is a decode IRQ by reading the register ECC_DECIRQ_STA. This is a read-to-clear register, so if the IRQ is a decode IRQ, the ECC IRQ signal is cleared at the same time. Second, we check whether all sectors have been decoded by reading the register ECC_DECDONE. This is needed because the current IRQ may not be handled in time, and the next sectors may already have been decoded before ECC_DECIRQ_STA is read; in that case the decode IRQs for those sectors will not be generated. Third, if all sectors have been decoded (by comparing against ecc->sectors), we complete ecc->done, set ecc->sectors to 0, and disable the ECC IRQ by programming the register ECC_IRQ_REG(op) to 0. Otherwise, we wait for the next ECC IRQ.
But there is a timing issue between steps one and two. When we read the register ECC_DECIRQ_STA, all sectors except the last one have been decoded, and the ECC IRQ signal is cleared. The last sector then finishes decoding before ECC_DECDONE is read, so the ECC IRQ signal is raised again by the ECC HW, which means we will receive one extra ECC IRQ later. In step three we find that all sectors have been decoded, disable the ECC IRQ and return. When the extra ECC IRQ is handled, its status can no longer be cleared, because the register ECC_DECIRQ_STA can only be cleared while the register ECC_IRQ_REG(op) is enabled, and we have already disabled the ECC IRQ in the previous handler run. As a result, ECC decode IRQs keep firing.
Now, we read the register ECC_DECIRQ_STA once again before completing the ecc done event. This ensures that there will be no extra ECC decode IRQ.
Also, remove writel(0, ecc->regs + ECC_IRQ_REG(op)) from the IRQ handler, because the ECC IRQ is already disabled in mtk_ecc_disable(). And clear ECC_DECIRQ_STA in mtk_ecc_disable() in case waiting for the decode IRQ times out.
Fixes: 1d6b1e464950 ("mtd: mediatek: driver for MTK Smart Device") Signed-off-by: Xiaolei Li xiaolei.li@mediatek.com Signed-off-by: Boris Brezillon boris.brezillon@free-electrons.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/nand/mtk_ecc.c | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-)
--- a/drivers/mtd/nand/mtk_ecc.c +++ b/drivers/mtd/nand/mtk_ecc.c @@ -115,6 +115,11 @@ static irqreturn_t mtk_ecc_irq(int irq, op = ECC_DECODE; dec = readw(ecc->regs + ECC_DECDONE); if (dec & ecc->sectors) { + /* + * Clear decode IRQ status once again to ensure that + * there will be no extra IRQ. + */ + readw(ecc->regs + ECC_DECIRQ_STA); ecc->sectors = 0; complete(&ecc->done); } else { @@ -130,8 +135,6 @@ static irqreturn_t mtk_ecc_irq(int irq, } }
- writel(0, ecc->regs + ECC_IRQ_REG(op)); - return IRQ_HANDLED; }
@@ -307,6 +310,12 @@ void mtk_ecc_disable(struct mtk_ecc *ecc
/* disable it */ mtk_ecc_wait_idle(ecc, op); + if (op == ECC_DECODE) + /* + * Clear decode IRQ status in case there is a timeout to wait + * decode IRQ. + */ + readw(ecc->regs + ECC_DECIRQ_STA); writew(0, ecc->regs + ECC_IRQ_REG(op)); writew(ECC_OP_DISABLE, ecc->regs + ECC_CTL_REG(op));
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Anup Patel anup.patel@broadcom.com
commit a371c10ea4b38a5f120e86d906d404d50a0f4660 upstream.
As per a suggestion from the FlexRM HW folks, we have to first set the FlexRM ring flush state and then clear it for the FlexRM ring flush to work properly.
Currently, the FlexRM driver has an incomplete FlexRM ring flush sequence, which causes repeated insmod+rmmod of mailbox client drivers to fail.
This patch fixes FlexRM ring flush sequence in flexrm_shutdown() as described above.
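For illustration only (not part of the patch): a minimal userspace sketch of the two-phase flush sequence described above. Register behaviour is simulated, and CONTROL_FLUSH_SHIFT / FLUSH_DONE_MASK are hypothetical stand-in values; the point is the set-then-clear polling order and the pre-decrement timeout check introduced by the patch.

#include <stdio.h>
#include <stdint.h>

#define CONTROL_FLUSH_SHIFT 5        /* hypothetical bit position */
#define FLUSH_DONE_MASK     0x1      /* hypothetical mask */

static uint32_t ring_control, ring_flush_done;

static void write_control(uint32_t v)
{
	ring_control = v;
	/* Pretend the hardware reacts instantly to the flush bit. */
	ring_flush_done = (v & (1u << CONTROL_FLUSH_SHIFT)) ? FLUSH_DONE_MASK : 0;
}

int main(void)
{
	unsigned int timeout;

	/* Phase 1: set flush state and wait for FLUSH_DONE to assert. */
	timeout = 1000;
	write_control(1u << CONTROL_FLUSH_SHIFT);
	do {
		if (ring_flush_done & FLUSH_DONE_MASK)
			break;
	} while (--timeout);
	if (!timeout)
		printf("setting flush state timed out\n");

	/* Phase 2: clear flush state and wait for FLUSH_DONE to deassert. */
	timeout = 1000;
	write_control(0);
	do {
		if (!(ring_flush_done & FLUSH_DONE_MASK))
			break;
	} while (--timeout);
	if (!timeout)
		printf("clearing flush state timed out\n");

	printf("flush sequence complete\n");
	return 0;
}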
Fixes: dbc049eee730 ("mailbox: Add driver for Broadcom FlexRM ring manager")
Signed-off-by: Anup Patel anup.patel@broadcom.com Reviewed-by: Scott Branden scott.branden@broadcom.com Signed-off-by: Jassi Brar jaswinder.singh@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mailbox/bcm-flexrm-mailbox.c | 22 +++++++++++++++++++--- 1 file changed, 19 insertions(+), 3 deletions(-)
--- a/drivers/mailbox/bcm-flexrm-mailbox.c +++ b/drivers/mailbox/bcm-flexrm-mailbox.c @@ -1365,8 +1365,8 @@ static void flexrm_shutdown(struct mbox_ /* Disable/inactivate ring */ writel_relaxed(0x0, ring->regs + RING_CONTROL);
- /* Flush ring with timeout of 1s */ - timeout = 1000; + /* Set ring flush state */ + timeout = 1000; /* timeout of 1s */ writel_relaxed(BIT(CONTROL_FLUSH_SHIFT), ring->regs + RING_CONTROL); do { @@ -1374,7 +1374,23 @@ static void flexrm_shutdown(struct mbox_ FLUSH_DONE_MASK) break; mdelay(1); - } while (timeout--); + } while (--timeout); + if (!timeout) + dev_err(ring->mbox->dev, + "setting ring%d flush state timedout\n", ring->num); + + /* Clear ring flush state */ + timeout = 1000; /* timeout of 1s */ + writel_relaxed(0x0, ring + RING_CONTROL); + do { + if (!(readl_relaxed(ring + RING_FLUSH_DONE) & + FLUSH_DONE_MASK)) + break; + mdelay(1); + } while (--timeout); + if (!timeout) + dev_err(ring->mbox->dev, + "clearing ring%d flush state timedout\n", ring->num);
/* Abort all in-flight requests */ for (reqid = 0; reqid < RING_MAX_REQ_COUNT; reqid++) {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andrey Konovalov andreyknvl@google.com
commit fc09785de0a364427a5df63d703bae9a306ed116 upstream.
ieee80211_register_hw() in p54_register_common() may fail, in which case the LEDs never get initialized. Currently p54_unregister_common() doesn't check for that and always calls p54_unregister_leds(). The fix is to check the priv->registered flag before calling p54_unregister_leds().
Found by syzkaller.
INFO: trying to register non-static key. the code is fine but needs lockdep annotation. turning off the locking correctness validator. CPU: 1 PID: 1404 Comm: kworker/1:1 Not tainted 4.14.0-rc1-42251-gebb2c2437d80-dirty #205 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011 Workqueue: usb_hub_wq hub_event Call Trace: __dump_stack lib/dump_stack.c:16 dump_stack+0x292/0x395 lib/dump_stack.c:52 register_lock_class+0x6c4/0x1a00 kernel/locking/lockdep.c:769 __lock_acquire+0x27e/0x4550 kernel/locking/lockdep.c:3385 lock_acquire+0x259/0x620 kernel/locking/lockdep.c:4002 flush_work+0xf0/0x8c0 kernel/workqueue.c:2886 __cancel_work_timer+0x51d/0x870 kernel/workqueue.c:2961 cancel_delayed_work_sync+0x1f/0x30 kernel/workqueue.c:3081 p54_unregister_leds+0x6c/0xc0 drivers/net/wireless/intersil/p54/led.c:160 p54_unregister_common+0x3d/0xb0 drivers/net/wireless/intersil/p54/main.c:856 p54u_disconnect+0x86/0x120 drivers/net/wireless/intersil/p54/p54usb.c:1073 usb_unbind_interface+0x21c/0xa90 drivers/usb/core/driver.c:423 __device_release_driver drivers/base/dd.c:861 device_release_driver_internal+0x4f4/0x5c0 drivers/base/dd.c:893 device_release_driver+0x1e/0x30 drivers/base/dd.c:918 bus_remove_device+0x2f4/0x4b0 drivers/base/bus.c:565 device_del+0x5c4/0xab0 drivers/base/core.c:1985 usb_disable_device+0x1e9/0x680 drivers/usb/core/message.c:1170 usb_disconnect+0x260/0x7a0 drivers/usb/core/hub.c:2124 hub_port_connect drivers/usb/core/hub.c:4754 hub_port_connect_change drivers/usb/core/hub.c:5009 port_event drivers/usb/core/hub.c:5115 hub_event+0x1318/0x3740 drivers/usb/core/hub.c:5195 process_one_work+0xc7f/0x1db0 kernel/workqueue.c:2119 process_scheduled_works kernel/workqueue.c:2179 worker_thread+0xb2b/0x1850 kernel/workqueue.c:2255 kthread+0x3a1/0x470 kernel/kthread.c:231 ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431
Signed-off-by: Andrey Konovalov andreyknvl@google.com Acked-by: Christian Lamparter chunkeey@googlemail.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/intersil/p54/main.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
--- a/drivers/net/wireless/intersil/p54/main.c +++ b/drivers/net/wireless/intersil/p54/main.c @@ -852,12 +852,11 @@ void p54_unregister_common(struct ieee80 { struct p54_common *priv = dev->priv;
-#ifdef CONFIG_P54_LEDS - p54_unregister_leds(priv); -#endif /* CONFIG_P54_LEDS */ - if (priv->registered) { priv->registered = false; +#ifdef CONFIG_P54_LEDS + p54_unregister_leds(priv); +#endif /* CONFIG_P54_LEDS */ ieee80211_unregister_hw(dev); }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche bart.vanassche@wdc.com
commit 4e9b6f20828ac880dbc1fa2fdbafae779473d1af upstream.
Make sure that if the timeout timer fires after a queue has been marked "dying" that the affected requests are finished.
Reported-by: chenxiang (M) chenxiang66@hisilicon.com Fixes: commit 287922eb0b18 ("block: defer timeouts to a workqueue") Signed-off-by: Bart Van Assche bart.vanassche@wdc.com Tested-by: chenxiang (M) chenxiang66@hisilicon.com Cc: Christoph Hellwig hch@lst.de Cc: Keith Busch keith.busch@intel.com Cc: Hannes Reinecke hare@suse.com Cc: Ming Lei ming.lei@redhat.com Cc: Johannes Thumshirn jthumshirn@suse.de Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- block/blk-core.c | 2 ++ block/blk-timeout.c | 3 --- 2 files changed, 2 insertions(+), 3 deletions(-)
--- a/block/blk-core.c +++ b/block/blk-core.c @@ -333,6 +333,7 @@ EXPORT_SYMBOL(blk_stop_queue); void blk_sync_queue(struct request_queue *q) { del_timer_sync(&q->timeout); + cancel_work_sync(&q->timeout_work);
if (q->mq_ops) { struct blk_mq_hw_ctx *hctx; @@ -844,6 +845,7 @@ struct request_queue *blk_alloc_queue_no setup_timer(&q->backing_dev_info->laptop_mode_wb_timer, laptop_mode_timer_fn, (unsigned long) q); setup_timer(&q->timeout, blk_rq_timed_out_timer, (unsigned long) q); + INIT_WORK(&q->timeout_work, NULL); INIT_LIST_HEAD(&q->queue_head); INIT_LIST_HEAD(&q->timeout_list); INIT_LIST_HEAD(&q->icq_list); --- a/block/blk-timeout.c +++ b/block/blk-timeout.c @@ -134,8 +134,6 @@ void blk_timeout_work(struct work_struct struct request *rq, *tmp; int next_set = 0;
- if (blk_queue_enter(q, true)) - return; spin_lock_irqsave(q->queue_lock, flags);
list_for_each_entry_safe(rq, tmp, &q->timeout_list, timeout_list) @@ -145,7 +143,6 @@ void blk_timeout_work(struct work_struct mod_timer(&q->timeout, round_jiffies_up(next));
spin_unlock_irqrestore(q->queue_lock, flags); - blk_queue_exit(q); }
/**
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nate Dailey nate.dailey@stratus.com
commit f6eca2d43ed694ab8124dd24c88277f7eca93b7d upstream.
If freeze_array is attempted in the middle of close_sync/ wait_all_barriers, deadlock can occur.
freeze_array will wait for nr_pending and nr_queued to line up. wait_all_barriers increments nr_pending for each barrier bucket, one at a time, but doesn't actually issue IO that could be counted in nr_queued. So freeze_array is blocked until wait_all_barriers completes and allow_all_barriers runs. At the same time, when _wait_barrier sees array_frozen == 1, it stops and waits for freeze_array to complete.
Prevent the deadlock by making close_sync call _wait_barrier and _allow_barrier for one bucket at a time, instead of deferring the _allow_barrier calls until after all _wait_barriers are complete.
Signed-off-by: Nate Dailey nate.dailey@stratus.com Fix: fd76863e37fe(RAID1: a new I/O barrier implementation to remove resync window) Reviewed-by: Coly Li colyli@suse.de Signed-off-by: Shaohua Li shli@fb.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/md/raid1.c | 24 ++++++------------------ 1 file changed, 6 insertions(+), 18 deletions(-)
--- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -990,14 +990,6 @@ static void wait_barrier(struct r1conf * _wait_barrier(conf, idx); }
-static void wait_all_barriers(struct r1conf *conf) -{ - int idx; - - for (idx = 0; idx < BARRIER_BUCKETS_NR; idx++) - _wait_barrier(conf, idx); -} - static void _allow_barrier(struct r1conf *conf, int idx) { atomic_dec(&conf->nr_pending[idx]); @@ -1011,14 +1003,6 @@ static void allow_barrier(struct r1conf _allow_barrier(conf, idx); }
-static void allow_all_barriers(struct r1conf *conf) -{ - int idx; - - for (idx = 0; idx < BARRIER_BUCKETS_NR; idx++) - _allow_barrier(conf, idx); -} - /* conf->resync_lock should be held */ static int get_unqueued_pending(struct r1conf *conf) { @@ -1654,8 +1638,12 @@ static void print_conf(struct r1conf *co
static void close_sync(struct r1conf *conf) { - wait_all_barriers(conf); - allow_all_barriers(conf); + int idx; + + for (idx = 0; idx < BARRIER_BUCKETS_NR; idx++) { + _wait_barrier(conf, idx); + _allow_barrier(conf, idx); + }
mempool_destroy(conf->r1buf_pool); conf->r1buf_pool = NULL;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Marc Zyngier marc.zyngier@arm.com
commit 4f8413a3a799c958f7a10a6310a451e6b8aef5ad upstream.
When requesting a shared interrupt, we assume that the firmware support code (DT or ACPI) has called irqd_set_trigger_type already, so that we can retrieve it and check that the requester is being reasonable.
Unfortunately, we still have non-DT, non-ACPI systems around, and these guys won't call irqd_set_trigger_type before requesting the interrupt. The consequence is that we fail requests that would have worked before.
We can either chase all these use cases (boring), or address it in core code (easier). Let's have a per-irq_desc flag that indicates whether irqd_set_trigger_type has been called, and let's just check it when checking for a shared interrupt. If it hasn't been set, just take whatever the interrupt requester asks.
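For illustration only (not part of the patch): a small standalone sketch of the inheritance rule described above, using simplified stand-in types rather than struct irq_data. If no trigger type was ever recorded for the line, the requester's trigger bits are adopted instead of failing the shared-IRQ comparison.

#include <stdio.h>
#include <stdbool.h>

#define IRQF_TRIGGER_MASK 0xf /* same 4-bit trigger field the kernel uses */

struct fake_desc {
	unsigned int trigger;     /* recorded trigger type */
	bool trigger_was_set;     /* stands in for IRQD_DEFAULT_TRIGGER_SET */
};

/* Decide the "old" type to compare a new shared request against. */
static unsigned int old_trigger_type(struct fake_desc *d, unsigned int new_flags)
{
	if (d->trigger_was_set)
		return d->trigger;

	/* Nobody configured the line (non-DT, non-ACPI board): inherit. */
	d->trigger = new_flags & IRQF_TRIGGER_MASK;
	d->trigger_was_set = true;
	return d->trigger;
}

int main(void)
{
	struct fake_desc d = { 0, false };
	unsigned int newtype = 0x2 & IRQF_TRIGGER_MASK; /* e.g. falling edge */
	unsigned int oldtype = old_trigger_type(&d, newtype);

	printf("oldtype=%#x newtype=%#x -> %s\n", oldtype, newtype,
	       oldtype == newtype ? "shared request accepted" : "mismatch, -EBUSY");
	return 0;
}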
Fixes: 382bd4de6182 ("genirq: Use irqd_get_trigger_type to compare the trigger type for shared IRQs") Reported-and-tested-by: Petr Cvek petrcvekcz@gmail.com Signed-off-by: Marc Zyngier marc.zyngier@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- include/linux/irq.h | 11 ++++++++++- kernel/irq/manage.c | 13 ++++++++++++- 2 files changed, 22 insertions(+), 2 deletions(-)
--- a/include/linux/irq.h +++ b/include/linux/irq.h @@ -211,6 +211,7 @@ struct irq_data { * IRQD_MANAGED_SHUTDOWN - Interrupt was shutdown due to empty affinity * mask. Applies only to affinity managed irqs. * IRQD_SINGLE_TARGET - IRQ allows only a single affinity target + * IRQD_DEFAULT_TRIGGER_SET - Expected trigger already been set */ enum { IRQD_TRIGGER_MASK = 0xf, @@ -231,6 +232,7 @@ enum { IRQD_IRQ_STARTED = (1 << 22), IRQD_MANAGED_SHUTDOWN = (1 << 23), IRQD_SINGLE_TARGET = (1 << 24), + IRQD_DEFAULT_TRIGGER_SET = (1 << 25), };
#define __irqd_to_state(d) ACCESS_PRIVATE((d)->common, state_use_accessors) @@ -260,18 +262,25 @@ static inline void irqd_mark_affinity_wa __irqd_to_state(d) |= IRQD_AFFINITY_SET; }
+static inline bool irqd_trigger_type_was_set(struct irq_data *d) +{ + return __irqd_to_state(d) & IRQD_DEFAULT_TRIGGER_SET; +} + static inline u32 irqd_get_trigger_type(struct irq_data *d) { return __irqd_to_state(d) & IRQD_TRIGGER_MASK; }
/* - * Must only be called inside irq_chip.irq_set_type() functions. + * Must only be called inside irq_chip.irq_set_type() functions or + * from the DT/ACPI setup code. */ static inline void irqd_set_trigger_type(struct irq_data *d, u32 type) { __irqd_to_state(d) &= ~IRQD_TRIGGER_MASK; __irqd_to_state(d) |= type & IRQD_TRIGGER_MASK; + __irqd_to_state(d) |= IRQD_DEFAULT_TRIGGER_SET; }
static inline bool irqd_is_level_type(struct irq_data *d) --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -1245,7 +1245,18 @@ __setup_irq(unsigned int irq, struct irq * set the trigger type must match. Also all must * agree on ONESHOT. */ - unsigned int oldtype = irqd_get_trigger_type(&desc->irq_data); + unsigned int oldtype; + + /* + * If nobody did set the configuration before, inherit + * the one provided by the requester. + */ + if (irqd_trigger_type_was_set(&desc->irq_data)) { + oldtype = irqd_get_trigger_type(&desc->irq_data); + } else { + oldtype = new->flags & IRQF_TRIGGER_MASK; + irqd_set_trigger_type(&desc->irq_data, oldtype); + }
if (!((old->flags & new->flags) & IRQF_SHARED) || (oldtype != (new->flags & IRQF_TRIGGER_MASK)) ||
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold johan@kernel.org
commit 00ee9a1ca5080202bc37b44e998c3b2c74d45817 upstream.
Fix child-node lookup during initialisation, which ended up searching the whole device tree depth-first starting at the parent rather than just matching on its children.
To make things worse, the parent gic node was prematurely freed, while the ppi-partitions node was leaked.
Fixes: e3825ba1af3a ("irqchip/gic-v3: Add support for partitioned PPIs") Signed-off-by: Johan Hovold johan@kernel.org Signed-off-by: Marc Zyngier marc.zyngier@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/irqchip/irq-gic-v3.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/drivers/irqchip/irq-gic-v3.c +++ b/drivers/irqchip/irq-gic-v3.c @@ -1071,18 +1071,18 @@ static void __init gic_populate_ppi_part int nr_parts; struct partition_affinity *parts;
- parts_node = of_find_node_by_name(gic_node, "ppi-partitions"); + parts_node = of_get_child_by_name(gic_node, "ppi-partitions"); if (!parts_node) return;
nr_parts = of_get_child_count(parts_node);
if (!nr_parts) - return; + goto out_put_node;
parts = kzalloc(sizeof(*parts) * nr_parts, GFP_KERNEL); if (WARN_ON(!parts)) - return; + goto out_put_node;
for_each_child_of_node(parts_node, child_part) { struct partition_affinity *part; @@ -1149,6 +1149,9 @@ static void __init gic_populate_ppi_part
gic_data.ppi_descs[i] = desc; } + +out_put_node: + of_node_put(parts_node); }
static void __init gic_of_setup_kvm_info(struct device_node *node)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vasily Averin vvs@virtuozzo.com
commit dc3033e16c59a2c4e62b31341258a5786cbcee56 upstream.
lockd_up() can call lockd_unregister_notifiers() twice: inside lockd_start_svc() when it calls lockd_svc_exit_thread(), and then again in the error path of lockd_up().
This patch forces lockd_start_svc() to unregister the notifiers in all error cases and removes the extra unregister in the error path of lockd_up().
Fixes: cb7d224f82e4 "lockd: unregister notifier blocks if the service ..." Signed-off-by: Vasily Averin vvs@virtuozzo.com Reviewed-by: Jeff Layton jlayton@kernel.org Signed-off-by: J. Bruce Fields bfields@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/lockd/svc.c | 20 +++++++++----------- 1 file changed, 9 insertions(+), 11 deletions(-)
--- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -369,6 +369,7 @@ static int lockd_start_svc(struct svc_se printk(KERN_WARNING "lockd_up: svc_rqst allocation failed, error=%d\n", error); + lockd_unregister_notifiers(); goto out_rqst; }
@@ -459,13 +460,16 @@ int lockd_up(struct net *net) }
error = lockd_up_net(serv, net); - if (error < 0) - goto err_net; + if (error < 0) { + lockd_unregister_notifiers(); + goto err_put; + }
error = lockd_start_svc(serv); - if (error < 0) - goto err_start; - + if (error < 0) { + lockd_down_net(serv, net); + goto err_put; + } nlmsvc_users++; /* * Note: svc_serv structures have an initial use count of 1, @@ -476,12 +480,6 @@ err_put: err_create: mutex_unlock(&nlmsvc_mutex); return error; - -err_start: - lockd_down_net(serv, net); -err_net: - lockd_unregister_notifiers(); - goto err_put; } EXPORT_SYMBOL_GPL(lockd_up);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Paul Mackerras paulus@ozlabs.org
commit 00bb6ae5006205e041ce9784c819460562351d47 upstream.
When running a guest on a POWER9 system with the in-kernel XICS emulation disabled (for example by running QEMU with the parameter "-machine pseries,kernel_irqchip=off"), the kernel does not pass the XICS-related hypercalls such as H_CPPR up to userspace for emulation there as it should.
The reason for this is that the real-mode handlers for these hypercalls don't check whether a XICS device has been instantiated before calling the xics-on-xive code. That code doesn't check either, leading to potential NULL pointer dereferences because vcpu->arch.xive_vcpu is NULL. Those dereferences won't cause an exception in real mode but will lead to kernel memory corruption.
This fixes it by adding kvmppc_xics_enabled() checks before calling the XICS functions.
Fixes: 5af50993850a ("KVM: PPC: Book3S HV: Native usage of the XIVE interrupt controller") Signed-off-by: Paul Mackerras paulus@ozlabs.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kvm/book3s_hv_builtin.c | 12 ++++++++++++ 1 file changed, 12 insertions(+)
--- a/arch/powerpc/kvm/book3s_hv_builtin.c +++ b/arch/powerpc/kvm/book3s_hv_builtin.c @@ -529,6 +529,8 @@ static inline bool is_rm(void)
unsigned long kvmppc_rm_h_xirr(struct kvm_vcpu *vcpu) { + if (!kvmppc_xics_enabled(vcpu)) + return H_TOO_HARD; if (xive_enabled()) { if (is_rm()) return xive_rm_h_xirr(vcpu); @@ -541,6 +543,8 @@ unsigned long kvmppc_rm_h_xirr(struct kv
unsigned long kvmppc_rm_h_xirr_x(struct kvm_vcpu *vcpu) { + if (!kvmppc_xics_enabled(vcpu)) + return H_TOO_HARD; vcpu->arch.gpr[5] = get_tb(); if (xive_enabled()) { if (is_rm()) @@ -554,6 +558,8 @@ unsigned long kvmppc_rm_h_xirr_x(struct
unsigned long kvmppc_rm_h_ipoll(struct kvm_vcpu *vcpu, unsigned long server) { + if (!kvmppc_xics_enabled(vcpu)) + return H_TOO_HARD; if (xive_enabled()) { if (is_rm()) return xive_rm_h_ipoll(vcpu, server); @@ -567,6 +573,8 @@ unsigned long kvmppc_rm_h_ipoll(struct k int kvmppc_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server, unsigned long mfrr) { + if (!kvmppc_xics_enabled(vcpu)) + return H_TOO_HARD; if (xive_enabled()) { if (is_rm()) return xive_rm_h_ipi(vcpu, server, mfrr); @@ -579,6 +587,8 @@ int kvmppc_rm_h_ipi(struct kvm_vcpu *vcp
int kvmppc_rm_h_cppr(struct kvm_vcpu *vcpu, unsigned long cppr) { + if (!kvmppc_xics_enabled(vcpu)) + return H_TOO_HARD; if (xive_enabled()) { if (is_rm()) return xive_rm_h_cppr(vcpu, cppr); @@ -591,6 +601,8 @@ int kvmppc_rm_h_cppr(struct kvm_vcpu *vc
int kvmppc_rm_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr) { + if (!kvmppc_xics_enabled(vcpu)) + return H_TOO_HARD; if (xive_enabled()) { if (is_rm()) return xive_rm_h_eoi(vcpu, xirr);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ladi Prosek lprosek@redhat.com
commit 21f2d551183847bc7fbe8d866151d00cdad18752 upstream.
Intel SDM 27.5.2 Loading Host Segment and Descriptor-Table Registers:
"The GDTR and IDTR limits are each set to FFFFH."
Signed-off-by: Ladi Prosek lprosek@redhat.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kvm/vmx.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -11325,6 +11325,8 @@ static void load_vmcs12_host_state(struc vmcs_writel(GUEST_SYSENTER_EIP, vmcs12->host_ia32_sysenter_eip); vmcs_writel(GUEST_IDTR_BASE, vmcs12->host_idtr_base); vmcs_writel(GUEST_GDTR_BASE, vmcs12->host_gdtr_base); + vmcs_write32(GUEST_IDTR_LIMIT, 0xFFFF); + vmcs_write32(GUEST_GDTR_LIMIT, 0xFFFF);
/* If not VM_EXIT_CLEAR_BNDCFGS, the L2 value propagates to L1. */ if (vmcs12->vm_exit_controls & VM_EXIT_CLEAR_BNDCFGS)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jeff Moyer jmoyer@redhat.com
commit 957ac8c421ad8b5eef9b17fe98e146d8311a541e upstream.
PMD faults on a zero length file on a file system mounted with -o dax will not generate SIGBUS as expected.
fd = open(...O_TRUNC);
addr = mmap(NULL, 2*1024*1024, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
*addr = 'a';
<expect SIGBUS>
The problem is this code in dax_iomap_pmd_fault:
max_pgoff = (i_size_read(inode) - 1) >> PAGE_SHIFT;
If the inode size is zero, we end up with a max_pgoff that is way larger than 0. :) Fix it by using DIV_ROUND_UP, as is done elsewhere in the kernel.
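For illustration only (not part of the patch): a self-contained arithmetic check of the boundary described above, assuming 4 KiB pages. It shows the old expression wrapping around for a zero-length file, and why the comparisons switch to >= once max_pgoff becomes an exclusive bound.

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long i_size = 0;          /* freshly truncated file */
	unsigned long pgoff  = 0;          /* fault at offset 0 */

	unsigned long old_max = (i_size - 1) >> PAGE_SHIFT;      /* wraps: huge */
	unsigned long new_max = DIV_ROUND_UP(i_size, PAGE_SIZE); /* 0 */

	printf("old max_pgoff = %lu (pgoff > old_max ? %s)\n",
	       old_max, pgoff > old_max ? "SIGBUS" : "no SIGBUS");
	printf("new max_pgoff = %lu (pgoff >= new_max ? %s)\n",
	       new_max, pgoff >= new_max ? "SIGBUS" : "no SIGBUS");
	return 0;
}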
I tested this with some simple test code that ensured that SIGBUS was received where expected.
Fixes: 642261ac995e ("dax: add struct iomap based DAX PMD support") Signed-off-by: Jeff Moyer jmoyer@redhat.com Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- fs/dax.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/fs/dax.c +++ b/fs/dax.c @@ -1327,7 +1327,7 @@ static int dax_iomap_pmd_fault(struct vm * this is a reliable test. */ pgoff = linear_page_index(vma, pmd_addr); - max_pgoff = (i_size_read(inode) - 1) >> PAGE_SHIFT; + max_pgoff = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
trace_dax_pmd_fault(inode, vmf, max_pgoff, 0);
@@ -1351,13 +1351,13 @@ static int dax_iomap_pmd_fault(struct vm if ((pmd_addr + PMD_SIZE) > vma->vm_end) goto fallback;
- if (pgoff > max_pgoff) { + if (pgoff >= max_pgoff) { result = VM_FAULT_SIGBUS; goto out; }
/* If the PMD would extend beyond the file size */ - if ((pgoff | PG_PMD_COLOUR) > max_pgoff) + if ((pgoff | PG_PMD_COLOUR) >= max_pgoff) goto fallback;
/*
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mikulas Patocka mpatocka@redhat.com
commit 9f586fff6574f6ecbf323f92d44ffaf0d96225fe upstream.
Don't crash in case of allocation failure in dax_alloc_inode.
syzkaller hit the following crash on e4880bc5dfb1
kasan: CONFIG_KASAN_INLINE enabled kasan: GPF could be caused by NULL-ptr deref or user memory access [..] RIP: 0010:dax_alloc_inode+0x3b/0x70 drivers/dax/super.c:348 Call Trace: alloc_inode+0x65/0x180 fs/inode.c:208 new_inode_pseudo+0x69/0x190 fs/inode.c:890 new_inode+0x1c/0x40 fs/inode.c:919 mount_pseudo_xattr+0x288/0x560 fs/libfs.c:261 mount_pseudo include/linux/fs.h:2137 [inline] dax_mount+0x2e/0x40 drivers/dax/super.c:388 mount_fs+0x66/0x2d0 fs/super.c:1223
Fixes: 7b6be8444e0f ("dax: refactor dax-fs into a generic provider...") Reported-by: syzbot syzkaller@googlegroups.com Signed-off-by: Mikulas Patocka mpatocka@redhat.com Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/dax/super.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/dax/super.c +++ b/drivers/dax/super.c @@ -344,6 +344,9 @@ static struct inode *dax_alloc_inode(str struct inode *inode;
dax_dev = kmem_cache_alloc(dax_cache, GFP_KERNEL); + if (!dax_dev) + return NULL; + inode = &dax_dev->inode; inode->i_rdev = 0; return inode;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust trond.myklebust@primarydata.com
commit e9d4bf219c83d09579bc62512fea2ca10f025d93 upstream.
There is no guarantee that either the request or the svc_xprt exists by the time we get round to printing the trace message.
Signed-off-by: Trond Myklebust trond.myklebust@primarydata.com Signed-off-by: J. Bruce Fields bfields@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- include/trace/events/sunrpc.h | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-)
--- a/include/trace/events/sunrpc.h +++ b/include/trace/events/sunrpc.h @@ -456,20 +456,22 @@ TRACE_EVENT(svc_recv, TP_ARGS(rqst, status),
TP_STRUCT__entry( - __field(struct sockaddr *, addr) __field(__be32, xid) __field(int, status) __field(unsigned long, flags) + __dynamic_array(unsigned char, addr, rqst->rq_addrlen) ),
TP_fast_assign( - __entry->addr = (struct sockaddr *)&rqst->rq_addr; __entry->xid = status > 0 ? rqst->rq_xid : 0; __entry->status = status; __entry->flags = rqst->rq_flags; + memcpy(__get_dynamic_array(addr), + &rqst->rq_addr, rqst->rq_addrlen); ),
- TP_printk("addr=%pIScp xid=0x%x status=%d flags=%s", __entry->addr, + TP_printk("addr=%pIScp xid=0x%x status=%d flags=%s", + (struct sockaddr *)__get_dynamic_array(addr), be32_to_cpu(__entry->xid), __entry->status, show_rqstp_flags(__entry->flags)) ); @@ -514,22 +516,23 @@ DECLARE_EVENT_CLASS(svc_rqst_status, TP_ARGS(rqst, status),
TP_STRUCT__entry( - __field(struct sockaddr *, addr) __field(__be32, xid) - __field(int, dropme) __field(int, status) __field(unsigned long, flags) + __dynamic_array(unsigned char, addr, rqst->rq_addrlen) ),
TP_fast_assign( - __entry->addr = (struct sockaddr *)&rqst->rq_addr; __entry->xid = rqst->rq_xid; __entry->status = status; __entry->flags = rqst->rq_flags; + memcpy(__get_dynamic_array(addr), + &rqst->rq_addr, rqst->rq_addrlen); ),
TP_printk("addr=%pIScp rq_xid=0x%x status=%d flags=%s", - __entry->addr, be32_to_cpu(__entry->xid), + (struct sockaddr *)__get_dynamic_array(addr), + be32_to_cpu(__entry->xid), __entry->status, show_rqstp_flags(__entry->flags)) );
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold johan@kernel.org
commit 33ec6dbc5a02677509d97fe36cd2105753f0f0ea upstream.
Fix child node lookup during probe, which ended up searching the whole device tree depth-first starting at the parent rather than just matching on its children.
Note that the original premature free of the parent node has already been fixed separately, but that fix was apparently never backported to stable.
Fixes: 9ac33b0ce81f ("CLK: TI: Driver for DRA7 ATL (Audio Tracking Logic)") Fixes: 660e15519399 ("clk: ti: dra7-atl-clock: Fix of_node reference counting") Cc: Peter Ujfalusi peter.ujfalusi@ti.com Signed-off-by: Johan Hovold johan@kernel.org Acked-by: Peter Ujfalusi peter.ujfalusi@ti.com Signed-off-by: Stephen Boyd sboyd@codeaurora.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/clk/ti/clk-dra7-atl.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
--- a/drivers/clk/ti/clk-dra7-atl.c +++ b/drivers/clk/ti/clk-dra7-atl.c @@ -274,8 +274,7 @@ static int of_dra7_atl_clk_probe(struct
/* Get configuration for the ATL instances */ snprintf(prop, sizeof(prop), "atl%u", i); - of_node_get(node); - cfg_node = of_find_node_by_name(node, prop); + cfg_node = of_get_child_by_name(node, prop); if (cfg_node) { ret = of_property_read_u32(cfg_node, "bws", &cdesc->bws);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams dan.j.williams@intel.com
commit d34cb808402898e53b9a9bcbbedd01667a78723b upstream.
If we successfully enable a DIMM then it must not be locked and we can clear the label-read failure condition. Otherwise, we need to reload the entire bus provider driver to achieve the same effect, and that can disrupt unrelated DIMMs and namespaces.
Fixes: 9d62ed965118 ("libnvdimm: handle locked label storage areas") Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/nvdimm/dimm.c | 1 + drivers/nvdimm/dimm_devs.c | 7 +++++++ drivers/nvdimm/nd.h | 1 + 3 files changed, 9 insertions(+)
--- a/drivers/nvdimm/dimm.c +++ b/drivers/nvdimm/dimm.c @@ -68,6 +68,7 @@ static int nvdimm_probe(struct device *d rc = nd_label_reserve_dpa(ndd); if (ndd->ns_current >= 0) nvdimm_set_aliasing(dev); + nvdimm_clear_locked(dev); nvdimm_bus_unlock(dev);
if (rc) --- a/drivers/nvdimm/dimm_devs.c +++ b/drivers/nvdimm/dimm_devs.c @@ -200,6 +200,13 @@ void nvdimm_set_locked(struct device *de set_bit(NDD_LOCKED, &nvdimm->flags); }
+void nvdimm_clear_locked(struct device *dev) +{ + struct nvdimm *nvdimm = to_nvdimm(dev); + + clear_bit(NDD_LOCKED, &nvdimm->flags); +} + static void nvdimm_release(struct device *dev) { struct nvdimm *nvdimm = to_nvdimm(dev); --- a/drivers/nvdimm/nd.h +++ b/drivers/nvdimm/nd.h @@ -254,6 +254,7 @@ long nvdimm_clear_poison(struct device * unsigned int len); void nvdimm_set_aliasing(struct device *dev); void nvdimm_set_locked(struct device *dev); +void nvdimm_clear_locked(struct device *dev); struct nd_btt *to_nd_btt(struct device *dev);
struct nd_gen_sb {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams dan.j.williams@intel.com
commit 26417ae4fc6108f8db436f24108b08f68bdc520e upstream.
For the same reason that /proc/iomem returns 0's for non-root readers and acpi tables are root-only, make the 'resource' attribute for pfn devices only readable by root. Otherwise we disclose physical address information.
Fixes: f6ed58c70d14 ("libnvdimm, pfn: 'resource'-address and 'size'...") Reported-by: Dave Hansen dave.hansen@linux.intel.com Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/nvdimm/pfn_devs.c | 8 ++++++++ 1 file changed, 8 insertions(+)
--- a/drivers/nvdimm/pfn_devs.c +++ b/drivers/nvdimm/pfn_devs.c @@ -282,8 +282,16 @@ static struct attribute *nd_pfn_attribut NULL, };
+static umode_t pfn_visible(struct kobject *kobj, struct attribute *a, int n) +{ + if (a == &dev_attr_resource.attr) + return 0400; + return a->mode; +} + struct attribute_group nd_pfn_attribute_group = { .attrs = nd_pfn_attributes, + .is_visible = pfn_visible, };
static const struct attribute_group *nd_pfn_attribute_groups[] = {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams dan.j.williams@intel.com
commit b18d4b8a25af6fe83d7692191d6ff962ea611c4f upstream.
The set of valid sequence numbers is {1,2,3}. The specification indicates that an implementation should consider 0 a sign of a critical error:
UEFI 2.7: 13.19 NVDIMM Label Protocol
Software never writes the sequence number 00, so a correctly check-summed Index Block with this sequence number probably indicates a critical error. When software discovers this case it treats it as an invalid Index Block indication.
While the expectation is that the invalid block is just thrown away, the Robustness Principle says we should fix this to make both sequence numbers valid.
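For illustration only (not part of the patch): a tiny standalone check of the sequence numbers init_labels() writes for its two index blocks, before and after this change.

#include <stdio.h>

int main(void)
{
	int i;

	/* Two index blocks are initialised, as in init_labels(). */
	for (i = 0; i < 2; i++)
		printf("block %d: old seq = %d, new seq = %d\n",
		       i, i * 2, 3 - i);
	/*
	 * Old: 0 and 2 -- sequence number 0 is treated as a critical error
	 * by UEFI 2.7 label readers. New: 3 and 2 -- both in {1,2,3}.
	 */
	return 0;
}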
Fixes: f524bf271a5c ("libnvdimm: write pmem label set") Reported-by: Juston Li juston.li@intel.com Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/nvdimm/label.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/nvdimm/label.c +++ b/drivers/nvdimm/label.c @@ -1050,7 +1050,7 @@ static int init_labels(struct nd_mapping nsindex = to_namespace_index(ndd, 0); memset(nsindex, 0, ndd->nsarea.config_size); for (i = 0; i < 2; i++) { - int rc = nd_label_write_index(ndd, i, i*2, ND_NSINDEX_INIT); + int rc = nd_label_write_index(ndd, i, 3 - i, ND_NSINDEX_INIT);
if (rc) return rc;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams dan.j.williams@intel.com
commit b8ff981f88df03c72a4de2f6eaa9ce447a10ac03 upstream.
For the same reason that /proc/iomem returns 0's for non-root readers and acpi tables are root-only, make the 'resource' attribute for region devices only readable by root. Otherwise we disclose physical address information.
Fixes: 802f4be6feee ("libnvdimm: Add 'resource' sysfs attribute to regions") Cc: Dave Jiang dave.jiang@intel.com Cc: Johannes Thumshirn jthumshirn@suse.de Reported-by: Dave Hansen dave.hansen@linux.intel.com Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/nvdimm/region_devs.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-)
--- a/drivers/nvdimm/region_devs.c +++ b/drivers/nvdimm/region_devs.c @@ -562,8 +562,12 @@ static umode_t region_visible(struct kob if (!is_nd_pmem(dev) && a == &dev_attr_badblocks.attr) return 0;
- if (!is_nd_pmem(dev) && a == &dev_attr_resource.attr) - return 0; + if (a == &dev_attr_resource.attr) { + if (is_nd_pmem(dev)) + return 0400; + else + return 0; + }
if (a == &dev_attr_deep_flush.attr) { int has_flush = nvdimm_has_flush(nd_region);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams dan.j.williams@intel.com
commit c1fb3542074fd0c4d901d778bd52455111e4eb6f upstream.
For the same reason that /proc/iomem returns 0's for non-root readers and acpi tables are root-only, make the 'resource' attribute for namespace devices only readable by root. Otherwise we disclose physical address information.
Fixes: bf9bccc14c05 ("libnvdimm: pmem label sets and namespace instantiation") Reported-by: Dave Hansen dave.hansen@linux.intel.com Signed-off-by: Dan Williams dan.j.williams@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/nvdimm/namespace_devs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/nvdimm/namespace_devs.c +++ b/drivers/nvdimm/namespace_devs.c @@ -1620,7 +1620,7 @@ static umode_t namespace_visible(struct if (a == &dev_attr_resource.attr) { if (is_namespace_blk(dev)) return 0; - return a->mode; + return 0400; }
if (is_namespace_pmem(dev) || is_namespace_blk(dev)) {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chuck Lever chuck.lever@oracle.com
commit 0bad47cada5defba13e98827d22d06f13258dfb3 upstream.
During each NFSv4 callback Call, an RDMA Send completion frees the page that contains the RPC Call message. If the upper layer determines that a retransmit is necessary, this is too soon.
One possible symptom: after a GARBAGE_ARGS response to an NFSv4.1 callback request, the following BUG fires on the NFS server:
kernel: BUG: Bad page state in process kworker/0:2H pfn:7d3ce2 kernel: page:ffffea001f4f3880 count:-2 mapcount:0 mapping: (null) index:0x0 kernel: flags: 0x2fffff80000000() kernel: raw: 002fffff80000000 0000000000000000 0000000000000000 fffffffeffffffff kernel: raw: dead000000000100 dead000000000200 0000000000000000 0000000000000000 kernel: page dumped because: nonzero _refcount kernel: Modules linked in: cts rpcsec_gss_krb5 ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue rpcrdm a ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pc lmul crc32_pclmul ghash_clmulni_intel pcbc iTCO_wdt iTCO_vendor_support aesni_intel crypto_simd glue_helper cryptd pcspkr lpc_ich i2c_i801 mei_me mf d_core mei raid0 sg wmi ioatdma ipmi_si ipmi_devintf ipmi_msghandler shpchp acpi_power_meter acpi_pad nfsd nfs_acl lockd auth_rpcgss grace sunrpc ip_tables xfs libcrc32c mlx4_en mlx4_ib mlx5_ib ib_core sd_mod sr_mod cdrom ast drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm ahci crc32c_intel libahci drm mlx5_core igb libata mlx4_core dca i2c_algo_bit i2c_core nvme kernel: ptp nvme_core pps_core dm_mirror dm_region_hash dm_log dm_mod dax kernel: CPU: 0 PID: 11495 Comm: kworker/0:2H Not tainted 4.14.0-rc3-00001-g577ce48 #811 kernel: Hardware name: Supermicro Super Server/X10SRL-F, BIOS 1.0c 09/09/2015 kernel: Workqueue: ib-comp-wq ib_cq_poll_work [ib_core] kernel: Call Trace: kernel: dump_stack+0x62/0x80 kernel: bad_page+0xfe/0x11a kernel: free_pages_check_bad+0x76/0x78 kernel: free_pcppages_bulk+0x364/0x441 kernel: ? ttwu_do_activate.isra.61+0x71/0x78 kernel: free_hot_cold_page+0x1c5/0x202 kernel: __put_page+0x2c/0x36 kernel: svc_rdma_put_context+0xd9/0xe4 [rpcrdma] kernel: svc_rdma_wc_send+0x50/0x98 [rpcrdma]
This issue exists all the way back to v4.5, but refactoring and code re-organization prevent this simple patch from applying to kernels older than v4.12. The fix is the same, however, if someone needs to backport it.
Reported-by: Ben Coddington bcodding@redhat.com BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=314 Fixes: 5d252f90a800 ('svcrdma: Add class for RDMA backwards ... ') Signed-off-by: Chuck Lever chuck.lever@oracle.com Reviewed-by: Jeff Layton jlayton@redhat.com Signed-off-by: J. Bruce Fields bfields@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- net/sunrpc/xprtrdma/svc_rdma_backchannel.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c +++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c @@ -133,6 +133,10 @@ static int svc_rdma_bc_sendto(struct svc if (ret) goto out_err;
+ /* Bump page refcnt so Send completion doesn't release + * the rq_buffer before all retransmits are complete. + */ + get_page(virt_to_page(rqst->rq_buffer)); ret = svc_rdma_post_send_wr(rdma, ctxt, 1, 0); if (ret) goto out_unmap; @@ -165,7 +169,6 @@ xprt_rdma_bc_allocate(struct rpc_task *t return -EINVAL; }
- /* svc_rdma_sendto releases this page */ page = alloc_page(RPCRDMA_DEF_GFP); if (!page) return -ENOMEM; @@ -184,6 +187,7 @@ xprt_rdma_bc_free(struct rpc_task *task) { struct rpc_rqst *rqst = task->tk_rqstp;
+ put_page(virt_to_page(rqst->rq_buffer)); kfree(rqst->rq_rbuffer); }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche bart.vanassche@wdc.com
commit c70ca38960399a63d5c048b7b700612ea321d17e upstream.
Make srpt_parse_i_port_id() return a negative value if hex2bin() fails.
Fixes: commit a42d985bd5b2 ("ib_srpt: Initial SRP Target merge for v3.3-rc1") Signed-off-by: Bart Van Assche bart.vanassche@wdc.com Signed-off-by: Doug Ledford dledford@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/infiniband/ulp/srpt/ib_srpt.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-)
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c +++ b/drivers/infiniband/ulp/srpt/ib_srpt.c @@ -2777,7 +2777,7 @@ static int srpt_parse_i_port_id(u8 i_por { const char *p; unsigned len, count, leading_zero_bytes; - int ret, rc; + int ret;
p = name; if (strncasecmp(p, "0x", 2) == 0) @@ -2789,10 +2789,9 @@ static int srpt_parse_i_port_id(u8 i_por count = min(len / 2, 16U); leading_zero_bytes = 16 - count; memset(i_port_id, 0, leading_zero_bytes); - rc = hex2bin(i_port_id + leading_zero_bytes, p, count); - if (rc < 0) - pr_debug("hex2bin failed for srpt_parse_i_port_id: %d\n", rc); - ret = 0; + ret = hex2bin(i_port_id + leading_zero_bytes, p, count); + if (ret < 0) + pr_debug("hex2bin failed for srpt_parse_i_port_id: %d\n", ret); out: return ret; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Parav Pandit parav@mellanox.com
commit 5a3dc32372439eb9a0d6027c54cbfff64803fce5 upstream.
In recent code, two path record entries are always cleared, while either one or two path record entries may have been allocated. This leads to zeroing out unallocated memory.
This fix initializes the alternative path record only when an alternative path is set.
While we are at it: the path record allocation doesn't check for an OPA alternative path, but the rest of the code does. The path record allocation code also doesn't check for an OPA alternative LID. This can further lead to memory corruption when only one path record is allocated but an alternative OPA path record is actually present in the CM request.
Fixes: 9fdca4da4d8c ("IB/SA: Split struct sa_path_rec based on IB and ROCE specific fields") Signed-off-by: Parav Pandit parav@mellanox.com Reviewed-by: Moni Shoua monis@mellanox.com Signed-off-by: Leon Romanovsky leon@kernel.org Signed-off-by: Doug Ledford dledford@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/infiniband/core/cm.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-)
--- a/drivers/infiniband/core/cm.c +++ b/drivers/infiniband/core/cm.c @@ -1575,7 +1575,7 @@ static void cm_format_req_event(struct c param->bth_pkey = cm_get_bth_pkey(work); param->port = cm_id_priv->av.port->port_num; param->primary_path = &work->path[0]; - if (req_msg->alt_local_lid) + if (cm_req_has_alt_path(req_msg)) param->alternate_path = &work->path[1]; else param->alternate_path = NULL; @@ -1856,7 +1856,8 @@ static int cm_req_handler(struct cm_work cm_process_routed_req(req_msg, work->mad_recv_wc->wc);
memset(&work->path[0], 0, sizeof(work->path[0])); - memset(&work->path[1], 0, sizeof(work->path[1])); + if (cm_req_has_alt_path(req_msg)) + memset(&work->path[1], 0, sizeof(work->path[1])); grh = rdma_ah_read_grh(&cm_id_priv->av.ah_attr); ret = ib_get_cached_gid(work->port->cm_dev->ib_device, work->port->port_num, @@ -3817,14 +3818,16 @@ static void cm_recv_handler(struct ib_ma struct cm_port *port = mad_agent->context; struct cm_work *work; enum ib_cm_event_type event; + bool alt_path = false; u16 attr_id; int paths = 0; int going_down = 0;
switch (mad_recv_wc->recv_buf.mad->mad_hdr.attr_id) { case CM_REQ_ATTR_ID: - paths = 1 + (((struct cm_req_msg *) mad_recv_wc->recv_buf.mad)-> - alt_local_lid != 0); + alt_path = cm_req_has_alt_path((struct cm_req_msg *) + mad_recv_wc->recv_buf.mad); + paths = 1 + (alt_path != 0); event = IB_CM_REQ_RECEIVED; break; case CM_MRA_ATTR_ID:
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael J. Ruhl michael.j.ruhl@intel.com
commit d7d626179fb283aba73699071af0df6d00e32138 upstream.
The addition of the VNIC contexts to num_rcv_contexts changes the meaning of the sysfs value nctxts from available user contexts to user contexts + reserved VNIC contexts.
User applications that use nctxts are now broken.
Update the calculation so that VNIC contexts are used only if there are hardware contexts available, and do not silently affect nctxts.
Update code to use the calculated VNIC context number.
Update the sysfs value nctxts to be available user contexts only.
Fixes: 2280740f01ae ("IB/hfi1: Virtual Network Interface Controller (VNIC) HW support") Reviewed-by: Ira Weiny ira.weiny@intel.com Reviewed-by: Niranjana Vishwanathapura Niranjana.Vishwanathapura@intel.com Reviewed-by: Mike Marciniszyn mike.marciniszyn@intel.com Signed-off-by: Michael J. Ruhl michael.j.ruhl@intel.com Signed-off-by: Dennis Dalessandro dennis.dalessandro@intel.com Signed-off-by: Doug Ledford dledford@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/infiniband/hw/hfi1/chip.c | 35 +++++++++++++++++++-------------- drivers/infiniband/hw/hfi1/hfi.h | 2 + drivers/infiniband/hw/hfi1/sysfs.c | 2 - drivers/infiniband/hw/hfi1/vnic_main.c | 7 ++++-- 4 files changed, 29 insertions(+), 17 deletions(-)
--- a/drivers/infiniband/hw/hfi1/chip.c +++ b/drivers/infiniband/hw/hfi1/chip.c @@ -13074,7 +13074,7 @@ static int request_msix_irqs(struct hfi1 first_sdma = last_general; last_sdma = first_sdma + dd->num_sdma; first_rx = last_sdma; - last_rx = first_rx + dd->n_krcv_queues + HFI1_NUM_VNIC_CTXT; + last_rx = first_rx + dd->n_krcv_queues + dd->num_vnic_contexts;
/* VNIC MSIx interrupts get mapped when VNIC contexts are created */ dd->first_dyn_msix_idx = first_rx + dd->n_krcv_queues; @@ -13294,8 +13294,9 @@ static int set_up_interrupts(struct hfi1 * slow source, SDMACleanupDone) * N interrupts - one per used SDMA engine * M interrupt - one per kernel receive context + * V interrupt - one for each VNIC context */ - total = 1 + dd->num_sdma + dd->n_krcv_queues + HFI1_NUM_VNIC_CTXT; + total = 1 + dd->num_sdma + dd->n_krcv_queues + dd->num_vnic_contexts;
/* ask for MSI-X interrupts */ request = request_msix(dd, total); @@ -13356,10 +13357,12 @@ fail: * in array of contexts * freectxts - number of free user contexts * num_send_contexts - number of PIO send contexts being used + * num_vnic_contexts - number of contexts reserved for VNIC */ static int set_up_context_variables(struct hfi1_devdata *dd) { unsigned long num_kernel_contexts; + u16 num_vnic_contexts = HFI1_NUM_VNIC_CTXT; int total_contexts; int ret; unsigned ngroups; @@ -13393,6 +13396,14 @@ static int set_up_context_variables(stru num_kernel_contexts); num_kernel_contexts = dd->chip_send_contexts - num_vls - 1; } + + /* Accommodate VNIC contexts if possible */ + if ((num_kernel_contexts + num_vnic_contexts) > dd->chip_rcv_contexts) { + dd_dev_err(dd, "No receive contexts available for VNIC\n"); + num_vnic_contexts = 0; + } + total_contexts = num_kernel_contexts + num_vnic_contexts; + /* * User contexts: * - default to 1 user context per real (non-HT) CPU core if @@ -13402,19 +13413,16 @@ static int set_up_context_variables(stru num_user_contexts = cpumask_weight(&node_affinity.real_cpu_mask);
- total_contexts = num_kernel_contexts + num_user_contexts; - /* * Adjust the counts given a global max. */ - if (total_contexts > dd->chip_rcv_contexts) { + if (total_contexts + num_user_contexts > dd->chip_rcv_contexts) { dd_dev_err(dd, "Reducing # user receive contexts to: %d, from %d\n", - (int)(dd->chip_rcv_contexts - num_kernel_contexts), + (int)(dd->chip_rcv_contexts - total_contexts), (int)num_user_contexts); - num_user_contexts = dd->chip_rcv_contexts - num_kernel_contexts; /* recalculate */ - total_contexts = num_kernel_contexts + num_user_contexts; + num_user_contexts = dd->chip_rcv_contexts - total_contexts; }
/* each user context requires an entry in the RMT */ @@ -13427,25 +13435,24 @@ static int set_up_context_variables(stru user_rmt_reduced); /* recalculate */ num_user_contexts = user_rmt_reduced; - total_contexts = num_kernel_contexts + num_user_contexts; }
- /* Accommodate VNIC contexts */ - if ((total_contexts + HFI1_NUM_VNIC_CTXT) <= dd->chip_rcv_contexts) - total_contexts += HFI1_NUM_VNIC_CTXT; + total_contexts += num_user_contexts;
/* the first N are kernel contexts, the rest are user/vnic contexts */ dd->num_rcv_contexts = total_contexts; dd->n_krcv_queues = num_kernel_contexts; dd->first_dyn_alloc_ctxt = num_kernel_contexts; + dd->num_vnic_contexts = num_vnic_contexts; dd->num_user_contexts = num_user_contexts; dd->freectxts = num_user_contexts; dd_dev_info(dd, - "rcv contexts: chip %d, used %d (kernel %d, user %d)\n", + "rcv contexts: chip %d, used %d (kernel %d, vnic %u, user %u)\n", (int)dd->chip_rcv_contexts, (int)dd->num_rcv_contexts, (int)dd->n_krcv_queues, - (int)dd->num_rcv_contexts - dd->n_krcv_queues); + dd->num_vnic_contexts, + dd->num_user_contexts);
/* * Receive array allocation: --- a/drivers/infiniband/hw/hfi1/hfi.h +++ b/drivers/infiniband/hw/hfi1/hfi.h @@ -1047,6 +1047,8 @@ struct hfi1_devdata { u64 z_send_schedule;
u64 __percpu *send_schedule; + /* number of reserved contexts for VNIC usage */ + u16 num_vnic_contexts; /* number of receive contexts in use by the driver */ u32 num_rcv_contexts; /* number of pio send contexts in use by the driver */ --- a/drivers/infiniband/hw/hfi1/sysfs.c +++ b/drivers/infiniband/hw/hfi1/sysfs.c @@ -543,7 +543,7 @@ static ssize_t show_nctxts(struct device * give a more accurate picture of total contexts available. */ return scnprintf(buf, PAGE_SIZE, "%u\n", - min(dd->num_rcv_contexts - dd->first_dyn_alloc_ctxt, + min(dd->num_user_contexts, (u32)dd->sc_sizes[SC_USER].count)); }
--- a/drivers/infiniband/hw/hfi1/vnic_main.c +++ b/drivers/infiniband/hw/hfi1/vnic_main.c @@ -840,6 +840,9 @@ struct net_device *hfi1_vnic_alloc_rn(st struct rdma_netdev *rn; int i, size, rc;
+ if (!dd->num_vnic_contexts) + return ERR_PTR(-ENOMEM); + if (!port_num || (port_num > dd->num_pports)) return ERR_PTR(-EINVAL);
@@ -848,7 +851,7 @@ struct net_device *hfi1_vnic_alloc_rn(st
size = sizeof(struct opa_vnic_rdma_netdev) + sizeof(*vinfo); netdev = alloc_netdev_mqs(size, name, name_assign_type, setup, - dd->chip_sdma_engines, HFI1_NUM_VNIC_CTXT); + dd->chip_sdma_engines, dd->num_vnic_contexts); if (!netdev) return ERR_PTR(-ENOMEM);
@@ -856,7 +859,7 @@ struct net_device *hfi1_vnic_alloc_rn(st vinfo = opa_vnic_dev_priv(netdev); vinfo->dd = dd; vinfo->num_tx_q = dd->chip_sdma_engines; - vinfo->num_rx_q = HFI1_NUM_VNIC_CTXT; + vinfo->num_rx_q = dd->num_vnic_contexts; vinfo->netdev = netdev; rn->free_rdma_netdev = hfi1_vnic_free_rn; rn->set_id = hfi1_vnic_set_vesw_id;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche bart.vanassche@wdc.com
commit 8a0d18c62121d3c554a83eb96e2752861d84d937 upstream.
This patch fixes the following kernel crash:
general protection fault: 0000 [#1] PREEMPT SMP Workqueue: ib_mad2 timeout_sends [ib_core] Call Trace: ib_sa_path_rec_callback+0x1c4/0x1d0 [ib_core] send_handler+0xb2/0xd0 [ib_core] timeout_sends+0x14d/0x220 [ib_core] process_one_work+0x200/0x630 worker_thread+0x4e/0x3b0 kthread+0x113/0x150
Fixes: commit aef9ec39c47f ("IB: Add SCSI RDMA Protocol (SRP) initiator") Signed-off-by: Bart Van Assche bart.vanassche@wdc.com Reviewed-by: Sagi Grimberg sagi@grimberg.me Signed-off-by: Doug Ledford dledford@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/infiniband/ulp/srp/ib_srp.c | 25 +++++++++++++++++++------ 1 file changed, 19 insertions(+), 6 deletions(-)
--- a/drivers/infiniband/ulp/srp/ib_srp.c +++ b/drivers/infiniband/ulp/srp/ib_srp.c @@ -665,12 +665,19 @@ static void srp_path_rec_completion(int static int srp_lookup_path(struct srp_rdma_ch *ch) { struct srp_target_port *target = ch->target; - int ret; + int ret = -ENODEV;
ch->path.numb_path = 1;
init_completion(&ch->done);
+ /* + * Avoid that the SCSI host can be removed by srp_remove_target() + * before srp_path_rec_completion() is called. + */ + if (!scsi_host_get(target->scsi_host)) + goto out; + ch->path_query_id = ib_sa_path_rec_get(&srp_sa_client, target->srp_host->srp_dev->dev, target->srp_host->port, @@ -684,18 +691,24 @@ static int srp_lookup_path(struct srp_rd GFP_KERNEL, srp_path_rec_completion, ch, &ch->path_query); - if (ch->path_query_id < 0) - return ch->path_query_id; + ret = ch->path_query_id; + if (ret < 0) + goto put;
ret = wait_for_completion_interruptible(&ch->done); if (ret < 0) - return ret; + goto put;
- if (ch->status < 0) + ret = ch->status; + if (ret < 0) shost_printk(KERN_WARNING, target->scsi_host, PFX "Path record query failed\n");
- return ch->status; +put: + scsi_host_put(target->scsi_host); + +out: + return ret; }
static int srp_send_req(struct srp_rdma_ch *ch, bool multich)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Parav Pandit parav@mellanox.com
commit 89548bcafec7ecfeea58c553f0834b5d575a66eb upstream.
The below kernel crash is observed when Pkey security enforcement fails on received MADs. This issue is reported in [1].
ib_free_recv_mad() accesses the rmpp_list, which must be initialized before it is accessed. When security enforcement fails on a received MAD, MAD processing is skipped because the security checks failed.
OpenSM[3770]: SM port is down kernel: BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 kernel: IP: ib_free_recv_mad+0x44/0xa0 [ib_core] kernel: PGD 0 kernel: P4D 0 kernel: kernel: Oops: 0002 [#1] SMP kernel: CPU: 0 PID: 2833 Comm: kworker/0:1H Tainted: P IO 4.13.4-1-pve #1 kernel: Hardware name: Dell XS23-TY3 /9CMP63, BIOS 1.71 09/17/2013 kernel: Workqueue: ib-comp-wq ib_cq_poll_work [ib_core] kernel: task: ffffa069c6541600 task.stack: ffffb9a729054000 kernel: RIP: 0010:ib_free_recv_mad+0x44/0xa0 [ib_core] kernel: RSP: 0018:ffffb9a729057d38 EFLAGS: 00010286 kernel: RAX: ffffa069cb138a48 RBX: ffffa069cb138a10 RCX: 0000000000000000 kernel: RDX: ffffb9a729057d38 RSI: 0000000000000000 RDI: ffffa069cb138a20 kernel: RBP: ffffb9a729057d60 R08: ffffa072d2d49800 R09: ffffa069cb138ae0 kernel: R10: ffffa069cb138ae0 R11: ffffa072b3994e00 R12: ffffb9a729057d38 kernel: R13: ffffa069d1c90000 R14: 0000000000000000 R15: ffffa069d1c90880 kernel: FS: 0000000000000000(0000) GS:ffffa069dba00000(0000) knlGS:0000000000000000 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 kernel: CR2: 0000000000000008 CR3: 00000011f51f2000 CR4: 00000000000006f0 kernel: Call Trace: kernel: ib_mad_recv_done+0x5cc/0xb50 [ib_core] kernel: __ib_process_cq+0x5c/0xb0 [ib_core] kernel: ib_cq_poll_work+0x20/0x60 [ib_core] kernel: process_one_work+0x1e9/0x410 kernel: worker_thread+0x4b/0x410 kernel: kthread+0x109/0x140 kernel: ? process_one_work+0x410/0x410 kernel: ? kthread_create_on_node+0x70/0x70 kernel: ? SyS_exit_group+0x14/0x20 kernel: ret_from_fork+0x25/0x30 kernel: RIP: ib_free_recv_mad+0x44/0xa0 [ib_core] RSP: ffffb9a729057d38 kernel: CR2: 0000000000000008
[1] : https://www.spinics.net/lists/linux-rdma/msg56190.html
Fixes: 47a2b338fe63 ("IB/core: Enforce security on management datagrams") Signed-off-by: Parav Pandit parav@mellanox.com Reported-by: Chris Blake chrisrblake93@gmail.com Reviewed-by: Daniel Jurgens danielj@mellanox.com Reviewed-by: Hal Rosenstock hal@mellanox.com Signed-off-by: Doug Ledford dledford@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/infiniband/core/mad.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/infiniband/core/mad.c +++ b/drivers/infiniband/core/mad.c @@ -1974,14 +1974,15 @@ static void ib_mad_complete_recv(struct unsigned long flags; int ret;
+ INIT_LIST_HEAD(&mad_recv_wc->rmpp_list); ret = ib_mad_enforce_security(mad_agent_priv, mad_recv_wc->wc->pkey_index); if (ret) { ib_free_recv_mad(mad_recv_wc); deref_mad_agent(mad_agent_priv); + return; }
- INIT_LIST_HEAD(&mad_recv_wc->rmpp_list); list_add(&mad_recv_wc->recv_buf.list, &mad_recv_wc->rmpp_list); if (ib_mad_kernel_rmpp_agent(&mad_agent_priv->agent)) { mad_recv_wc = ib_process_rmpp_recv_wc(mad_agent_priv,
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Daniel Jurgens danielj@mellanox.com
commit 877add28178a7fa3c68f29c450d050a8e6513f08 upstream.
When modify QP is called on a shared QP update the security context for the real QP. When security is subsequently enforced the shared QP handles will be checked as well.
Without this change shared QP handles get added to the port/pkey lists, which is a bug, because not all shared QP handles will be checked for access. Also, the shared QP security context wouldn't get removed from the port/pkey lists, causing access to freed memory and list corruption when they are destroyed.
Fixes: d291f1a65232 ("IB/core: Enforce PKey security on QPs") Signed-off-by: Daniel Jurgens danielj@mellanox.com Reviewed-by: Parav Pandit parav@mellanox.com Signed-off-by: Leon Romanovsky leon@kernel.org Signed-off-by: Doug Ledford dledford@redhat.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/infiniband/core/security.c | 51 ++++++++++++++++++++----------------- 1 file changed, 28 insertions(+), 23 deletions(-)
--- a/drivers/infiniband/core/security.c +++ b/drivers/infiniband/core/security.c @@ -87,16 +87,14 @@ static int enforce_qp_pkey_security(u16 if (ret) return ret;
- if (qp_sec->qp == qp_sec->qp->real_qp) { - list_for_each_entry(shared_qp_sec, - &qp_sec->shared_qp_list, - shared_qp_list) { - ret = security_ib_pkey_access(shared_qp_sec->security, - subnet_prefix, - pkey); - if (ret) - return ret; - } + list_for_each_entry(shared_qp_sec, + &qp_sec->shared_qp_list, + shared_qp_list) { + ret = security_ib_pkey_access(shared_qp_sec->security, + subnet_prefix, + pkey); + if (ret) + return ret; } return 0; } @@ -560,15 +558,22 @@ int ib_security_modify_qp(struct ib_qp * int ret = 0; struct ib_ports_pkeys *tmp_pps; struct ib_ports_pkeys *new_pps; - bool special_qp = (qp->qp_type == IB_QPT_SMI || - qp->qp_type == IB_QPT_GSI || - qp->qp_type >= IB_QPT_RESERVED1); + struct ib_qp *real_qp = qp->real_qp; + bool special_qp = (real_qp->qp_type == IB_QPT_SMI || + real_qp->qp_type == IB_QPT_GSI || + real_qp->qp_type >= IB_QPT_RESERVED1); bool pps_change = ((qp_attr_mask & (IB_QP_PKEY_INDEX | IB_QP_PORT)) || (qp_attr_mask & IB_QP_ALT_PATH));
+ /* The port/pkey settings are maintained only for the real QP. Open + * handles on the real QP will be in the shared_qp_list. When + * enforcing security on the real QP all the shared QPs will be + * checked as well. + */ + if (pps_change && !special_qp) { - mutex_lock(&qp->qp_sec->mutex); - new_pps = get_new_pps(qp, + mutex_lock(&real_qp->qp_sec->mutex); + new_pps = get_new_pps(real_qp, qp_attr, qp_attr_mask);
@@ -586,14 +591,14 @@ int ib_security_modify_qp(struct ib_qp *
if (!ret) ret = check_qp_port_pkey_settings(new_pps, - qp->qp_sec); + real_qp->qp_sec); }
if (!ret) - ret = qp->device->modify_qp(qp->real_qp, - qp_attr, - qp_attr_mask, - udata); + ret = real_qp->device->modify_qp(real_qp, + qp_attr, + qp_attr_mask, + udata);
if (pps_change && !special_qp) { /* Clean up the lists and free the appropriate @@ -602,8 +607,8 @@ int ib_security_modify_qp(struct ib_qp * if (ret) { tmp_pps = new_pps; } else { - tmp_pps = qp->qp_sec->ports_pkeys; - qp->qp_sec->ports_pkeys = new_pps; + tmp_pps = real_qp->qp_sec->ports_pkeys; + real_qp->qp_sec->ports_pkeys = new_pps; }
if (tmp_pps) { @@ -611,7 +616,7 @@ int ib_security_modify_qp(struct ib_qp * port_pkey_list_remove(&tmp_pps->alt); } kfree(tmp_pps); - mutex_unlock(&qp->qp_sec->mutex); + mutex_unlock(&real_qp->qp_sec->mutex); } return ret; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold johan@kernel.org
commit c45e3e4c5b134b081e8af362109905427967eb19 upstream.
A recent change fixing NFC device allocation itself introduced an error-handling bug by returning an error pointer in case device-id allocation failed. This is clearly broken, as detected by Dan's static checker: the callers still expect NULL to be returned on errors.
Fix this up by returning NULL in the event that we've run out of memory when allocating a new device id.
Note that the offending commit is marked for stable (3.8) so this fix needs to be backported along with it.
Fixes: 20777bc57c34 ("NFC: fix broken device allocation") Reported-by: Dan Carpenter dan.carpenter@oracle.com Signed-off-by: Johan Hovold johan@kernel.org Signed-off-by: Samuel Ortiz sameo@linux.intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- net/nfc/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/net/nfc/core.c +++ b/net/nfc/core.c @@ -1106,7 +1106,7 @@ struct nfc_dev *nfc_allocate_device(stru err_free_dev: kfree(dev);
- return ERR_PTR(rc); + return NULL; } EXPORT_SYMBOL(nfc_allocate_device);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bin Meng bmeng.cn@gmail.com
commit 9d63f17661e25fd28714dac94bdebc4ff5b75f09 upstream.
There are two bugs in current intel_spi_sw_cycle():
- The 'data byte count' field should be the number of bytes transferred minus 1
- SSFSTS_CTL is the offset from ispi->sregs, not ispi->base
Signed-off-by: Bin Meng bmeng.cn@gmail.com Acked-by: Mika Westerberg mika.westerberg@linux.intel.com Signed-off-by: Cyrille Pitchen cyrille.pitchen@wedev4u.fr Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/mtd/spi-nor/intel-spi.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/mtd/spi-nor/intel-spi.c +++ b/drivers/mtd/spi-nor/intel-spi.c @@ -422,7 +422,7 @@ static int intel_spi_sw_cycle(struct int if (ret < 0) return ret;
- val = (len << SSFSTS_CTL_DBC_SHIFT) | SSFSTS_CTL_DS; + val = ((len - 1) << SSFSTS_CTL_DBC_SHIFT) | SSFSTS_CTL_DS; val |= ret << SSFSTS_CTL_COP_SHIFT; val |= SSFSTS_CTL_FCERR | SSFSTS_CTL_FDONE; val |= SSFSTS_CTL_SCGO; @@ -432,7 +432,7 @@ static int intel_spi_sw_cycle(struct int if (ret) return ret;
- status = readl(ispi->base + SSFSTS_CTL); + status = readl(ispi->sregs + SSFSTS_CTL); if (status & SSFSTS_CTL_FCERR) return -EIO; else if (status & SSFSTS_CTL_AEL)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brian King brking@linux.vnet.ibm.com
commit 52c6912fde0133981ee50ba08808f257829c4c93 upstream.
The original issue being fixed in this patch was seen with the ixgbe driver, but the same issue exists with i40e as well, as the code is very similar. read_barrier_depends is not sufficient to ensure loads following it are not speculatively loaded out of order by the CPU, which can result in stale data being loaded, causing potential system crashes.
Signed-off-by: Brian King brking@linux.vnet.ibm.com Acked-by: Jesse Brandeburg jesse.brandeburg@intel.com Tested-by: Andrew Bowers andrewx.bowers@intel.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/intel/i40e/i40e_main.c | 2 +- drivers/net/ethernet/intel/i40e/i40e_txrx.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -3760,7 +3760,7 @@ static bool i40e_clean_fdir_tx_irq(struc break;
/* prevent any other reads prior to eop_desc */ - read_barrier_depends(); + smp_rmb();
/* if the descriptor isn't done, no work yet to do */ if (!(eop_desc->cmd_type_offset_bsz & --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c @@ -759,7 +759,7 @@ static bool i40e_clean_tx_irq(struct i40 break;
/* prevent any other reads prior to eop_desc */ - read_barrier_depends(); + smp_rmb();
i40e_trace(clean_tx_irq, tx_ring, tx_desc, tx_buf); /* we have caught up to head, no work left to do */
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brian King brking@linux.vnet.ibm.com
commit c4cb99185b4cc96c0a1c70104dc21ae14d7e7f28 upstream.
The original issue being fixed in this patch was seen with the ixgbe driver, but the same issue exists with igb as well, as the code is very similar. read_barrier_depends is not sufficient to ensure loads following it are not speculatively loaded out of order by the CPU, which can result in stale data being loaded, causing potential system crashes.
Signed-off-by: Brian King brking@linux.vnet.ibm.com Acked-by: Jesse Brandeburg jesse.brandeburg@intel.com Tested-by: Aaron Brown aaron.f.brown@intel.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/intel/igb/igb_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -6970,7 +6970,7 @@ static bool igb_clean_tx_irq(struct igb_ break;
/* prevent any other reads prior to eop_desc */ - read_barrier_depends(); + smp_rmb();
/* if DD is not set pending work has not been completed */ if (!(eop_desc->wb.status & cpu_to_le32(E1000_TXD_STAT_DD)))
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brian King brking@linux.vnet.ibm.com
commit 1e1f9ca546556e508d021545861f6b5fc75a95fe upstream.
The original issue being fixed in this patch was seen with the ixgbe driver, but the same issue exists with igbvf as well, as the code is very similar. read_barrier_depends is not sufficient to ensure loads following it are not speculatively loaded out of order by the CPU, which can result in stale data being loaded, causing potential system crashes.
Signed-off-by: Brian King brking@linux.vnet.ibm.com Acked-by: Jesse Brandeburg jesse.brandeburg@intel.com Tested-by: Aaron Brown aaron.f.brown@intel.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/intel/igbvf/netdev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/intel/igbvf/netdev.c +++ b/drivers/net/ethernet/intel/igbvf/netdev.c @@ -810,7 +810,7 @@ static bool igbvf_clean_tx_irq(struct ig break;
/* prevent any other reads prior to eop_desc */ - read_barrier_depends(); + smp_rmb();
/* if DD is not set pending work has not been completed */ if (!(eop_desc->wb.status & cpu_to_le32(E1000_TXD_STAT_DD)))
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brian King brking@linux.vnet.ibm.com
commit ae0c585d93dfaf923d2c7eb44b2c3ab92854ea9b upstream.
The original issue being fixed in this patch was seen with the ixgbe driver, but the same issue exists with ixgbevf as well, as the code is very similar. read_barrier_depends is not sufficient to ensure loads following it are not speculatively loaded out of order by the CPU, which can result in stale data being loaded, causing potential system crashes.
Signed-off-by: Brian King brking@linux.vnet.ibm.com Acked-by: Jesse Brandeburg jesse.brandeburg@intel.com Tested-by: Andrew Bowers andrewx.bowers@intel.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -326,7 +326,7 @@ static bool ixgbevf_clean_tx_irq(struct break;
/* prevent any other reads prior to eop_desc */ - read_barrier_depends(); + smp_rmb();
/* if DD is not set pending work has not been completed */ if (!(eop_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)))
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brian King brking@linux.vnet.ibm.com
commit f72271e2a0ae4277d53c4053f5eed8bb346ba38a upstream.
The original issue being fixed in this patch was seen with the ixgbe driver, but the same issue exists with i40evf as well, as the code is very similar. read_barrier_depends is not sufficient to ensure loads following it are not speculatively loaded out of order by the CPU, which can result in stale data being loaded, causing potential system crashes.
Signed-off-by: Brian King brking@linux.vnet.ibm.com Acked-by: Jesse Brandeburg jesse.brandeburg@intel.com Tested-by: Andrew Bowers andrewx.bowers@intel.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/intel/i40evf/i40e_txrx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c +++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c @@ -179,7 +179,7 @@ static bool i40e_clean_tx_irq(struct i40 break;
/* prevent any other reads prior to eop_desc */ - read_barrier_depends(); + smp_rmb();
i40e_trace(clean_tx_irq, tx_ring, tx_desc, tx_buf); /* if the descriptor isn't done, no work yet to do */
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brian King brking@linux.vnet.ibm.com
commit 7b8edcc685b5e2c3c37aa13dc50a88e84a5bfef8 upstream.
The original issue being fixed in this patch was seen with the ixgbe driver, but the same issue exists with fm10k as well, as the code is very similar. read_barrier_depends is not sufficient to ensure loads following it are not speculatively loaded out of order by the CPU, which can result in stale data being loaded, causing potential system crashes.
Signed-off-by: Brian King brking@linux.vnet.ibm.com Acked-by: Jesse Brandeburg jesse.brandeburg@intel.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/intel/fm10k/fm10k_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/intel/fm10k/fm10k_main.c +++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c @@ -1229,7 +1229,7 @@ static bool fm10k_clean_tx_irq(struct fm break;
/* prevent any other reads prior to eop_desc */ - read_barrier_depends(); + smp_rmb();
/* if DD is not set pending work has not been completed */ if (!(eop_desc->flags & FM10K_TXD_FLAG_DONE))
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Brian King brking@linux.vnet.ibm.com
commit 0a9a17e3bb4564caf4bfe2a6783ae1287667d188 upstream.
This patch fixes an issue seen on Power systems with ixgbe which results in skb list corruption and an eventual kernel oops. The following is what was observed:
        CPU 1                                  CPU2
   ============================      ============================
1: ixgbe_xmit_frame_ring             ixgbe_clean_tx_irq
2:    first->skb = skb               eop_desc = tx_buffer->next_to_watch
3:    ixgbe_tx_map                   read_barrier_depends()
4:    wmb                            check adapter written status bit
5:    first->next_to_watch = tx_desc napi_consume_skb(tx_buffer->skb ..);
6:    writel(i, tx_ring->tail);
The read_barrier_depends is insufficient to ensure that tx_buffer->skb does not get loaded prior to tx_buffer->next_to_watch, which then results in loading a stale skb pointer. This patch replaces the read_barrier_depends with smp_rmb to ensure loads are ordered with respect to the load of tx_buffer->next_to_watch.
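For readers less familiar with the barrier semantics, here is a minimal userspace sketch of the consumer-side pattern, using C11 atomics as a stand-in for the kernel's smp_rmb(); the struct and field names only mirror the changelog above and are not the driver code itself.

#include <stdatomic.h>
#include <stddef.h>

struct tx_desc { int wb_status; };

struct tx_buffer {
	void *skb;
	_Atomic(struct tx_desc *) next_to_watch;
};

static void *reap_completed_skb(struct tx_buffer *buf)
{
	struct tx_desc *eop =
		atomic_load_explicit(&buf->next_to_watch, memory_order_relaxed);

	if (!eop)
		return NULL;	/* descriptor not yet published by the xmit path */

	/* Everything the xmit path stored before publishing next_to_watch
	 * (including buf->skb) may only be read after this fence; this is
	 * the role smp_rmb() plays in the fixed driver code. */
	atomic_thread_fence(memory_order_acquire);

	return buf->skb;
}

The key point is that read_barrier_depends() only orders data-dependent loads (loads through the pointer just read); the load of tx_buffer->skb is an independent field access, so it could still be satisfied with a stale value before next_to_watch was checked, which smp_rmb() prevents.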
Signed-off-by: Brian King brking@linux.vnet.ibm.com Acked-by: Jesse Brandeburg jesse.brandeburg@intel.com Tested-by: Andrew Bowers andrewx.bowers@intel.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -1192,7 +1192,7 @@ static bool ixgbe_clean_tx_irq(struct ix break;
/* prevent any other reads prior to eop_desc */ - read_barrier_depends(); + smp_rmb();
/* if DD is not set pending work has not been completed */ if (!(eop_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)))
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: John David Anglin dave.anglin@bell.net
commit 05f016d2ca7a4fab99d5d5472168506ddf95e74f upstream.
As noted by Christoph Biedl, passing a pointer size of 4 in the new CAS implementation causes a kernel crash. The attached patch corrects the off-by-one error in the argument validity check.
In reviewing the code, I noticed that we only perform word operations with the pointer size argument. The subi instruction intentionally uses a word condition on 64-bit kernels. Nullification was used instead of a cmpib instruction as the branch should never be taken. The shlw pseudo-operation generates a depw,z instruction and it clears the target before doing a shift left word deposit. Thus, we don't need to clip the upper 32 bits of this argument on 64-bit kernels.
Tested with a gcc testsuite run with a 64-bit kernel. The gcc atomic code in libgcc is the only direct user of the new CAS implementation that I am aware of.
Signed-off-by: John David Anglin dave.anglin@bell.net Signed-off-by: Helge Deller deller@gmx.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/parisc/kernel/syscall.S | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/arch/parisc/kernel/syscall.S +++ b/arch/parisc/kernel/syscall.S @@ -690,15 +690,15 @@ cas_action: /* ELF32 Process entry path */ lws_compare_and_swap_2: #ifdef CONFIG_64BIT - /* Clip the input registers */ + /* Clip the input registers. We don't need to clip %r23 as we + only use it for word operations */ depdi 0, 31, 32, %r26 depdi 0, 31, 32, %r25 depdi 0, 31, 32, %r24 - depdi 0, 31, 32, %r23 #endif
/* Check the validity of the size pointer */ - subi,>>= 4, %r23, %r0 + subi,>>= 3, %r23, %r0 b,n lws_exit_nosys
/* Jump to the functions which will load the old and new values into
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Christophe Leroy christophe.leroy@c-s.fr
commit 252eb55816a6f69ef9464cad303cdb3326cdc61d upstream.
On powerpc32, patch_instruction() is called by apply_feature_fixups() which is called from early_init()
There is the following note in front of early_init(): * Note that the kernel may be running at an address which is different * from the address that it was linked at, so we must use RELOC/PTRRELOC * to access static data (including strings). -- paulus
Therefore, slab_is_available() cannot be called yet, and text_poke_area must be addressed with PTRRELOC()
Fixes: 95902e6c8864 ("powerpc/mm: Implement STRICT_KERNEL_RWX on PPC32") Reported-by: Meelis Roos mroos@linux.ee Signed-off-by: Christophe Leroy christophe.leroy@c-s.fr Signed-off-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/lib/code-patching.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-)
--- a/arch/powerpc/lib/code-patching.c +++ b/arch/powerpc/lib/code-patching.c @@ -21,6 +21,7 @@ #include <asm/tlbflush.h> #include <asm/page.h> #include <asm/code-patching.h> +#include <asm/setup.h>
static int __patch_instruction(unsigned int *addr, unsigned int instr) { @@ -146,11 +147,8 @@ int patch_instruction(unsigned int *addr * During early early boot patch_instruction is called * when text_poke_area is not ready, but we still need * to allow patching. We just do the plain old patching - * We use slab_is_available and per cpu read * via this_cpu_read - * of text_poke_area. Per-CPU areas might not be up early - * this can create problems with just using this_cpu_read() */ - if (!slab_is_available() || !this_cpu_read(text_poke_area)) + if (!this_cpu_read(*PTRRELOC(&text_poke_area))) return __patch_instruction(addr, instr);
local_irq_save(flags);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Balbir Singh bsingharora@gmail.com
commit f79ad50ea3c73fb1ea5b09e95c864e5bb263adfb upstream.
When using the radix MMU on Power9 DD1, to work around a hardware problem, radix__pte_update() is required to do a two stage update of the PTE. First we write a zero value into the PTE, then we flush the TLB, and then we write the new PTE value.
In the normal case that works OK, but it does not work if we're updating the PTE that maps the code we're executing, because the mapping is removed by the TLB flush and we can no longer execute from it. Unfortunately the STRICT_RWX code needs to do exactly that.
The exact symptoms when we hit this case vary, sometimes we print an oops and then get stuck after that, but I've also seen a machine just get stuck continually page faulting with no oops printed. The variance is presumably due to the exact layout of the text and the page size used for the mappings. In all cases we are unable to boot to a shell.
There are possible solutions such as creating a second mapping of the TLB flush code, executing from that, and then jumping back to the original. However we don't want to add that level of complexity for a DD1 work around.
So just detect that we're running on Power9 DD1 and refrain from changing the permissions, effectively disabling STRICT_RWX on Power9 DD1.
Fixes: 7614ff3272a1 ("powerpc/mm/radix: Implement STRICT_RWX/mark_rodata_ro() for Radix") Reported-by: Andrew Jeffery andrew@aj.id.au [Changelog as suggested by Michael Ellerman mpe@ellerman.id.au] Signed-off-by: Balbir Singh bsingharora@gmail.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/mm/pgtable-radix.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
--- a/arch/powerpc/mm/pgtable-radix.c +++ b/arch/powerpc/mm/pgtable-radix.c @@ -169,6 +169,16 @@ void radix__mark_rodata_ro(void) { unsigned long start, end;
+ /* + * mark_rodata_ro() will mark itself as !writable at some point. + * Due to DD1 workaround in radix__pte_update(), we'll end up with + * an invalid pte and the system will crash quite severly. + */ + if (cpu_has_feature(CPU_FTR_POWER9_DD1)) { + pr_warn("Warning: Unable to mark rodata read only on P9 DD1\n"); + return; + } + start = (unsigned long)_stext; end = (unsigned long)__init_begin;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Ellerman mpe@ellerman.id.au
commit f3f1dfd600ff82b18b7ea73d80eb27f476a6aa97 upstream.
init_imc_pmu() uses topology_physical_package_id() to detect the node id of the processor it is on to get local memory, but that's wrong, and can lead to crashes. Fix it to use cpu_to_node().
Fixes: 885dcd709ba9 ("powerpc/perf: Add nest IMC PMU support") Reported-By: Rob Lippert rlippert@google.com Tested-By: Madhavan Srinivasan maddy@linux.vnet.ibm.com Signed-off-by: Madhavan Srinivasan maddy@linux.vnet.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/perf/imc-pmu.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-)
--- a/arch/powerpc/perf/imc-pmu.c +++ b/arch/powerpc/perf/imc-pmu.c @@ -467,7 +467,7 @@ static int nest_imc_event_init(struct pe * Nest HW counter memory resides in a per-chip reserve-memory (HOMER). * Get the base memory addresss for this cpu. */ - chip_id = topology_physical_package_id(event->cpu); + chip_id = cpu_to_chip_id(event->cpu); pcni = pmu->mem_info; do { if (pcni->id == chip_id) { @@ -524,19 +524,19 @@ static int nest_imc_event_init(struct pe */ static int core_imc_mem_init(int cpu, int size) { - int phys_id, rc = 0, core_id = (cpu / threads_per_core); + int nid, rc = 0, core_id = (cpu / threads_per_core); struct imc_mem_info *mem_info;
/* * alloc_pages_node() will allocate memory for core in the * local node only. */ - phys_id = topology_physical_package_id(cpu); + nid = cpu_to_node(cpu); mem_info = &core_imc_pmu->mem_info[core_id]; mem_info->id = core_id;
/* We need only vbase for core counters */ - mem_info->vbase = page_address(alloc_pages_node(phys_id, + mem_info->vbase = page_address(alloc_pages_node(nid, GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE | __GFP_NOWARN, get_order(size))); if (!mem_info->vbase) @@ -797,14 +797,14 @@ static int core_imc_event_init(struct pe static int thread_imc_mem_alloc(int cpu_id, int size) { u64 ldbar_value, *local_mem = per_cpu(thread_imc_mem, cpu_id); - int phys_id = topology_physical_package_id(cpu_id); + int nid = cpu_to_node(cpu_id);
if (!local_mem) { /* * This case could happen only once at start, since we dont * free the memory in cpu offline path. */ - local_mem = page_address(alloc_pages_node(phys_id, + local_mem = page_address(alloc_pages_node(nid, GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE | __GFP_NOWARN, get_order(size))); if (!local_mem)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Naveen N. Rao naveen.n.rao@linux.vnet.ibm.com
commit 46725b17f1c6c815a41429259b3f070c01e71bc1 upstream.
When a uprobe is installed on an instruction that we currently do not emulate, we copy the instruction into a xol buffer and single step that instruction. If that instruction generates a fault, we abort the single stepping before invoking the signal handler. Once the signal handler is done, the uprobe trap is hit again since the instruction is retried and the process repeats.
We use uprobe_deny_signal() to detect if the xol instruction triggered a signal. If so, we clear TIF_SIGPENDING and set TIF_UPROBE so that the signal is not handled until after the single stepping is aborted. In this case, uprobe_deny_signal() returns true and get_signal() ends up returning 0. However, in do_signal(), we are not looking at the return value, but depending on ksig.sig for further action, all with an uninitialized ksig that is not touched in this scenario. Fix this by initializing ksig.sig to 0.
Fixes: 129b69df9c90 ("powerpc: Use get_signal() signal_setup_done()") Reported-by: Anton Blanchard anton@samba.org Signed-off-by: Naveen N. Rao naveen.n.rao@linux.vnet.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/signal.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/powerpc/kernel/signal.c +++ b/arch/powerpc/kernel/signal.c @@ -103,7 +103,7 @@ static void check_syscall_restart(struct static void do_signal(struct task_struct *tsk) { sigset_t *oldset = sigmask_to_save(); - struct ksignal ksig; + struct ksignal ksig = { .sig = 0 }; int ret; int is32 = is_32bit_task();
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Ellerman mpe@ellerman.id.au
commit 475b581ff57bc01437cbc680e281869918447763 upstream.
On 64-bit Book3s, when we take an instruction fault the reason for the fault may be reported in SRR1. For data faults the reason is reported in DSISR (Data Storage Instruction Status Register).
The reasons reported in each do not necessarily correspond, so we mask the SRR1 bits before copying them to the DSISR, which is then used by the page fault code.
Prior to commit b4c001dc44f0 ("powerpc/mm: Use symbolic constants for filtering SRR1 bits on ISIs") we used a hard-coded mask of 0x58200000, which corresponds to:
DSISR_NOHPTE        0x40000000  /* no translation found */
DSISR_NOEXEC_OR_G   0x10000000  /* exec of no-exec or guarded */
DSISR_PROTFAULT     0x08000000  /* protection fault */
DSISR_KEYFAULT      0x00200000  /* Storage Key fault */
That commit added a #define for the mask, DSISR_SRR1_MATCH_64S, but incorrectly used a different similarly named DSISR_BAD_FAULT_64S.
This had the effect of changing the mask to 0xa43a0000, which omits everything but DSISR_KEYFAULT.
Luckily this had no visible effect, because in practice we hardly use the DSISR bits. The lack of DSISR_NOHPTE means a TLB flush optimisation was missed in the native HPTE code, and DSISR_NOEXEC_OR_G and DSISR_PROTFAULT are both only used to trigger rare warnings.
So we got lucky, but let's fix it. The new value only has bits between 17 and 30 set, so we can continue to use andis.
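As a sanity check on the numbers quoted above, the four DSISR flags do OR together to the old hard-coded 0x58200000 (the value the DSISR_SRR1_MATCH_64S define is meant to carry per this changelog); a throwaway userspace snippet, not kernel code:

#include <stdio.h>

#define DSISR_NOHPTE        0x40000000  /* no translation found */
#define DSISR_NOEXEC_OR_G   0x10000000  /* exec of no-exec or guarded */
#define DSISR_PROTFAULT     0x08000000  /* protection fault */
#define DSISR_KEYFAULT      0x00200000  /* Storage Key fault */

int main(void)
{
	unsigned int mask = DSISR_NOHPTE | DSISR_NOEXEC_OR_G |
			    DSISR_PROTFAULT | DSISR_KEYFAULT;

	printf("0x%08x\n", mask);	/* prints 0x58200000 */
	return 0;
}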
Fixes: b4c001dc44f0 ("powerpc/mm: Use symbolic constants for filtering SRR1 bits on ISIs") Signed-off-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/kernel/exceptions-64s.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -542,7 +542,7 @@ EXC_COMMON_BEGIN(instruction_access_comm RECONCILE_IRQ_STATE(r10, r11) ld r12,_MSR(r1) ld r3,_NIP(r1) - andis. r4,r12,DSISR_BAD_FAULT_64S@h + andis. r4,r12,DSISR_SRR1_MATCH_64S@h li r5,0x400 std r3,_DAR(r1) std r4,_DSISR(r1)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Piggin npiggin@gmail.com
commit 85e3f1adcb9d49300b0a943bb93f9604be375bfb upstream.
Radix VA space allocations test addresses against mm->task_size which is 512TB, even in cases where the intention is to limit allocation to below 128TB.
This results in mmap with a hint address below 128TB but address + length above 128TB succeeding when it should fail (as hash does after the previous patch).
Set the high address limit to be considered up front, and base subsequent allocation checks on that consistently.
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB") Signed-off-by: Nicholas Piggin npiggin@gmail.com Reviewed-by: Aneesh Kumar K.V aneesh.kumar@linux.vnet.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/mm/hugetlbpage-radix.c | 26 +++++++++++------ arch/powerpc/mm/mmap.c | 55 +++++++++++++++++++++--------------- 2 files changed, 50 insertions(+), 31 deletions(-)
--- a/arch/powerpc/mm/hugetlbpage-radix.c +++ b/arch/powerpc/mm/hugetlbpage-radix.c @@ -49,17 +49,28 @@ radix__hugetlb_get_unmapped_area(struct struct mm_struct *mm = current->mm; struct vm_area_struct *vma; struct hstate *h = hstate_file(file); + int fixed = (flags & MAP_FIXED); + unsigned long high_limit; struct vm_unmapped_area_info info;
- if (unlikely(addr > mm->context.addr_limit && addr < TASK_SIZE)) - mm->context.addr_limit = TASK_SIZE; + high_limit = DEFAULT_MAP_WINDOW; + if (addr >= high_limit || (fixed && (addr + len > high_limit))) + high_limit = TASK_SIZE;
if (len & ~huge_page_mask(h)) return -EINVAL; - if (len > mm->task_size) + if (len > high_limit) return -ENOMEM; + if (fixed) { + if (addr > high_limit - len) + return -ENOMEM; + }
- if (flags & MAP_FIXED) { + if (unlikely(addr > mm->context.addr_limit && + mm->context.addr_limit != TASK_SIZE)) + mm->context.addr_limit = TASK_SIZE; + + if (fixed) { if (prepare_hugepage_range(file, addr, len)) return -EINVAL; return addr; @@ -68,7 +79,7 @@ radix__hugetlb_get_unmapped_area(struct if (addr) { addr = ALIGN(addr, huge_page_size(h)); vma = find_vma(mm, addr); - if (mm->task_size - len >= addr && + if (high_limit - len >= addr && (!vma || addr + len <= vm_start_gap(vma))) return addr; } @@ -79,12 +90,9 @@ radix__hugetlb_get_unmapped_area(struct info.flags = VM_UNMAPPED_AREA_TOPDOWN; info.length = len; info.low_limit = PAGE_SIZE; - info.high_limit = current->mm->mmap_base; + info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW); info.align_mask = PAGE_MASK & ~huge_page_mask(h); info.align_offset = 0;
- if (addr > DEFAULT_MAP_WINDOW) - info.high_limit += mm->context.addr_limit - DEFAULT_MAP_WINDOW; - return vm_unmapped_area(&info); } --- a/arch/powerpc/mm/mmap.c +++ b/arch/powerpc/mm/mmap.c @@ -106,22 +106,32 @@ radix__arch_get_unmapped_area(struct fil { struct mm_struct *mm = current->mm; struct vm_area_struct *vma; + int fixed = (flags & MAP_FIXED); + unsigned long high_limit; struct vm_unmapped_area_info info;
+ high_limit = DEFAULT_MAP_WINDOW; + if (addr >= high_limit || (fixed && (addr + len > high_limit))) + high_limit = TASK_SIZE; + + if (len > high_limit) + return -ENOMEM; + if (fixed) { + if (addr > high_limit - len) + return -ENOMEM; + } + if (unlikely(addr > mm->context.addr_limit && mm->context.addr_limit != TASK_SIZE)) mm->context.addr_limit = TASK_SIZE;
- if (len > mm->task_size - mmap_min_addr) - return -ENOMEM; - - if (flags & MAP_FIXED) + if (fixed) return addr;
if (addr) { addr = PAGE_ALIGN(addr); vma = find_vma(mm, addr); - if (mm->task_size - len >= addr && addr >= mmap_min_addr && + if (high_limit - len >= addr && addr >= mmap_min_addr && (!vma || addr + len <= vm_start_gap(vma))) return addr; } @@ -129,13 +139,9 @@ radix__arch_get_unmapped_area(struct fil info.flags = 0; info.length = len; info.low_limit = mm->mmap_base; + info.high_limit = high_limit; info.align_mask = 0;
- if (unlikely(addr > DEFAULT_MAP_WINDOW)) - info.high_limit = mm->context.addr_limit; - else - info.high_limit = DEFAULT_MAP_WINDOW; - return vm_unmapped_area(&info); }
@@ -149,37 +155,42 @@ radix__arch_get_unmapped_area_topdown(st struct vm_area_struct *vma; struct mm_struct *mm = current->mm; unsigned long addr = addr0; + int fixed = (flags & MAP_FIXED); + unsigned long high_limit; struct vm_unmapped_area_info info;
+ high_limit = DEFAULT_MAP_WINDOW; + if (addr >= high_limit || (fixed && (addr + len > high_limit))) + high_limit = TASK_SIZE; + + if (len > high_limit) + return -ENOMEM; + if (fixed) { + if (addr > high_limit - len) + return -ENOMEM; + } + if (unlikely(addr > mm->context.addr_limit && mm->context.addr_limit != TASK_SIZE)) mm->context.addr_limit = TASK_SIZE;
- /* requested length too big for entire address space */ - if (len > mm->task_size - mmap_min_addr) - return -ENOMEM; - - if (flags & MAP_FIXED) + if (fixed) return addr;
- /* requesting a specific address */ if (addr) { addr = PAGE_ALIGN(addr); vma = find_vma(mm, addr); - if (mm->task_size - len >= addr && addr >= mmap_min_addr && - (!vma || addr + len <= vm_start_gap(vma))) + if (high_limit - len >= addr && addr >= mmap_min_addr && + (!vma || addr + len <= vm_start_gap(vma))) return addr; }
info.flags = VM_UNMAPPED_AREA_TOPDOWN; info.length = len; info.low_limit = max(PAGE_SIZE, mmap_min_addr); - info.high_limit = mm->mmap_base; + info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW); info.align_mask = 0;
- if (addr > DEFAULT_MAP_WINDOW) - info.high_limit += mm->context.addr_limit - DEFAULT_MAP_WINDOW; - addr = vm_unmapped_area(&info); if (!(addr & ~PAGE_MASK)) return addr;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michael Ellerman mpe@ellerman.id.au
commit 7ece370996b694ae263025e056ad785afc1be5ab upstream.
Currently userspace is able to request mmap() search between 128T-512T by specifying a hint address that is greater than 128T. But that means a hint of 128T exactly will return an address below 128T, which is confusing and wrong.
So fix the logic to check the hint is greater than *or equal* to 128T.
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB") Suggested-by: Aneesh Kumar K.V aneesh.kumar@linux.vnet.ibm.com Suggested-by: Nicholas Piggin npiggin@gmail.com [mpe: Split out of Nick's bigger patch] Signed-off-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/mm/slice.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/powerpc/mm/slice.c +++ b/arch/powerpc/mm/slice.c @@ -419,7 +419,7 @@ unsigned long slice_get_unmapped_area(un /* * Check if we need to expland slice area. */ - if (unlikely(addr > mm->context.addr_limit && + if (unlikely(addr >= mm->context.addr_limit && mm->context.addr_limit != TASK_SIZE)) { mm->context.addr_limit = TASK_SIZE; on_each_cpu(slice_flush_segments, mm, 1); @@ -427,7 +427,7 @@ unsigned long slice_get_unmapped_area(un /* * This mmap request can allocate upt to 512TB */ - if (addr > DEFAULT_MAP_WINDOW) + if (addr >= DEFAULT_MAP_WINDOW) high_limit = mm->context.addr_limit; else high_limit = DEFAULT_MAP_WINDOW;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Piggin npiggin@gmail.com
commit 6a72dc038b615229a1b285829d6c8378d15c2347 upstream.
When allocating VA space with a hint that crosses 128TB, the SLB addr_limit variable is not expanded if addr is not > 128TB, but the slice allocation looks at task_size, which is 512TB. This results in slice_check_fit() incorrectly succeeding because the slice_count truncates off bit 128 of the requested mask, so the comparison to the available mask succeeds.
Fix this by using mm->context.addr_limit instead of mm->task_size for testing allocation limits. This causes such allocations to fail.
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB") Reported-by: Florian Weimer fweimer@redhat.com Signed-off-by: Nicholas Piggin npiggin@gmail.com Reviewed-by: Aneesh Kumar K.V aneesh.kumar@linux.vnet.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/mm/slice.c | 50 +++++++++++++++++++++++------------------------- 1 file changed, 24 insertions(+), 26 deletions(-)
--- a/arch/powerpc/mm/slice.c +++ b/arch/powerpc/mm/slice.c @@ -96,7 +96,7 @@ static int slice_area_is_free(struct mm_ { struct vm_area_struct *vma;
- if ((mm->task_size - len) < addr) + if ((mm->context.addr_limit - len) < addr) return 0; vma = find_vma(mm, addr); return (!vma || (addr + len) <= vm_start_gap(vma)); @@ -133,7 +133,7 @@ static void slice_mask_for_free(struct m if (!slice_low_has_vma(mm, i)) ret->low_slices |= 1u << i;
- if (mm->task_size <= SLICE_LOW_TOP) + if (mm->context.addr_limit <= SLICE_LOW_TOP) return;
for (i = 0; i < GET_HIGH_SLICE_INDEX(mm->context.addr_limit); i++) @@ -412,25 +412,31 @@ unsigned long slice_get_unmapped_area(un struct slice_mask compat_mask; int fixed = (flags & MAP_FIXED); int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT); + unsigned long page_size = 1UL << pshift; struct mm_struct *mm = current->mm; unsigned long newaddr; unsigned long high_limit;
- /* - * Check if we need to expland slice area. - */ - if (unlikely(addr >= mm->context.addr_limit && - mm->context.addr_limit != TASK_SIZE)) { - mm->context.addr_limit = TASK_SIZE; + high_limit = DEFAULT_MAP_WINDOW; + if (addr >= high_limit) + high_limit = TASK_SIZE; + + if (len > high_limit) + return -ENOMEM; + if (len & (page_size - 1)) + return -EINVAL; + if (fixed) { + if (addr & (page_size - 1)) + return -EINVAL; + if (addr > high_limit - len) + return -ENOMEM; + } + + if (high_limit > mm->context.addr_limit) { + mm->context.addr_limit = high_limit; on_each_cpu(slice_flush_segments, mm, 1); } - /* - * This mmap request can allocate upt to 512TB - */ - if (addr >= DEFAULT_MAP_WINDOW) - high_limit = mm->context.addr_limit; - else - high_limit = DEFAULT_MAP_WINDOW; + /* * init different masks */ @@ -446,27 +452,19 @@ unsigned long slice_get_unmapped_area(un
/* Sanity checks */ BUG_ON(mm->task_size == 0); + BUG_ON(mm->context.addr_limit == 0); VM_BUG_ON(radix_enabled());
slice_dbg("slice_get_unmapped_area(mm=%p, psize=%d...\n", mm, psize); slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d\n", addr, len, flags, topdown);
- if (len > mm->task_size) - return -ENOMEM; - if (len & ((1ul << pshift) - 1)) - return -EINVAL; - if (fixed && (addr & ((1ul << pshift) - 1))) - return -EINVAL; - if (fixed && addr > (mm->task_size - len)) - return -ENOMEM; - /* If hint, make sure it matches our alignment restrictions */ if (!fixed && addr) { - addr = _ALIGN_UP(addr, 1ul << pshift); + addr = _ALIGN_UP(addr, page_size); slice_dbg(" aligned addr=%lx\n", addr); /* Ignore hint if it's too large or overlaps a VMA */ - if (addr > mm->task_size - len || + if (addr > high_limit - len || !slice_area_is_free(mm, addr, len)) addr = 0; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Piggin npiggin@gmail.com
commit effc1b25088502fbd30305c79773de2d1f7470a6 upstream.
Hash unconditionally resets the addr_limit to default (128TB) when the mm context is initialised. If a process has > 128TB mappings when it forks, the child will not get the 512TB addr_limit, so accesses to valid > 128TB mappings will fail in the child.
Fix this by only resetting the addr_limit to default if it was 0. Non-zero indicates it was duplicated from the parent (0 means exec()).
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB") Signed-off-by: Nicholas Piggin npiggin@gmail.com Reviewed-by: Aneesh Kumar K.V aneesh.kumar@linux.vnet.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/mm/mmu_context_book3s64.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/arch/powerpc/mm/mmu_context_book3s64.c +++ b/arch/powerpc/mm/mmu_context_book3s64.c @@ -93,11 +93,11 @@ static int hash__init_new_context(struct return index;
/* - * We do switch_slb() early in fork, even before we setup the - * mm->context.addr_limit. Default to max task size so that we copy the - * default values to paca which will help us to handle slb miss early. + * In the case of exec, use the default limit, + * otherwise inherit it from the mm we are duplicating. */ - mm->context.addr_limit = DEFAULT_MAP_WINDOW_USER64; + if (!mm->context.addr_limit) + mm->context.addr_limit = DEFAULT_MAP_WINDOW_USER64;
/* * The old code would re-promote on fork, we don't do that when using
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Nicholas Piggin npiggin@gmail.com
commit 35602f82d0c765f991420e319c8d3a596c921eb8 upstream.
While mapping hints with a length that crosses 128TB are disallowed, MAP_FIXED allocations that cross 128TB are allowed. These are failing on hash (on radix they succeed). Add an additional case for fixed mappings to expand the addr_limit when crossing 128TB.
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB") Signed-off-by: Nicholas Piggin npiggin@gmail.com Reviewed-by: Aneesh Kumar K.V aneesh.kumar@linux.vnet.ibm.com Signed-off-by: Michael Ellerman mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/powerpc/mm/slice.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/powerpc/mm/slice.c +++ b/arch/powerpc/mm/slice.c @@ -418,7 +418,7 @@ unsigned long slice_get_unmapped_area(un unsigned long high_limit;
high_limit = DEFAULT_MAP_WINDOW; - if (addr >= high_limit) + if (addr >= high_limit || (fixed && (addr + len > high_limit))) high_limit = TASK_SIZE;
if (len > high_limit)
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Michele Baldessari michele@acksyn.org
commit b3120d2cc447ee77b9d69bf4ad7b452c9adb4d39 upstream.
Firmware load on AS102 uses the stack for the transfer buffer, which is no longer allowed. We currently fail with:
kernel: transfer buffer not dma capable kernel: ------------[ cut here ]------------ kernel: WARNING: CPU: 0 PID: 598 at drivers/usb/core/hcd.c:1595 usb_hcd_map_urb_for_dma+0x41d/0x620 kernel: Modules linked in: amd64_edac_mod(-) edac_mce_amd as102_fe dvb_as102(+) kvm_amd kvm snd_hda_codec_realtek dvb_core snd_hda_codec_generic snd_hda_codec_hdmi snd_hda_intel snd_hda_codec irqbypass crct10dif_pclmul crc32_pclmul snd_hda_core snd_hwdep snd_seq ghash_clmulni_intel sp5100_tco fam15h_power wmi k10temp i2c_piix4 snd_seq_device snd_pcm snd_timer parport_pc parport tpm_infineon snd tpm_tis soundcore tpm_tis_core tpm shpchp acpi_cpufreq xfs libcrc32c amdgpu amdkfd amd_iommu_v2 radeon hid_logitech_hidpp i2c_algo_bit drm_kms_helper crc32c_intel ttm drm r8169 mii hid_logitech_dj kernel: CPU: 0 PID: 598 Comm: systemd-udevd Not tainted 4.13.10-200.fc26.x86_64 #1 kernel: Hardware name: ASUS All Series/AM1I-A, BIOS 0505 03/13/2014 kernel: task: ffff979933b24c80 task.stack: ffffaf83413a4000 kernel: RIP: 0010:usb_hcd_map_urb_for_dma+0x41d/0x620 systemd-fsck[659]: /dev/sda2: clean, 49/128016 files, 268609/512000 blocks kernel: RSP: 0018:ffffaf83413a7728 EFLAGS: 00010282 systemd-udevd[604]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. kernel: RAX: 000000000000001f RBX: ffff979930bce780 RCX: 0000000000000000 kernel: RDX: 0000000000000000 RSI: ffff97993ec0e118 RDI: ffff97993ec0e118 kernel: RBP: ffffaf83413a7768 R08: 000000000000039a R09: 0000000000000000 kernel: R10: 0000000000000001 R11: 00000000ffffffff R12: 00000000fffffff5 kernel: R13: 0000000001400000 R14: 0000000000000001 R15: ffff979930806800 kernel: FS: 00007effaca5c8c0(0000) GS:ffff97993ec00000(0000) knlGS:0000000000000000 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 kernel: CR2: 00007effa9fca962 CR3: 0000000233089000 CR4: 00000000000406f0 kernel: Call Trace: kernel: usb_hcd_submit_urb+0x493/0xb40 kernel: ? page_cache_tree_insert+0x100/0x100 kernel: ? xfs_iunlock+0xd5/0x100 [xfs] kernel: ? xfs_file_buffered_aio_read+0x57/0xc0 [xfs] kernel: usb_submit_urb+0x22d/0x560 kernel: usb_start_wait_urb+0x6e/0x180 kernel: usb_bulk_msg+0xb8/0x160 kernel: as102_send_ep1+0x49/0xe0 [dvb_as102] kernel: ? devres_add+0x3f/0x50 kernel: as102_firmware_upload.isra.0+0x1dc/0x210 [dvb_as102] kernel: as102_fw_upload+0xb6/0x1f0 [dvb_as102] kernel: as102_dvb_register+0x2af/0x2d0 [dvb_as102] kernel: as102_usb_probe+0x1f3/0x260 [dvb_as102] kernel: usb_probe_interface+0x124/0x300 kernel: driver_probe_device+0x2ff/0x450 kernel: __driver_attach+0xa4/0xe0 kernel: ? driver_probe_device+0x450/0x450 kernel: bus_for_each_dev+0x6e/0xb0 kernel: driver_attach+0x1e/0x20 kernel: bus_add_driver+0x1c7/0x270 kernel: driver_register+0x60/0xe0 kernel: usb_register_driver+0x81/0x150 kernel: ? 0xffffffffc0807000 kernel: as102_usb_driver_init+0x1e/0x1000 [dvb_as102] kernel: do_one_initcall+0x50/0x190 kernel: ? __vunmap+0x81/0xb0 kernel: ? kfree+0x154/0x170 kernel: ? kmem_cache_alloc_trace+0x15f/0x1c0 kernel: ? do_init_module+0x27/0x1e9 kernel: do_init_module+0x5f/0x1e9 kernel: load_module+0x2602/0x2c30 kernel: SYSC_init_module+0x170/0x1a0 kernel: ? 
SYSC_init_module+0x170/0x1a0 kernel: SyS_init_module+0xe/0x10 kernel: do_syscall_64+0x67/0x140 kernel: entry_SYSCALL64_slow_path+0x25/0x25 kernel: RIP: 0033:0x7effab6cf3ea kernel: RSP: 002b:00007fff5cfcbbc8 EFLAGS: 00000246 ORIG_RAX: 00000000000000af kernel: RAX: ffffffffffffffda RBX: 00005569e0b83760 RCX: 00007effab6cf3ea kernel: RDX: 00007effac2099c5 RSI: 0000000000009a13 RDI: 00005569e0b98c50 kernel: RBP: 00007effac2099c5 R08: 00005569e0b83ed0 R09: 0000000000001d80 kernel: R10: 00007effab98db00 R11: 0000000000000246 R12: 00005569e0b98c50 kernel: R13: 00005569e0b81c60 R14: 0000000000020000 R15: 00005569dfadfdf7 kernel: Code: 48 39 c8 73 30 80 3d 59 60 9d 00 00 41 bc f5 ff ff ff 0f 85 26 ff ff ff 48 c7 c7 b8 6b d0 92 c6 05 3f 60 9d 00 01 e8 24 3d ad ff <0f> ff 8b 53 64 e9 09 ff ff ff 65 48 8b 0c 25 00 d3 00 00 48 8b kernel: ---[ end trace c4cae366180e70ec ]--- kernel: as10x_usb: error during firmware upload part1
Let's allocate the structure dynamically so we can get the firmware loaded correctly:
[ 14.243057] as10x_usb: firmware: as102_data1_st.hex loaded with success
[ 14.500777] as10x_usb: firmware: as102_data2_st.hex loaded with success
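As a rough illustration of the pattern (not the driver's actual code, which follows in the diff below; the function and names here are simplified assumptions), USB transfer buffers must come from the heap so the USB core can DMA-map them:

#include <linux/slab.h>
#include <linux/string.h>
#include <linux/usb.h>

/* Sketch: heap-allocate the transfer buffer instead of passing stack memory
 * to usb_bulk_msg(), which the host controller driver may try to DMA-map.
 */
static int send_fw_chunk(struct usb_device *udev, const void *data, int len)
{
	void *buf;
	int ret, actual;

	buf = kmalloc(len, GFP_KERNEL);		/* DMA-capable, unlike the stack */
	if (!buf)
		return -ENOMEM;

	memcpy(buf, data, len);
	ret = usb_bulk_msg(udev, usb_sndbulkpipe(udev, 1), buf, len,
			   &actual, 1000 /* ms */);
	kfree(buf);
	return ret;
}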
Signed-off-by: Michele Baldessari michele@acksyn.org Signed-off-by: Mauro Carvalho Chehab mchehab@s-opensource.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/usb/as102/as102_fw.c | 28 +++++++++++++++++----------- 1 file changed, 17 insertions(+), 11 deletions(-)
--- a/drivers/media/usb/as102/as102_fw.c +++ b/drivers/media/usb/as102/as102_fw.c @@ -101,18 +101,23 @@ static int as102_firmware_upload(struct unsigned char *cmd, const struct firmware *firmware) {
- struct as10x_fw_pkt_t fw_pkt; + struct as10x_fw_pkt_t *fw_pkt; int total_read_bytes = 0, errno = 0; unsigned char addr_has_changed = 0;
+ fw_pkt = kmalloc(sizeof(*fw_pkt), GFP_KERNEL); + if (!fw_pkt) + return -ENOMEM; + + for (total_read_bytes = 0; total_read_bytes < firmware->size; ) { int read_bytes = 0, data_len = 0;
/* parse intel hex line */ read_bytes = parse_hex_line( (u8 *) (firmware->data + total_read_bytes), - fw_pkt.raw.address, - fw_pkt.raw.data, + fw_pkt->raw.address, + fw_pkt->raw.data, &data_len, &addr_has_changed);
@@ -122,28 +127,28 @@ static int as102_firmware_upload(struct /* detect the end of file */ total_read_bytes += read_bytes; if (total_read_bytes == firmware->size) { - fw_pkt.u.request[0] = 0x00; - fw_pkt.u.request[1] = 0x03; + fw_pkt->u.request[0] = 0x00; + fw_pkt->u.request[1] = 0x03;
/* send EOF command */ errno = bus_adap->ops->upload_fw_pkt(bus_adap, (uint8_t *) - &fw_pkt, 2, 0); + fw_pkt, 2, 0); if (errno < 0) goto error; } else { if (!addr_has_changed) { /* prepare command to send */ - fw_pkt.u.request[0] = 0x00; - fw_pkt.u.request[1] = 0x01; + fw_pkt->u.request[0] = 0x00; + fw_pkt->u.request[1] = 0x01;
- data_len += sizeof(fw_pkt.u.request); - data_len += sizeof(fw_pkt.raw.address); + data_len += sizeof(fw_pkt->u.request); + data_len += sizeof(fw_pkt->raw.address);
/* send cmd to device */ errno = bus_adap->ops->upload_fw_pkt(bus_adap, (uint8_t *) - &fw_pkt, + fw_pkt, data_len, 0); if (errno < 0) @@ -152,6 +157,7 @@ static int as102_firmware_upload(struct } } error: + kfree(fw_pkt); return (errno == 0) ? total_read_bytes : errno; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Young sean@mess.org
commit 3e45067f94bbd61dec0619b1c32744eb0de480c8 upstream.
The ioctl LIRC_SET_REC_TIMEOUT would set a timeout of 704ns if called with a timeout of 4294968us.
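The arithmetic behind that number: the ioctl value is given in microseconds and multiplied by 1000 into a 32-bit variable, so 4294968 * 1000 = 4294968000, which exceeds U32_MAX (4294967295) and wraps to 704. A minimal stand-alone illustration of the wrap and of the overflow check the patch adds (variable names are illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t val = 4294968;			/* requested timeout in microseconds */
	uint32_t tmp = val * 1000u;		/* wraps: 4294968000 mod 2^32 = 704 */

	printf("stored timeout: %u ns\n", tmp);	/* prints 704 */

	/* The fix rejects values whose conversion to nanoseconds would overflow. */
	if (val > UINT32_MAX / 1000)
		printf("rejected: would overflow\n");

	return 0;
}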
Signed-off-by: Sean Young sean@mess.org Signed-off-by: Mauro Carvalho Chehab mchehab@s-opensource.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/rc/ir-lirc-codec.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-)
--- a/drivers/media/rc/ir-lirc-codec.c +++ b/drivers/media/rc/ir-lirc-codec.c @@ -298,11 +298,14 @@ static long ir_lirc_ioctl(struct file *f if (!dev->max_timeout) return -ENOTTY;
+ /* Check for multiply overflow */ + if (val > U32_MAX / 1000) + return -EINVAL; + tmp = val * 1000;
- if (tmp < dev->min_timeout || - tmp > dev->max_timeout) - return -EINVAL; + if (tmp < dev->min_timeout || tmp > dev->max_timeout) + return -EINVAL;
if (dev->s_timeout) ret = dev->s_timeout(dev, tmp);
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sean Young sean@mess.org
commit 829bbf268894d0866bb9dd2b1e430cfa5c5f0779 upstream.
When receiving an NEC repeat, rc_repeat() is called and then rc_keydown() is called with the last decoded scancode. That last call is redundant.
Fixes: 265a2988d202 ("media: rc-core: consistent use of rc_repeat()")
Signed-off-by: Sean Young sean@mess.org Signed-off-by: Mauro Carvalho Chehab mchehab@s-opensource.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/rc/ir-nec-decoder.c | 31 ++++++++++++++++++------------- 1 file changed, 18 insertions(+), 13 deletions(-)
--- a/drivers/media/rc/ir-nec-decoder.c +++ b/drivers/media/rc/ir-nec-decoder.c @@ -87,8 +87,6 @@ static int ir_nec_decode(struct rc_dev * data->state = STATE_BIT_PULSE; return 0; } else if (eq_margin(ev.duration, NEC_REPEAT_SPACE, NEC_UNIT / 2)) { - rc_repeat(dev); - IR_dprintk(1, "Repeat last key\n"); data->state = STATE_TRAILER_PULSE; return 0; } @@ -151,19 +149,26 @@ static int ir_nec_decode(struct rc_dev * if (!geq_margin(ev.duration, NEC_TRAILER_SPACE, NEC_UNIT / 2)) break;
- address = bitrev8((data->bits >> 24) & 0xff); - not_address = bitrev8((data->bits >> 16) & 0xff); - command = bitrev8((data->bits >> 8) & 0xff); - not_command = bitrev8((data->bits >> 0) & 0xff); - - scancode = ir_nec_bytes_to_scancode(address, not_address, - command, not_command, - &rc_proto); + if (data->count == NEC_NBITS) { + address = bitrev8((data->bits >> 24) & 0xff); + not_address = bitrev8((data->bits >> 16) & 0xff); + command = bitrev8((data->bits >> 8) & 0xff); + not_command = bitrev8((data->bits >> 0) & 0xff); + + scancode = ir_nec_bytes_to_scancode(address, + not_address, + command, + not_command, + &rc_proto); + + if (data->is_nec_x) + data->necx_repeat = true;
- if (data->is_nec_x) - data->necx_repeat = true; + rc_keydown(dev, rc_proto, scancode, 0); + } else { + rc_repeat(dev); + }
- rc_keydown(dev, rc_proto, scancode, 0); data->state = STATE_INACTIVE; return 0; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Johan Hovold johan@kernel.org
commit 6c3b047fa2d2286d5e438bcb470c7b1a49f415f6 upstream.
Make sure to check that we actually have an Interface Association Descriptor before dereferencing it during probe to avoid dereferencing a NULL-pointer.
Fixes: e0d3bafd0258 ("V4L/DVB (10954): Add cx231xx USB driver") Reported-by: Andrey Konovalov andreyknvl@google.com Signed-off-by: Johan Hovold johan@kernel.org Tested-by: Andrey Konovalov andreyknvl@google.com Signed-off-by: Hans Verkuil hans.verkuil@cisco.com Signed-off-by: Mauro Carvalho Chehab mchehab@osg.samsung.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/usb/cx231xx/cx231xx-cards.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/media/usb/cx231xx/cx231xx-cards.c +++ b/drivers/media/usb/cx231xx/cx231xx-cards.c @@ -1684,7 +1684,7 @@ static int cx231xx_usb_probe(struct usb_ nr = dev->devno;
assoc_desc = udev->actconfig->intf_assoc[0]; - if (assoc_desc->bFirstInterface != ifnum) { + if (!assoc_desc || assoc_desc->bFirstInterface != ifnum) { dev_err(d, "Not found matching IAD interface\n"); retval = -ENODEV; goto err_if;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ricardo Ribalda Delgado ricardo.ribalda@gmail.com
commit 9cac9d2fb2fe0e0cadacdb94415b3fe49e3f724f upstream.
VIDIOC_DQEVENT and VIDIOC_QUERY_EXT_CTRL should give the same output for the control flags field.
This patch creates a new function, user_flags(), that calculates the user-exported flags value (which is different from the kernel-internal flags structure). This function is then used by all the code that exports the internal flags to userspace.
Reported-by: Dimitrios Katsaros patcherwork@gmail.com Signed-off-by: Ricardo Ribalda Delgado ricardo.ribalda@gmail.com Signed-off-by: Hans Verkuil hans.verkuil@cisco.com Signed-off-by: Mauro Carvalho Chehab mchehab@s-opensource.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/v4l2-core/v4l2-ctrls.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-)
--- a/drivers/media/v4l2-core/v4l2-ctrls.c +++ b/drivers/media/v4l2-core/v4l2-ctrls.c @@ -1227,6 +1227,16 @@ void v4l2_ctrl_fill(u32 id, const char * } EXPORT_SYMBOL(v4l2_ctrl_fill);
+static u32 user_flags(const struct v4l2_ctrl *ctrl) +{ + u32 flags = ctrl->flags; + + if (ctrl->is_ptr) + flags |= V4L2_CTRL_FLAG_HAS_PAYLOAD; + + return flags; +} + static void fill_event(struct v4l2_event *ev, struct v4l2_ctrl *ctrl, u32 changes) { memset(ev->reserved, 0, sizeof(ev->reserved)); @@ -1234,7 +1244,7 @@ static void fill_event(struct v4l2_event ev->id = ctrl->id; ev->u.ctrl.changes = changes; ev->u.ctrl.type = ctrl->type; - ev->u.ctrl.flags = ctrl->flags; + ev->u.ctrl.flags = user_flags(ctrl); if (ctrl->is_ptr) ev->u.ctrl.value64 = 0; else @@ -2577,10 +2587,8 @@ int v4l2_query_ext_ctrl(struct v4l2_ctrl else qc->id = ctrl->id; strlcpy(qc->name, ctrl->name, sizeof(qc->name)); - qc->flags = ctrl->flags; + qc->flags = user_flags(ctrl); qc->type = ctrl->type; - if (ctrl->is_ptr) - qc->flags |= V4L2_CTRL_FLAG_HAS_PAYLOAD; qc->elem_size = ctrl->elem_size; qc->elems = ctrl->elems; qc->nr_of_dims = ctrl->nr_of_dims;
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanimir Varbanov stanimir.varbanov@linaro.org
commit cd1a77e3c9cc6dbb57f02aa50e1740fc144d2dad upstream.
This change fixes a dma_free size mismatch found with DMA API debugging enabled.
Signed-off-by: Stanimir Varbanov stanimir.varbanov@linaro.org Signed-off-by: Hans Verkuil hans.verkuil@cisco.com Signed-off-by: Mauro Carvalho Chehab mchehab@osg.samsung.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/platform/qcom/venus/hfi_venus.c | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-)
--- a/drivers/media/platform/qcom/venus/hfi_venus.c +++ b/drivers/media/platform/qcom/venus/hfi_venus.c @@ -344,7 +344,7 @@ static int venus_alloc(struct venus_hfi_ desc->attrs = DMA_ATTR_WRITE_COMBINE; desc->size = ALIGN(size, SZ_4K);
- desc->kva = dma_alloc_attrs(dev, size, &desc->da, GFP_KERNEL, + desc->kva = dma_alloc_attrs(dev, desc->size, &desc->da, GFP_KERNEL, desc->attrs); if (!desc->kva) return -ENOMEM; @@ -710,10 +710,8 @@ static int venus_interface_queues_init(s if (ret) return ret;
- hdev->ifaceq_table.kva = desc.kva; - hdev->ifaceq_table.da = desc.da; - hdev->ifaceq_table.size = IFACEQ_TABLE_SIZE; - offset = hdev->ifaceq_table.size; + hdev->ifaceq_table = desc; + offset = IFACEQ_TABLE_SIZE;
for (i = 0; i < IFACEQ_NUM; i++) { queue = &hdev->queues[i]; @@ -755,9 +753,7 @@ static int venus_interface_queues_init(s if (ret) { hdev->sfr.da = 0; } else { - hdev->sfr.da = desc.da; - hdev->sfr.kva = desc.kva; - hdev->sfr.size = ALIGNED_SFR_SIZE; + hdev->sfr = desc; sfr = hdev->sfr.kva; sfr->buf_size = ALIGNED_SFR_SIZE; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanimir Varbanov stanimir.varbanov@linaro.org
commit 5232c37ce244db04fd50d160b92e40d2df46a2e9 upstream.
This fixes the wrongly filled bytesused field of the v4l2_plane structure by including data_offset in the plane payload. Also, fill data_offset and bytesused for capture-type buffers only.
Signed-off-by: Stanimir Varbanov stanimir.varbanov@linaro.org Signed-off-by: Hans Verkuil hans.verkuil@cisco.com Signed-off-by: Mauro Carvalho Chehab mchehab@osg.samsung.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/platform/qcom/venus/venc.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
--- a/drivers/media/platform/qcom/venus/venc.c +++ b/drivers/media/platform/qcom/venus/venc.c @@ -963,13 +963,12 @@ static void venc_buf_done(struct venus_i if (!vbuf) return;
- vb = &vbuf->vb2_buf; - vb->planes[0].bytesused = bytesused; - vb->planes[0].data_offset = data_offset; - vbuf->flags = flags;
if (type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) { + vb = &vbuf->vb2_buf; + vb2_set_plane_payload(vb, 0, bytesused + data_offset); + vb->planes[0].data_offset = data_offset; vb->timestamp = timestamp_us * NSEC_PER_USEC; vbuf->sequence = inst->sequence_cap++; } else {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Stanimir Varbanov stanimir.varbanov@linaro.org
commit e69b987a97599456b95b5fef4aca8dcdb1505aea upstream.
This addresses the wrong behavior of the decoder stop command by rewriting it. The new implementation enqueues an empty buffer on the decoder input buffer queue to signal end-of-stream. The client should stop queuing buffers on the V4L2 Output queue and continue queuing/dequeuing buffers on the Capture queue. This continues until the client receives a buffer with the V4L2_BUF_FLAG_LAST flag set, which means it is the last decoded buffer containing data.
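From the client's point of view this is the standard V4L2 stateful-decoder drain sequence; a minimal sketch, assuming a multi-planar MMAP capture queue and omitting error handling (the function name and setup are illustrative):

#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>

/* Ask the decoder to drain, then dequeue capture buffers until the one
 * flagged V4L2_BUF_FLAG_LAST arrives.
 */
static void drain_decoder(int fd)
{
	struct v4l2_decoder_cmd cmd;
	struct v4l2_buffer buf;
	struct v4l2_plane planes[1];

	memset(&cmd, 0, sizeof(cmd));
	cmd.cmd = V4L2_DEC_CMD_STOP;
	ioctl(fd, VIDIOC_DECODER_CMD, &cmd);

	for (;;) {
		memset(&buf, 0, sizeof(buf));
		memset(planes, 0, sizeof(planes));
		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
		buf.memory = V4L2_MEMORY_MMAP;
		buf.m.planes = planes;
		buf.length = 1;
		if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
			break;
		/* ...consume the decoded data... */
		if (buf.flags & V4L2_BUF_FLAG_LAST)
			break;		/* last decoded buffer with data */
	}
}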
Signed-off-by: Stanimir Varbanov stanimir.varbanov@linaro.org Tested-by: Nicolas Dufresne nicolas.dufresne@collabora.com Signed-off-by: Hans Verkuil hans.verkuil@cisco.com Signed-off-by: Mauro Carvalho Chehab mchehab@osg.samsung.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/media/platform/qcom/venus/core.h | 2 - drivers/media/platform/qcom/venus/helpers.c | 7 ----- drivers/media/platform/qcom/venus/hfi.c | 1 drivers/media/platform/qcom/venus/vdec.c | 34 ++++++++++++++++++---------- 4 files changed, 24 insertions(+), 20 deletions(-)
--- a/drivers/media/platform/qcom/venus/core.h +++ b/drivers/media/platform/qcom/venus/core.h @@ -194,7 +194,6 @@ struct venus_buffer { * @fh: a holder of v4l file handle structure * @streamon_cap: stream on flag for capture queue * @streamon_out: stream on flag for output queue - * @cmd_stop: a flag to signal encoder/decoder commands * @width: current capture width * @height: current capture height * @out_width: current output width @@ -258,7 +257,6 @@ struct venus_inst { } controls; struct v4l2_fh fh; unsigned int streamon_cap, streamon_out; - bool cmd_stop; u32 width; u32 height; u32 out_width; --- a/drivers/media/platform/qcom/venus/helpers.c +++ b/drivers/media/platform/qcom/venus/helpers.c @@ -623,13 +623,6 @@ void venus_helper_vb2_buf_queue(struct v
mutex_lock(&inst->lock);
- if (inst->cmd_stop) { - vbuf->flags |= V4L2_BUF_FLAG_LAST; - v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE); - inst->cmd_stop = false; - goto unlock; - } - v4l2_m2m_buf_queue(m2m_ctx, vbuf);
if (!(inst->streamon_out & inst->streamon_cap)) --- a/drivers/media/platform/qcom/venus/hfi.c +++ b/drivers/media/platform/qcom/venus/hfi.c @@ -484,6 +484,7 @@ int hfi_session_process_buf(struct venus
return -EINVAL; } +EXPORT_SYMBOL_GPL(hfi_session_process_buf);
irqreturn_t hfi_isr_thread(int irq, void *dev_id) { --- a/drivers/media/platform/qcom/venus/vdec.c +++ b/drivers/media/platform/qcom/venus/vdec.c @@ -469,8 +469,14 @@ static int vdec_subscribe_event(struct v static int vdec_try_decoder_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd *cmd) { - if (cmd->cmd != V4L2_DEC_CMD_STOP) + switch (cmd->cmd) { + case V4L2_DEC_CMD_STOP: + if (cmd->flags & V4L2_DEC_CMD_STOP_TO_BLACK) + return -EINVAL; + break; + default: return -EINVAL; + }
return 0; } @@ -479,6 +485,7 @@ static int vdec_decoder_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd *cmd) { struct venus_inst *inst = to_inst(file); + struct hfi_frame_data fdata = {0}; int ret;
ret = vdec_try_decoder_cmd(file, fh, cmd); @@ -486,12 +493,23 @@ vdec_decoder_cmd(struct file *file, void return ret;
mutex_lock(&inst->lock); - inst->cmd_stop = true; - mutex_unlock(&inst->lock);
- hfi_session_flush(inst); + /* + * Implement V4L2_DEC_CMD_STOP by enqueue an empty buffer on decoder + * input to signal EOS. + */ + if (!(inst->streamon_out & inst->streamon_cap)) + goto unlock; + + fdata.buffer_type = HFI_BUFFER_INPUT; + fdata.flags |= HFI_BUFFERFLAG_EOS; + fdata.device_addr = 0xdeadbeef;
- return 0; + ret = hfi_session_process_buf(inst, &fdata); + +unlock: + mutex_unlock(&inst->lock); + return ret; }
static const struct v4l2_ioctl_ops vdec_ioctl_ops = { @@ -718,7 +736,6 @@ static int vdec_start_streaming(struct v inst->reconfig = false; inst->sequence_cap = 0; inst->sequence_out = 0; - inst->cmd_stop = false;
ret = vdec_init_session(inst); if (ret) @@ -807,11 +824,6 @@ static void vdec_buf_done(struct venus_i vb->timestamp = timestamp_us * NSEC_PER_USEC; vbuf->sequence = inst->sequence_cap++;
- if (inst->cmd_stop) { - vbuf->flags |= V4L2_BUF_FLAG_LAST; - inst->cmd_stop = false; - } - if (vbuf->flags & V4L2_BUF_FLAG_LAST) { const struct v4l2_event ev = { .type = V4L2_EVENT_EOS };
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Neil Armstrong narmstrong@baylibre.com
commit 4ee8e51b9edfe7845a094690a365c844e5a35b4b upstream.
This year, Amlogic updated the ARM Trusted Firmware reserved memory mapping for Meson GXL SoCs, and products sold since May 2017 use this alternate reserved memory mapping. But products had also been sold using the previous mapping.
This issue has been explained in [1], and a dynamic solution is yet to be found to avoid losing another 3 MiB of reservable memory.
In the meantime, this patch adds this alternate memory zone only for the GXL and GXM SoCs, since new GXBB-based products stopped earlier.
[1] http://lists.infradead.org/pipermail/linux-amlogic/2017-October/004860.html
Fixes: bba8e3f42736 ("ARM64: dts: meson-gx: Add firmware reserved memory zones") Reported-by: Jerome Brunet jbrunet@baylibre.com Signed-off-by: Neil Armstrong narmstrong@baylibre.com Signed-off-by: Kevin Hilman khilman@baylibre.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arm64/boot/dts/amlogic/meson-gxl.dtsi | 8 ++++++++ 1 file changed, 8 insertions(+)
--- a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi +++ b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi @@ -49,6 +49,14 @@
/ { compatible = "amlogic,meson-gxl"; + + reserved-memory { + /* Alternate 3 MiB reserved for ARM Trusted Firmware (BL31) */ + secmon_reserved_alt: secmon@05000000 { + reg = <0x0 0x05000000 0x0 0x300000>; + no-map; + }; + }; };
ðmac {
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Oren Givon oren.givon@intel.com
commit f7f5873bbd45a67d3097dfb55237ade2ad520184 upstream.
The PCI ID (0x2720, 0x0070) was set with the config struct iwla000_2ax_cfg_hr instead of iwla000_2ac_cfg_hr_cdb.
Fixes: 175b87c69253 ("iwlwifi: add the new a000_2ax series") Signed-off-by: Oren Givon oren.givon@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Cc: Thomas Backlund tmb@mageia.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/intel/iwlwifi/pcie/drv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c @@ -576,7 +576,7 @@ static const struct pci_device_id iwl_hw {IWL_PCI_DEVICE(0x2720, 0x0000, iwla000_2ax_cfg_hr)}, {IWL_PCI_DEVICE(0x34F0, 0x0070, iwla000_2ax_cfg_hr)}, {IWL_PCI_DEVICE(0x2720, 0x0078, iwla000_2ax_cfg_hr)}, - {IWL_PCI_DEVICE(0x2720, 0x0070, iwla000_2ax_cfg_hr)}, + {IWL_PCI_DEVICE(0x2720, 0x0070, iwla000_2ac_cfg_hr_cdb)}, {IWL_PCI_DEVICE(0x2720, 0x1080, iwla000_2ax_cfg_hr)}, #endif /* CONFIG_IWLMVM */
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Oren Givon oren.givon@intel.com
commit d048b36b9654c4e0cf0d3576be2d1ed2a3084c6f upstream.
Add a new a000 device with PCI ID (0x2720, 0x0030).
Signed-off-by: Oren Givon oren.givon@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Cc: Thomas Backlund tmb@mageia.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/intel/iwlwifi/pcie/drv.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c @@ -577,6 +577,7 @@ static const struct pci_device_id iwl_hw {IWL_PCI_DEVICE(0x34F0, 0x0070, iwla000_2ax_cfg_hr)}, {IWL_PCI_DEVICE(0x2720, 0x0078, iwla000_2ax_cfg_hr)}, {IWL_PCI_DEVICE(0x2720, 0x0070, iwla000_2ac_cfg_hr_cdb)}, + {IWL_PCI_DEVICE(0x2720, 0x0030, iwla000_2ac_cfg_hr_cdb)}, {IWL_PCI_DEVICE(0x2720, 0x1080, iwla000_2ax_cfg_hr)}, #endif /* CONFIG_IWLMVM */
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luca Coelho luciano.coelho@intel.com
commit 1105a337375258515ed09b92a83fd7bfd6775958 upstream.
It's hard to find values that are missing from the list; sorting the values and comparing them makes it much easier. To simplify this task, sort the devices in the list.
Signed-off-by: Luca Coelho luciano.coelho@intel.com Cc: Thomas Backlund tmb@mageia.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/intel/iwlwifi/pcie/drv.c | 84 +++++++++++++------------- 1 file changed, 42 insertions(+), 42 deletions(-)
--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c @@ -510,65 +510,65 @@ static const struct pci_device_id iwl_hw {IWL_PCI_DEVICE(0x24FD, 0x0012, iwl8275_2ac_cfg)},
/* 9000 Series */ - {IWL_PCI_DEVICE(0x271B, 0x0010, iwl9160_2ac_cfg)}, - {IWL_PCI_DEVICE(0x271B, 0x0014, iwl9160_2ac_cfg)}, - {IWL_PCI_DEVICE(0x271B, 0x0210, iwl9160_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x0000, iwl9260_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x0010, iwl9260_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x0014, iwl9260_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0xA014, iwl9260_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x4010, iwl9260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x0030, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x0034, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x0038, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x003C, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x0060, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x0064, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x00A0, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x00A4, iwl9460_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x0210, iwl9260_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x0214, iwl9260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x0230, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x0234, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x0238, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x023C, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x0260, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x02A0, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x02A4, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x1030, iwl9560_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x1410, iwl9270_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x1420, iwl9460_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x1610, iwl9270_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0A10, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x4010, iwl9260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x4030, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0xA014, iwl9260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x271B, 0x0010, iwl9160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x271B, 0x0014, iwl9160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x271B, 0x0210, iwl9160_2ac_cfg)}, + {IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x31DC, 0x0030, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x31DC, 0x0034, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x31DC, 0x0038, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x31DC, 0x003C, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x31DC, 0x0060, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0000, iwl9460_2ac_cfg)}, {IWL_PCI_DEVICE(0x9DF0, 0x0010, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0030, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0034, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0038, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x9DF0, 0x003C, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0060, iwl9460_2ac_cfg)}, {IWL_PCI_DEVICE(0x9DF0, 0x0210, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0410, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0610, iwl9460_2ac_cfg)}, {IWL_PCI_DEVICE(0x9DF0, 0x0310, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0000, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0410, iwl9460_2ac_cfg)}, {IWL_PCI_DEVICE(0x9DF0, 0x0510, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x2010, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x1420, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0610, iwl9460_2ac_cfg)}, {IWL_PCI_DEVICE(0x9DF0, 0x0710, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0A10, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x9DF0, 0x2010, iwl9460_2ac_cfg)}, {IWL_PCI_DEVICE(0x9DF0, 0x2A10, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x0060, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x0260, iwl9460_2ac_cfg)}, - 
{IWL_PCI_DEVICE(0x2526, 0x0064, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x00A4, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x02A4, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x00A0, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x02A0, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0060, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0xA370, 0x0060, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x31DC, 0x0060, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x0030, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x4030, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x0230, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x0234, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x0238, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x023C, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0030, iwl9560_2ac_cfg)}, {IWL_PCI_DEVICE(0xA370, 0x0030, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x31DC, 0x0030, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x1030, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0xA370, 0x1030, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0034, iwl9560_2ac_cfg)}, {IWL_PCI_DEVICE(0xA370, 0x0034, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x31DC, 0x0034, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x0038, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x003C, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0038, iwl9560_2ac_cfg)}, {IWL_PCI_DEVICE(0xA370, 0x0038, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x31DC, 0x0038, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x003C, iwl9560_2ac_cfg)}, {IWL_PCI_DEVICE(0xA370, 0x003C, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x31DC, 0x003C, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x0034, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0xA370, 0x0060, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0xA370, 0x1030, iwl9560_2ac_cfg)},
/* a000 Series */ {IWL_PCI_DEVICE(0x2720, 0x0A10, iwla000_2ac_cfg_hr_cdb)},
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ihab Zhaika ihab.zhaika@intel.com
commit 57b36f7fcb39c5eae8c1f463699f747af69643ba upstream.
Add four new PCI IDs for the a000 series.
Signed-off-by: Ihab Zhaika ihab.zhaika@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Cc: Thomas Backlund tmb@mageia.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/intel/iwlwifi/pcie/drv.c | 5 +++++ 1 file changed, 5 insertions(+)
--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c @@ -579,6 +579,11 @@ static const struct pci_device_id iwl_hw {IWL_PCI_DEVICE(0x2720, 0x0070, iwla000_2ac_cfg_hr_cdb)}, {IWL_PCI_DEVICE(0x2720, 0x0030, iwla000_2ac_cfg_hr_cdb)}, {IWL_PCI_DEVICE(0x2720, 0x1080, iwla000_2ax_cfg_hr)}, + {IWL_PCI_DEVICE(0x2720, 0x0090, iwla000_2ac_cfg_hr_cdb)}, + {IWL_PCI_DEVICE(0x2720, 0x0310, iwla000_2ac_cfg_hr_cdb)}, + {IWL_PCI_DEVICE(0x40C0, 0x0000, iwla000_2ax_cfg_hr)}, + {IWL_PCI_DEVICE(0x40C0, 0x0A10, iwla000_2ax_cfg_hr)}, + #endif /* CONFIG_IWLMVM */
{0}
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ihab Zhaika ihab.zhaika@intel.com
commit 7cddbef445631109bd530ce7cdacaa04ff0a62d1 upstream.
Add two new PCI IDs for the 8265 series.
Signed-off-by: Ihab Zhaika ihab.zhaika@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Cc: Thomas Backlund tmb@mageia.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/intel/iwlwifi/pcie/drv.c | 2 ++ 1 file changed, 2 insertions(+)
--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c @@ -508,6 +508,8 @@ static const struct pci_device_id iwl_hw {IWL_PCI_DEVICE(0x24FD, 0x3E01, iwl8275_2ac_cfg)}, {IWL_PCI_DEVICE(0x24FD, 0x1012, iwl8275_2ac_cfg)}, {IWL_PCI_DEVICE(0x24FD, 0x0012, iwl8275_2ac_cfg)}, + {IWL_PCI_DEVICE(0x24FD, 0x0014, iwl8265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x24FD, 0x9074, iwl8265_2ac_cfg)},
/* 9000 Series */ {IWL_PCI_DEVICE(0x2526, 0x0000, iwl9260_2ac_cfg)},
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ihab Zhaika ihab.zhaika@intel.com
commit d669fc2d42a43ee0abcf2396df6e9c5a124aa984 upstream.
Add three new PCI IDs for the 8260 series.
Signed-off-by: Ihab Zhaika ihab.zhaika@intel.com Signed-off-by: Luca Coelho luciano.coelho@intel.com Cc: Thomas Backlund tmb@mageia.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/intel/iwlwifi/pcie/drv.c | 3 +++ 1 file changed, 3 insertions(+)
--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c @@ -465,6 +465,8 @@ static const struct pci_device_id iwl_hw {IWL_PCI_DEVICE(0x24F3, 0x9110, iwl8260_2ac_cfg)}, {IWL_PCI_DEVICE(0x24F4, 0x8030, iwl8260_2ac_cfg)}, {IWL_PCI_DEVICE(0x24F4, 0x9030, iwl8260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x24F4, 0xC030, iwl8260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x24F4, 0xD030, iwl8260_2ac_cfg)}, {IWL_PCI_DEVICE(0x24F3, 0x8130, iwl8260_2ac_cfg)}, {IWL_PCI_DEVICE(0x24F3, 0x9130, iwl8260_2ac_cfg)}, {IWL_PCI_DEVICE(0x24F3, 0x8132, iwl8260_2ac_cfg)}, @@ -483,6 +485,7 @@ static const struct pci_device_id iwl_hw {IWL_PCI_DEVICE(0x24F3, 0x0950, iwl8260_2ac_cfg)}, {IWL_PCI_DEVICE(0x24F3, 0x0930, iwl8260_2ac_cfg)}, {IWL_PCI_DEVICE(0x24F3, 0x0000, iwl8265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x24F3, 0x4010, iwl8260_2ac_cfg)}, {IWL_PCI_DEVICE(0x24FD, 0x0010, iwl8265_2ac_cfg)}, {IWL_PCI_DEVICE(0x24FD, 0x0110, iwl8265_2ac_cfg)}, {IWL_PCI_DEVICE(0x24FD, 0x1110, iwl8265_2ac_cfg)},
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luca Coelho luciano.coelho@intel.com
commit dbc89253a7e15f8f031fb1eeb956de91204655e3 upstream.
A lot of PCI IDs were missing and there were some problems with the configuration and firmware selection for devices on the 9000 series. Fix the firmware selection by adding files for the B-steps; add configuration for some integrated devices; and add a bunch of PCI IDs (mostly for integrated devices) that were missing from the driver's list.
Without this patch, a lot of devices will not be recognized or will try to load the wrong firmware file.
Signed-off-by: Luca Coelho luciano.coelho@intel.com Cc: Thomas Backlund tmb@mageia.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/intel/iwlwifi/cfg/9000.c | 67 +++++++++++- drivers/net/wireless/intel/iwlwifi/iwl-config.h | 5 drivers/net/wireless/intel/iwlwifi/pcie/drv.c | 132 ++++++++++++++++++------ 3 files changed, 170 insertions(+), 34 deletions(-)
--- a/drivers/net/wireless/intel/iwlwifi/cfg/9000.c +++ b/drivers/net/wireless/intel/iwlwifi/cfg/9000.c @@ -72,12 +72,15 @@ #define IWL9000_SMEM_OFFSET 0x400000 #define IWL9000_SMEM_LEN 0x68000
-#define IWL9000_FW_PRE "iwlwifi-9000-pu-a0-jf-a0-" +#define IWL9000A_FW_PRE "iwlwifi-9000-pu-a0-jf-a0-" +#define IWL9000B_FW_PRE "iwlwifi-9000-pu-b0-jf-b0-" #define IWL9000RFB_FW_PRE "iwlwifi-9000-pu-a0-jf-b0-" #define IWL9260A_FW_PRE "iwlwifi-9260-th-a0-jf-a0-" #define IWL9260B_FW_PRE "iwlwifi-9260-th-b0-jf-b0-" -#define IWL9000_MODULE_FIRMWARE(api) \ - IWL9000_FW_PRE "-" __stringify(api) ".ucode" +#define IWL9000A_MODULE_FIRMWARE(api) \ + IWL9000A_FW_PRE __stringify(api) ".ucode" +#define IWL9000B_MODULE_FIRMWARE(api) \ + IWL9000B_FW_PRE __stringify(api) ".ucode" #define IWL9000RFB_MODULE_FIRMWARE(api) \ IWL9000RFB_FW_PRE __stringify(api) ".ucode" #define IWL9260A_MODULE_FIRMWARE(api) \ @@ -193,7 +196,48 @@ const struct iwl_cfg iwl9460_2ac_cfg = { .nvm_ver = IWL9000_NVM_VERSION, .nvm_calib_ver = IWL9000_TX_POWER_VERSION, .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, +}; + +const struct iwl_cfg iwl9460_2ac_cfg_soc = { + .name = "Intel(R) Dual Band Wireless AC 9460", + .fw_name_pre = IWL9000A_FW_PRE, + .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, + .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, + IWL_DEVICE_9000, + .ht_params = &iwl9000_ht_params, + .nvm_ver = IWL9000_NVM_VERSION, + .nvm_calib_ver = IWL9000_TX_POWER_VERSION, + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, .integrated = true, + .soc_latency = 5000, +}; + +const struct iwl_cfg iwl9461_2ac_cfg_soc = { + .name = "Intel(R) Dual Band Wireless AC 9461", + .fw_name_pre = IWL9000A_FW_PRE, + .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, + .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, + IWL_DEVICE_9000, + .ht_params = &iwl9000_ht_params, + .nvm_ver = IWL9000_NVM_VERSION, + .nvm_calib_ver = IWL9000_TX_POWER_VERSION, + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, + .integrated = true, + .soc_latency = 5000, +}; + +const struct iwl_cfg iwl9462_2ac_cfg_soc = { + .name = "Intel(R) Dual Band Wireless AC 9462", + .fw_name_pre = IWL9000A_FW_PRE, + .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, + .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, + IWL_DEVICE_9000, + .ht_params = &iwl9000_ht_params, + .nvm_ver = IWL9000_NVM_VERSION, + .nvm_calib_ver = IWL9000_TX_POWER_VERSION, + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, + .integrated = true, + .soc_latency = 5000, };
const struct iwl_cfg iwl9560_2ac_cfg = { @@ -205,10 +249,23 @@ const struct iwl_cfg iwl9560_2ac_cfg = { .nvm_ver = IWL9000_NVM_VERSION, .nvm_calib_ver = IWL9000_TX_POWER_VERSION, .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, - .integrated = true, };
-MODULE_FIRMWARE(IWL9000_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX)); +const struct iwl_cfg iwl9560_2ac_cfg_soc = { + .name = "Intel(R) Dual Band Wireless AC 9560", + .fw_name_pre = IWL9000A_FW_PRE, + .fw_name_pre_b_or_c_step = IWL9000B_FW_PRE, + .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, + IWL_DEVICE_9000, + .ht_params = &iwl9000_ht_params, + .nvm_ver = IWL9000_NVM_VERSION, + .nvm_calib_ver = IWL9000_TX_POWER_VERSION, + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K, + .integrated = true, + .soc_latency = 5000, +}; +MODULE_FIRMWARE(IWL9000A_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX)); +MODULE_FIRMWARE(IWL9000B_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX)); MODULE_FIRMWARE(IWL9000RFB_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX)); MODULE_FIRMWARE(IWL9260A_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX)); MODULE_FIRMWARE(IWL9260B_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX)); --- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h @@ -364,6 +364,7 @@ struct iwl_cfg { u32 dccm2_len; u32 smem_offset; u32 smem_len; + u32 soc_latency; u16 nvm_ver; u16 nvm_calib_ver; u16 rx_with_siso_diversity:1, @@ -471,6 +472,10 @@ extern const struct iwl_cfg iwl9260_2ac_ extern const struct iwl_cfg iwl9270_2ac_cfg; extern const struct iwl_cfg iwl9460_2ac_cfg; extern const struct iwl_cfg iwl9560_2ac_cfg; +extern const struct iwl_cfg iwl9460_2ac_cfg_soc; +extern const struct iwl_cfg iwl9461_2ac_cfg_soc; +extern const struct iwl_cfg iwl9462_2ac_cfg_soc; +extern const struct iwl_cfg iwl9560_2ac_cfg_soc; extern const struct iwl_cfg iwla000_2ac_cfg_hr; extern const struct iwl_cfg iwla000_2ac_cfg_hr_cdb; extern const struct iwl_cfg iwla000_2ac_cfg_jf; --- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c +++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c @@ -533,47 +533,121 @@ static const struct pci_device_id iwl_hw {IWL_PCI_DEVICE(0x2526, 0x0238, iwl9560_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x023C, iwl9560_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x0260, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x0264, iwl9461_2ac_cfg_soc)}, {IWL_PCI_DEVICE(0x2526, 0x02A0, iwl9460_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x02A4, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x1010, iwl9260_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x1030, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x1210, iwl9260_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x1410, iwl9270_2ac_cfg)}, - {IWL_PCI_DEVICE(0x2526, 0x1420, iwl9460_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x1420, iwl9460_2ac_cfg_soc)}, {IWL_PCI_DEVICE(0x2526, 0x1610, iwl9270_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x4010, iwl9260_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x4030, iwl9560_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9460_2ac_cfg)}, {IWL_PCI_DEVICE(0x2526, 0xA014, iwl9260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2526, 0x42A4, iwl9462_2ac_cfg_soc)}, {IWL_PCI_DEVICE(0x271B, 0x0010, iwl9160_2ac_cfg)}, {IWL_PCI_DEVICE(0x271B, 0x0014, iwl9160_2ac_cfg)}, {IWL_PCI_DEVICE(0x271B, 0x0210, iwl9160_2ac_cfg)}, - {IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x31DC, 0x0030, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x31DC, 0x0034, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x31DC, 0x0038, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x31DC, 0x003C, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x31DC, 0x0060, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0000, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0010, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0030, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0034, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0038, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 
0x003C, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0060, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0210, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0310, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0410, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0510, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0610, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0710, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x0A10, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x2010, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0x9DF0, 0x2A10, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0xA370, 0x0030, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0xA370, 0x0034, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0xA370, 0x0038, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0xA370, 0x003C, iwl9560_2ac_cfg)}, - {IWL_PCI_DEVICE(0xA370, 0x0060, iwl9460_2ac_cfg)}, - {IWL_PCI_DEVICE(0xA370, 0x1030, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x271B, 0x0214, iwl9260_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2720, 0x0034, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2720, 0x0038, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2720, 0x003C, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2720, 0x0060, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x2720, 0x0064, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x2720, 0x00A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x2720, 0x00A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x2720, 0x0230, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2720, 0x0234, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2720, 0x0238, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2720, 0x023C, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2720, 0x0260, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x2720, 0x0264, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x2720, 0x02A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x2720, 0x02A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x2720, 0x4030, iwl9560_2ac_cfg)}, + {IWL_PCI_DEVICE(0x2720, 0x40A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x30DC, 0x0064, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x30DC, 0x00A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x30DC, 0x00A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x30DC, 0x0260, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x30DC, 0x0264, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x30DC, 0x02A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x30DC, 0x02A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x0030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x0034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x0038, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x003C, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x0060, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x0064, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x00A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x00A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x0230, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x0234, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x0238, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x023C, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x0260, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x0264, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x02A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x02A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x4030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x4034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x31DC, 0x40A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x34F0, 0x0030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x34F0, 0x0034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x34F0, 0x02A4, 
iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0000, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0010, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0038, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x003C, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0060, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0064, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x00A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x00A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0210, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0230, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0234, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0238, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x023C, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0260, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0264, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x02A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x02A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0310, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0410, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0510, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0610, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0710, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x0A10, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x2010, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x2A10, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x4030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x4034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0x9DF0, 0x40A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x0030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x0034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x0038, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x003C, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x0060, iwl9460_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x0064, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x00A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x00A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x0230, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x0234, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x0238, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x023C, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x0260, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x0264, iwl9461_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x02A0, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x02A4, iwl9462_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x1030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x4030, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x4034, iwl9560_2ac_cfg_soc)}, + {IWL_PCI_DEVICE(0xA370, 0x40A4, iwl9462_2ac_cfg_soc)},
/* a000 Series */ {IWL_PCI_DEVICE(0x2720, 0x0A10, iwla000_2ac_cfg_hr_cdb)},
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Luca Coelho luciano.coelho@intel.com
commit dac4df1c5f2c34903f61b1bc4fc722e31b4199e7 upstream.
Newer firmware versions (such as iwlwifi-8000C-34.ucode) have introduced an API change in the SCAN_REQ_UMAC command that is not backwards compatible. The driver needs to detect and use the new API format when the firmware reports it, otherwise the scan command will not work properly, causing a command timeout.
Fix this by adding a TLV that tells the driver that the new API is in use and use the correct structures for it.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=197591
Fixes: d7a5b3e9e42e ("iwlwifi: mvm: bump API to 34 for 8000 and up") Signed-off-by: Luca Coelho luciano.coelho@intel.com Cc: Thomas Backlund tmb@mageia.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/wireless/intel/iwlwifi/fw/api/scan.h | 59 ++++++++++++--- drivers/net/wireless/intel/iwlwifi/fw/file.h | 1 drivers/net/wireless/intel/iwlwifi/mvm/mvm.h | 6 + drivers/net/wireless/intel/iwlwifi/mvm/scan.c | 86 +++++++++++++++++------ 4 files changed, 118 insertions(+), 34 deletions(-)
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/api/scan.h @@ -531,6 +531,8 @@ struct iwl_scan_config_v1 { } __packed; /* SCAN_CONFIG_DB_CMD_API_S */
#define SCAN_TWO_LMACS 2 +#define SCAN_LB_LMAC_IDX 0 +#define SCAN_HB_LMAC_IDX 1
struct iwl_scan_config { __le32 flags; @@ -578,6 +580,7 @@ enum iwl_umac_scan_general_flags { IWL_UMAC_SCAN_GEN_FLAGS_MATCH = BIT(9), IWL_UMAC_SCAN_GEN_FLAGS_EXTENDED_DWELL = BIT(10), IWL_UMAC_SCAN_GEN_FLAGS_LMAC2_FRAGMENTED = BIT(11), + IWL_UMAC_SCAN_GEN_FLAGS_ADAPTIVE_DWELL = BIT(13), };
/** @@ -631,12 +634,17 @@ struct iwl_scan_req_umac_tail { * @uid: scan id, &enum iwl_umac_scan_uid_offsets * @ooc_priority: out of channel priority - &enum iwl_scan_priority * @general_flags: &enum iwl_umac_scan_general_flags - * @reserved2: for future use and alignment * @scan_start_mac_id: report the scan start TSF time according to this mac TSF * @extended_dwell: dwell time for channels 1, 6 and 11 * @active_dwell: dwell time for active scan * @passive_dwell: dwell time for passive scan * @fragmented_dwell: dwell time for fragmented passive scan + * @adwell_default_n_aps: for adaptive dwell the default number of APs + * per channel + * @adwell_default_n_aps_social: for adaptive dwell the default + * number of APs per social (1,6,11) channel + * @adwell_max_budget: for adaptive dwell the maximal budget of TU to be added + * to total scan time * @max_out_time: max out of serving channel time, per LMAC - for CDB there * are 2 LMACs * @suspend_time: max suspend time, per LMAC - for CDB there are 2 LMACs @@ -644,6 +652,8 @@ struct iwl_scan_req_umac_tail { * @channel_flags: &enum iwl_scan_channel_flags * @n_channels: num of channels in scan request * @reserved: for future use and alignment + * @reserved2: for future use and alignment + * @reserved3: for future use and alignment * @data: &struct iwl_scan_channel_cfg_umac and * &struct iwl_scan_req_umac_tail */ @@ -651,41 +661,64 @@ struct iwl_scan_req_umac { __le32 flags; __le32 uid; __le32 ooc_priority; - /* SCAN_GENERAL_PARAMS_API_S_VER_4 */ __le16 general_flags; - u8 reserved2; + u8 reserved; u8 scan_start_mac_id; - u8 extended_dwell; - u8 active_dwell; - u8 passive_dwell; - u8 fragmented_dwell; union { struct { + u8 extended_dwell; + u8 active_dwell; + u8 passive_dwell; + u8 fragmented_dwell; __le32 max_out_time; __le32 suspend_time; __le32 scan_priority; - /* SCAN_CHANNEL_PARAMS_API_S_VER_4 */ + /* SCAN_CHANNEL_PARAMS_API_S_VER_1 */ u8 channel_flags; u8 n_channels; - __le16 reserved; + __le16 reserved2; u8 data[]; } v1; /* SCAN_REQUEST_CMD_UMAC_API_S_VER_1 */ struct { + u8 extended_dwell; + u8 active_dwell; + u8 passive_dwell; + u8 fragmented_dwell; __le32 max_out_time[SCAN_TWO_LMACS]; __le32 suspend_time[SCAN_TWO_LMACS]; __le32 scan_priority; - /* SCAN_CHANNEL_PARAMS_API_S_VER_4 */ + /* SCAN_CHANNEL_PARAMS_API_S_VER_1 */ u8 channel_flags; u8 n_channels; - __le16 reserved; + __le16 reserved2; u8 data[]; } v6; /* SCAN_REQUEST_CMD_UMAC_API_S_VER_6 */ + struct { + u8 active_dwell; + u8 passive_dwell; + u8 fragmented_dwell; + u8 adwell_default_n_aps; + u8 adwell_default_n_aps_social; + u8 reserved3; + __le16 adwell_max_budget; + __le32 max_out_time[SCAN_TWO_LMACS]; + __le32 suspend_time[SCAN_TWO_LMACS]; + __le32 scan_priority; + /* SCAN_CHANNEL_PARAMS_API_S_VER_1 */ + u8 channel_flags; + u8 n_channels; + __le16 reserved2; + u8 data[]; + } v7; /* SCAN_REQUEST_CMD_UMAC_API_S_VER_7 */ }; } __packed;
-#define IWL_SCAN_REQ_UMAC_SIZE sizeof(struct iwl_scan_req_umac) +#define IWL_SCAN_REQ_UMAC_SIZE_V7 sizeof(struct iwl_scan_req_umac) +#define IWL_SCAN_REQ_UMAC_SIZE_V6 (sizeof(struct iwl_scan_req_umac) - \ + 2 * sizeof(u8) - sizeof(__le16)) #define IWL_SCAN_REQ_UMAC_SIZE_V1 (sizeof(struct iwl_scan_req_umac) - \ - 2 * sizeof(__le32)) + 2 * sizeof(__le32) - 2 * sizeof(u8) - \ + sizeof(__le16))
/** * struct iwl_umac_scan_abort --- a/drivers/net/wireless/intel/iwlwifi/fw/file.h +++ b/drivers/net/wireless/intel/iwlwifi/fw/file.h @@ -262,6 +262,7 @@ enum iwl_ucode_tlv_api { IWL_UCODE_TLV_API_STA_TYPE = (__force iwl_ucode_tlv_api_t)30, IWL_UCODE_TLV_API_NAN2_VER2 = (__force iwl_ucode_tlv_api_t)31, /* API Set 1 */ + IWL_UCODE_TLV_API_ADAPTIVE_DWELL = (__force iwl_ucode_tlv_api_t)32, IWL_UCODE_TLV_API_NEW_BEACON_TEMPLATE = (__force iwl_ucode_tlv_api_t)34, IWL_UCODE_TLV_API_NEW_RX_STATS = (__force iwl_ucode_tlv_api_t)35, IWL_UCODE_TLV_API_COEX_ATS_EXTERNAL = (__force iwl_ucode_tlv_api_t)37, --- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h @@ -1124,6 +1124,12 @@ static inline bool iwl_mvm_is_d0i3_suppo IWL_UCODE_TLV_CAPA_D0I3_SUPPORT); }
+static inline bool iwl_mvm_is_adaptive_dwell_supported(struct iwl_mvm *mvm) +{ + return fw_has_api(&mvm->fw->ucode_capa, + IWL_UCODE_TLV_API_ADAPTIVE_DWELL); +} + static inline bool iwl_mvm_enter_d0i3_on_suspend(struct iwl_mvm *mvm) { /* For now we only use this mode to differentiate between --- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c @@ -130,6 +130,19 @@ struct iwl_mvm_scan_params { u32 measurement_dwell; };
+static inline void *iwl_mvm_get_scan_req_umac_data(struct iwl_mvm *mvm) +{ + struct iwl_scan_req_umac *cmd = mvm->scan_cmd; + + if (iwl_mvm_is_adaptive_dwell_supported(mvm)) + return (void *)&cmd->v7.data; + + if (iwl_mvm_has_new_tx_api(mvm)) + return (void *)&cmd->v6.data; + + return (void *)&cmd->v1.data; +} + static u8 iwl_mvm_scan_rx_ant(struct iwl_mvm *mvm) { if (mvm->scan_rx_ant != ANT_NONE) @@ -1075,25 +1088,57 @@ static void iwl_mvm_scan_umac_dwell(stru { struct iwl_mvm_scan_timing_params *timing = &scan_timing[params->type];
+ if (iwl_mvm_is_regular_scan(params)) + cmd->ooc_priority = cpu_to_le32(IWL_SCAN_PRIORITY_EXT_6); + else + cmd->ooc_priority = cpu_to_le32(IWL_SCAN_PRIORITY_EXT_2); + + if (iwl_mvm_is_adaptive_dwell_supported(mvm)) { + if (params->measurement_dwell) { + cmd->v7.active_dwell = params->measurement_dwell; + cmd->v7.passive_dwell = params->measurement_dwell; + } else { + cmd->v7.active_dwell = IWL_SCAN_DWELL_ACTIVE; + cmd->v7.passive_dwell = IWL_SCAN_DWELL_PASSIVE; + } + cmd->v7.fragmented_dwell = IWL_SCAN_DWELL_FRAGMENTED; + + cmd->v7.scan_priority = cpu_to_le32(IWL_SCAN_PRIORITY_EXT_6); + cmd->v7.max_out_time[SCAN_LB_LMAC_IDX] = + cpu_to_le32(timing->max_out_time); + cmd->v7.suspend_time[SCAN_LB_LMAC_IDX] = + cpu_to_le32(timing->suspend_time); + if (iwl_mvm_is_cdb_supported(mvm)) { + cmd->v7.max_out_time[SCAN_HB_LMAC_IDX] = + cpu_to_le32(timing->max_out_time); + cmd->v7.suspend_time[SCAN_HB_LMAC_IDX] = + cpu_to_le32(timing->suspend_time); + } + + return; + } + if (params->measurement_dwell) { - cmd->active_dwell = params->measurement_dwell; - cmd->passive_dwell = params->measurement_dwell; - cmd->extended_dwell = params->measurement_dwell; + cmd->v1.active_dwell = params->measurement_dwell; + cmd->v1.passive_dwell = params->measurement_dwell; + cmd->v1.extended_dwell = params->measurement_dwell; } else { - cmd->active_dwell = IWL_SCAN_DWELL_ACTIVE; - cmd->passive_dwell = IWL_SCAN_DWELL_PASSIVE; - cmd->extended_dwell = IWL_SCAN_DWELL_EXTENDED; + cmd->v1.active_dwell = IWL_SCAN_DWELL_ACTIVE; + cmd->v1.passive_dwell = IWL_SCAN_DWELL_PASSIVE; + cmd->v1.extended_dwell = IWL_SCAN_DWELL_EXTENDED; } - cmd->fragmented_dwell = IWL_SCAN_DWELL_FRAGMENTED; + cmd->v1.fragmented_dwell = IWL_SCAN_DWELL_FRAGMENTED;
if (iwl_mvm_has_new_tx_api(mvm)) { cmd->v6.scan_priority = cpu_to_le32(IWL_SCAN_PRIORITY_EXT_6); - cmd->v6.max_out_time[0] = cpu_to_le32(timing->max_out_time); - cmd->v6.suspend_time[0] = cpu_to_le32(timing->suspend_time); + cmd->v6.max_out_time[SCAN_LB_LMAC_IDX] = + cpu_to_le32(timing->max_out_time); + cmd->v6.suspend_time[SCAN_LB_LMAC_IDX] = + cpu_to_le32(timing->suspend_time); if (iwl_mvm_is_cdb_supported(mvm)) { - cmd->v6.max_out_time[1] = + cmd->v6.max_out_time[SCAN_HB_LMAC_IDX] = cpu_to_le32(timing->max_out_time); - cmd->v6.suspend_time[1] = + cmd->v6.suspend_time[SCAN_HB_LMAC_IDX] = cpu_to_le32(timing->suspend_time); } } else { @@ -1102,11 +1147,6 @@ static void iwl_mvm_scan_umac_dwell(stru cmd->v1.scan_priority = cpu_to_le32(IWL_SCAN_PRIORITY_EXT_6); } - - if (iwl_mvm_is_regular_scan(params)) - cmd->ooc_priority = cpu_to_le32(IWL_SCAN_PRIORITY_EXT_6); - else - cmd->ooc_priority = cpu_to_le32(IWL_SCAN_PRIORITY_EXT_2); }
static void @@ -1178,8 +1218,7 @@ static int iwl_mvm_scan_umac(struct iwl_ int type) { struct iwl_scan_req_umac *cmd = mvm->scan_cmd; - void *cmd_data = iwl_mvm_has_new_tx_api(mvm) ? - (void *)&cmd->v6.data : (void *)&cmd->v1.data; + void *cmd_data = iwl_mvm_get_scan_req_umac_data(mvm); struct iwl_scan_req_umac_tail *sec_part = cmd_data + sizeof(struct iwl_scan_channel_cfg_umac) * mvm->fw->ucode_capa.n_scan_channels; @@ -1216,7 +1255,10 @@ static int iwl_mvm_scan_umac(struct iwl_ IWL_SCAN_CHANNEL_FLAG_EBS_ACCURATE | IWL_SCAN_CHANNEL_FLAG_CACHE_ADD;
- if (iwl_mvm_has_new_tx_api(mvm)) { + if (iwl_mvm_is_adaptive_dwell_supported(mvm)) { + cmd->v7.channel_flags = channel_flags; + cmd->v7.n_channels = params->n_channels; + } else if (iwl_mvm_has_new_tx_api(mvm)) { cmd->v6.channel_flags = channel_flags; cmd->v6.n_channels = params->n_channels; } else { @@ -1661,8 +1703,10 @@ int iwl_mvm_scan_size(struct iwl_mvm *mv { int base_size = IWL_SCAN_REQ_UMAC_SIZE_V1;
- if (iwl_mvm_has_new_tx_api(mvm)) - base_size = IWL_SCAN_REQ_UMAC_SIZE; + if (iwl_mvm_is_adaptive_dwell_supported(mvm)) + base_size = IWL_SCAN_REQ_UMAC_SIZE_V7; + else if (iwl_mvm_has_new_tx_api(mvm)) + base_size = IWL_SCAN_REQ_UMAC_SIZE_V6;
if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_UMAC_SCAN)) return base_size +
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Poirier bpoirier@suse.com
commit c4c40e51f9c32c6dd8adf606624c930a1c4d9bbb upstream.
In case of error from e1e_rphy(), the loop will exit early and "success" will be set to true erroneously.
Signed-off-by: Benjamin Poirier bpoirier@suse.com Tested-by: Aaron Brown aaron.f.brown@intel.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Amit Pundir amit.pundir@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/intel/e1000e/phy.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
--- a/drivers/net/ethernet/intel/e1000e/phy.c +++ b/drivers/net/ethernet/intel/e1000e/phy.c @@ -1744,6 +1744,7 @@ s32 e1000e_phy_has_link_generic(struct e s32 ret_val = 0; u16 i, phy_status;
+ *success = false; for (i = 0; i < iterations; i++) { /* Some PHYs require the MII_BMSR register to be read * twice due to the link bit being sticky. No harm doing @@ -1763,16 +1764,16 @@ s32 e1000e_phy_has_link_generic(struct e ret_val = e1e_rphy(hw, MII_BMSR, &phy_status); if (ret_val) break; - if (phy_status & BMSR_LSTATUS) + if (phy_status & BMSR_LSTATUS) { + *success = true; break; + } if (usec_interval >= 1000) msleep(usec_interval / 1000); else udelay(usec_interval); }
- *success = (i < iterations); - return ret_val; }
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Poirier bpoirier@suse.com
commit d3509f8bc7b0560044c15f0e3ecfde1d9af757a6 upstream.
All the helpers return -E1000_ERR_PHY (the negated constant), so the comparison against the positive E1000_ERR_PHY value in e1000e_has_link() can never match.
Signed-off-by: Benjamin Poirier bpoirier@suse.com Tested-by: Aaron Brown aaron.f.brown@intel.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Amit Pundir amit.pundir@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/net/ethernet/intel/e1000e/netdev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -5099,7 +5099,7 @@ static bool e1000e_has_link(struct e1000
 		break;
 	}

-	if ((ret_val == E1000_ERR_PHY) && (hw->phy.type == e1000_phy_igp_3) &&
+	if ((ret_val == -E1000_ERR_PHY) && (hw->phy.type == e1000_phy_igp_3) &&
 	    (er32(CTRL) & E1000_PHY_CTRL_GBE_DISABLE)) {
 		/* See e1000_kmrn_lock_loss_workaround_ich8lan() */
 		e_info("Gigabit has been disabled, downgrading speed\n");
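As a minimal illustration of the sign convention behind this one-line fix (standalone C; the constant value and helper are invented for the example, only the pattern matches the driver): the helpers hand back the negated error constant, so only a comparison against the negative value can ever be true.

#include <stdio.h>

#define E1000_ERR_PHY 2			/* error constants are positive... */

/* ...but the helpers return them negated; made-up stand-in for e1e_rphy() */
static int read_phy_reg(void)
{
	return -E1000_ERR_PHY;		/* simulated PHY access failure */
}

int main(void)
{
	int ret_val = read_phy_reg();

	/* old check: never true, because ret_val is negative */
	if (ret_val == E1000_ERR_PHY)
		printf("workaround path taken (unreachable)\n");

	/* fixed check: matches what the helpers actually return */
	if (ret_val == -E1000_ERR_PHY)
		printf("workaround path taken\n");

	return 0;
}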
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Poirier bpoirier@suse.com
commit 19110cfbb34d4af0cdfe14cd243f3b09dc95b013 upstream.
Lennart reported the following race condition:
    \ e1000_watchdog_task
        \ e1000e_has_link
            \ hw->mac.ops.check_for_link() === e1000e_check_for_copper_link
                /* link is up */
                mac->get_link_status = false;

                                        /* interrupt */
                                        \ e1000_msix_other
                                            hw->mac.get_link_status = true;

            link_active = !hw->mac.get_link_status
            /* link_active is false, wrongly */
This problem arises because the single flag get_link_status is used to signal two different states: link status needs checking and link status is down.
Avoid the problem by using the return value of .check_for_link to signal the link status to e1000e_has_link().
Reported-by: Lennart Sorensen lsorense@csclub.uwaterloo.ca Signed-off-by: Benjamin Poirier bpoirier@suse.com Tested-by: Aaron Brown aaron.f.brown@intel.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Amit Pundir amit.pundir@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/net/ethernet/intel/e1000e/mac.c    | 11 ++++++++---
 drivers/net/ethernet/intel/e1000e/netdev.c |  2 +-
 2 files changed, 9 insertions(+), 4 deletions(-)

--- a/drivers/net/ethernet/intel/e1000e/mac.c
+++ b/drivers/net/ethernet/intel/e1000e/mac.c
@@ -410,6 +410,9 @@ void e1000e_clear_hw_cntrs_base(struct e
  *  Checks to see of the link status of the hardware has changed.  If a
  *  change in link status has been detected, then we read the PHY registers
  *  to get the current speed/duplex if link exists.
+ *
+ *  Returns a negative error code (-E1000_ERR_*) or 0 (link down) or 1 (link
+ *  up).
  **/
 s32 e1000e_check_for_copper_link(struct e1000_hw *hw)
 {
@@ -423,7 +426,7 @@ s32 e1000e_check_for_copper_link(struct
 	 * Change or Rx Sequence Error interrupt.
 	 */
 	if (!mac->get_link_status)
-		return 0;
+		return 1;

 	/* First we want to see if the MII Status Register reports
 	 * link.  If so, then we want to get the current speed/duplex
@@ -461,10 +464,12 @@ s32 e1000e_check_for_copper_link(struct
 	 * different link partner.
 	 */
 	ret_val = e1000e_config_fc_after_link_up(hw);
-	if (ret_val)
+	if (ret_val) {
 		e_dbg("Error configuring flow control\n");
+		return ret_val;
+	}

-	return ret_val;
+	return 1;
 }

 /**
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -5081,7 +5081,7 @@ static bool e1000e_has_link(struct e1000
 	case e1000_media_type_copper:
 		if (hw->mac.get_link_status) {
 			ret_val = hw->mac.ops.check_for_link(hw);
-			link_active = !hw->mac.get_link_status;
+			link_active = ret_val > 0;
 		} else {
 			link_active = true;
 		}
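A simplified model of the new convention may help (standalone C sketch, not the driver source; the global flag and helper below are illustrative): check_for_link() now reports a negative error code, 0 for link down, or 1 for link up, and the caller computes link_active from that return value instead of re-reading the shared get_link_status flag, which a concurrent interrupt may already have set back to true.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative global: the flag an interrupt handler may flip at any time. */
static bool get_link_status = true;

/*
 * Simplified model of e1000e_check_for_copper_link() after this change:
 * returns <0 on error, 0 when the link is down, 1 when the link is up.
 */
static int check_for_copper_link(bool phy_reports_link)
{
	if (!get_link_status)
		return 1;		/* nothing pending, link already up */

	if (!phy_reports_link)
		return 0;		/* still down, keep checking */

	get_link_status = false;	/* link came up, remember that */
	return 1;
}

int main(void)
{
	int ret_val = check_for_copper_link(true);

	/*
	 * An interrupt could set get_link_status back to true right here;
	 * link_active no longer depends on that shared flag.
	 */
	bool link_active = ret_val > 0;

	printf("link_active=%d\n", link_active);
	return 0;
}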
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Benjamin Poirier bpoirier@suse.com
commit 4aea7a5c5e940c1723add439f4088844cd26196d upstream.
When e1000e_poll() is not fast enough to keep up with incoming traffic, the adapter (when operating in msix mode) raises the Other interrupt to signal Receiver Overrun.
This is a double problem because 1) at the moment e1000_msix_other() assumes that it is only called in case of Link Status Change and 2) if the condition persists, the interrupt is repeatedly raised again in quick succession.
Ideally we would configure the Other interrupt to not be raised in case of receiver overrun but this doesn't seem possible on this adapter. Instead, we handle the first part of the problem by reverting to the practice of reading ICR in the other interrupt handler, like before commit 16ecba59bc33 ("e1000e: Do not read ICR in Other interrupt"). Thanks to commit 0a8047ac68e5 ("e1000e: Fix msi-x interrupt automask") which cleared IAME from CTRL_EXT, reading ICR doesn't interfere with RxQ0, TxQ0 interrupts anymore. We handle the second part of the problem by not re-enabling the Other interrupt right away when there is overrun. Instead, we wait until traffic subsides, napi polling mode is exited and interrupts are re-enabled.
Reported-by: Lennart Sorensen lsorense@csclub.uwaterloo.ca Fixes: 16ecba59bc33 ("e1000e: Do not read ICR in Other interrupt") Signed-off-by: Benjamin Poirier bpoirier@suse.com Tested-by: Aaron Brown aaron.f.brown@intel.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Amit Pundir amit.pundir@linaro.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/net/ethernet/intel/e1000e/defines.h |    1 
 drivers/net/ethernet/intel/e1000e/netdev.c  |   31 +++++++++++++++++++++-------
 2 files changed, 25 insertions(+), 7 deletions(-)

--- a/drivers/net/ethernet/intel/e1000e/defines.h
+++ b/drivers/net/ethernet/intel/e1000e/defines.h
@@ -398,6 +398,7 @@
 #define E1000_ICR_LSC           0x00000004 /* Link Status Change */
 #define E1000_ICR_RXSEQ         0x00000008 /* Rx sequence error */
 #define E1000_ICR_RXDMT0        0x00000010 /* Rx desc min. threshold (0) */
+#define E1000_ICR_RXO           0x00000040 /* Receiver Overrun */
 #define E1000_ICR_RXT0          0x00000080 /* Rx timer intr (ring 0) */
 #define E1000_ICR_ECCER         0x00400000 /* Uncorrectable ECC Error */
 /* If this bit asserted, the driver should claim the interrupt */
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -1910,14 +1910,30 @@ static irqreturn_t e1000_msix_other(int
 	struct net_device *netdev = data;
 	struct e1000_adapter *adapter = netdev_priv(netdev);
 	struct e1000_hw *hw = &adapter->hw;
+	u32 icr;
+	bool enable = true;

-	hw->mac.get_link_status = true;
+	icr = er32(ICR);
+	if (icr & E1000_ICR_RXO) {
+		ew32(ICR, E1000_ICR_RXO);
+		enable = false;
+		/* napi poll will re-enable Other, make sure it runs */
+		if (napi_schedule_prep(&adapter->napi)) {
+			adapter->total_rx_bytes = 0;
+			adapter->total_rx_packets = 0;
+			__napi_schedule(&adapter->napi);
+		}
+	}
+	if (icr & E1000_ICR_LSC) {
+		ew32(ICR, E1000_ICR_LSC);
+		hw->mac.get_link_status = true;
+		/* guard against interrupt when we're going down */
+		if (!test_bit(__E1000_DOWN, &adapter->state))
+			mod_timer(&adapter->watchdog_timer, jiffies + 1);
+	}

-	/* guard against interrupt when we're going down */
-	if (!test_bit(__E1000_DOWN, &adapter->state)) {
-		mod_timer(&adapter->watchdog_timer, jiffies + 1);
+	if (enable && !test_bit(__E1000_DOWN, &adapter->state))
 		ew32(IMS, E1000_IMS_OTHER);
-	}

 	return IRQ_HANDLED;
 }
@@ -2687,7 +2703,8 @@ static int e1000e_poll(struct napi_struc
 		napi_complete_done(napi, work_done);
 		if (!test_bit(__E1000_DOWN, &adapter->state)) {
 			if (adapter->msix_entries)
-				ew32(IMS, adapter->rx_ring->ims_val);
+				ew32(IMS, adapter->rx_ring->ims_val |
+				     E1000_IMS_OTHER);
 			else
 				e1000_irq_enable(adapter);
 		}
@@ -4204,7 +4221,7 @@ static void e1000e_trigger_lsc(struct e1
 	struct e1000_hw *hw = &adapter->hw;

 	if (adapter->msix_entries)
-		ew32(ICS, E1000_ICS_OTHER);
+		ew32(ICS, E1000_ICS_LSC | E1000_ICS_OTHER);
 	else
 		ew32(ICS, E1000_ICS_LSC);
 }
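The reworked handler logic can be summarized with a small standalone sketch (plain C; the out-parameters model side effects such as scheduling NAPI, and only the two bit values are copied from the defines above, everything else is illustrative): on RXO the Other interrupt is left masked and NAPI is scheduled so that the poll routine re-enables it later, while LSC still kicks the watchdog.

#include <stdbool.h>
#include <stdio.h>

#define E1000_ICR_LSC 0x00000004	/* Link Status Change */
#define E1000_ICR_RXO 0x00000040	/* Receiver Overrun */

/*
 * Illustrative decision logic of the reworked MSI-X "Other" handler: the
 * out-parameters stand in for the side effects in the real driver
 * (scheduling NAPI, arming the watchdog, re-enabling IMS_OTHER).
 */
static void msix_other(unsigned int icr, bool *schedule_napi,
		       bool *kick_watchdog, bool *reenable_other)
{
	*schedule_napi = false;
	*kick_watchdog = false;
	*reenable_other = true;

	if (icr & E1000_ICR_RXO) {
		/* overrun: let the napi poll re-enable Other once it is done */
		*reenable_other = false;
		*schedule_napi = true;
	}
	if (icr & E1000_ICR_LSC)
		*kick_watchdog = true;	/* link change handled by watchdog */
}

int main(void)
{
	bool napi, watchdog, other;

	msix_other(E1000_ICR_RXO | E1000_ICR_LSC, &napi, &watchdog, &other);
	printf("napi=%d watchdog=%d reenable_other=%d\n", napi, watchdog, other);
	return 0;
}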
On 28 November 2017 at 15:54, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 4.14.3 release. There are 193 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu Nov 30 10:05:26 UTC 2017. Anything received after that time might be too late.
The whole patch series can be found in one patch at: kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.3-rc1.gz or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm and x86_64.
Summary
------------------------------------------------------------------------
kernel: 4.14.3-rc1
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-4.14.y
git commit: 9ff910a1edbfe3044963b615a4fb2d29f611579d
git describe: v4.14.2-194-g9ff910a1edbf
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-4.14-oe/build/v4.14.2-194...
No regressions (compared to build v4.14.2-181-g684cdd60a58a)
Boards, architectures and test suites:
-------------------------------------
hi6220-hikey - arm64
* boot - pass: 20
* kselftest - skip: 14, pass: 39
* libhugetlbfs - skip: 1, pass: 90
* ltp-cap_bounds-tests - pass: 2
* ltp-containers-tests - pass: 64
* ltp-fcntl-locktests-tests - pass: 2
* ltp-filecaps-tests - pass: 2
* ltp-fs-tests - pass: 60
* ltp-fs_bind-tests - pass: 2
* ltp-fs_perms_simple-tests - pass: 19
* ltp-fsx-tests - pass: 2
* ltp-hugetlb-tests - skip: 1, pass: 21
* ltp-io-tests - pass: 3
* ltp-ipc-tests - pass: 9
* ltp-math-tests - pass: 11
* ltp-nptl-tests - pass: 2
* ltp-pty-tests - pass: 4
* ltp-sched-tests - pass: 14
* ltp-securebits-tests - pass: 4
* ltp-syscalls-tests - skip: 121, pass: 982
* ltp-timers-tests - pass: 12

juno-r2 - arm64
* boot - pass: 20
* kselftest - skip: 16, pass: 38
* libhugetlbfs - skip: 1, pass: 90
* ltp-cap_bounds-tests - pass: 2
* ltp-containers-tests - pass: 64
* ltp-fcntl-locktests-tests - pass: 2
* ltp-filecaps-tests - pass: 2
* ltp-fs-tests - pass: 60
* ltp-fs_bind-tests - pass: 2
* ltp-fs_perms_simple-tests - pass: 19
* ltp-fsx-tests - pass: 2
* ltp-hugetlb-tests - pass: 22
* ltp-io-tests - pass: 3
* ltp-ipc-tests - pass: 9
* ltp-math-tests - pass: 11
* ltp-nptl-tests - pass: 2
* ltp-pty-tests - pass: 4
* ltp-sched-tests - pass: 10
* ltp-securebits-tests - pass: 4
* ltp-syscalls-tests - skip: 156, pass: 939
* ltp-timers-tests - pass: 12

x15 - arm
* boot - pass: 20
* kselftest - skip: 18, pass: 35
* libhugetlbfs - skip: 1, pass: 87
* ltp-cap_bounds-tests - pass: 2
* ltp-containers-tests - pass: 64
* ltp-fcntl-locktests-tests - pass: 2
* ltp-filecaps-tests - pass: 2
* ltp-fs-tests - pass: 60
* ltp-fs_bind-tests - pass: 2
* ltp-fs_perms_simple-tests - pass: 19
* ltp-fsx-tests - pass: 2
* ltp-hugetlb-tests - skip: 2, pass: 20
* ltp-io-tests - pass: 3
* ltp-ipc-tests - pass: 9
* ltp-math-tests - pass: 11
* ltp-nptl-tests - pass: 2
* ltp-pty-tests - pass: 4
* ltp-sched-tests - skip: 1, pass: 13
* ltp-securebits-tests - pass: 4
* ltp-syscalls-tests - skip: 66, pass: 1036
* ltp-timers-tests - pass: 12

x86_64
* boot - pass: 20
* kselftest - skip: 13, pass: 54
* libhugetlbfs - skip: 1, pass: 76
* ltp-cap_bounds-tests - pass: 2
* ltp-containers-tests - pass: 64
* ltp-fcntl-locktests-tests - pass: 2
* ltp-filecaps-tests - pass: 2
* ltp-fs-tests - skip: 1, pass: 61
* ltp-fs_bind-tests - pass: 2
* ltp-fs_perms_simple-tests - pass: 19
* ltp-fsx-tests - pass: 2
* ltp-hugetlb-tests - pass: 22
* ltp-io-tests - pass: 2
* ltp-ipc-tests - pass: 9
* ltp-math-tests - pass: 11
* ltp-nptl-tests - pass: 1
* ltp-pty-tests - pass: 4
* ltp-sched-tests - skip: 1, pass: 9
* ltp-securebits-tests - pass: 4
* ltp-syscalls-tests - skip: 163, pass: 957
* ltp-timers-tests - pass: 12
Documentation - https://collaborate.linaro.org/display/LKFT/Email+Reports
Signed-off-by: Naresh Kamboju naresh.kamboju@linaro.org
On Tue, Nov 28, 2017 at 11:57:22PM +0530, Naresh Kamboju wrote:
On 28 November 2017 at 15:54, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 4.14.3 release. There are 193 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu Nov 30 10:05:26 UTC 2017. Anything received after that time might be too late.
The whole patch series can be found in one patch at: kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.3-rc1.gz or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm and x86_64.
Thanks for testing and letting me know.
greg k-h
On 11/28/2017 03:24 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 4.14.3 release. There are 193 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu Nov 30 10:05:26 UTC 2017. Anything received after that time might be too late.
The whole patch series can be found in one patch at: kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.3-rc1.gz or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted on my test system. No dmesg regressions.
thanks,
-- Shuah
On Tue, Nov 28, 2017 at 11:24:07AM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 4.14.3 release. There are 193 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu Nov 30 10:05:26 UTC 2017. Anything received after that time might be too late.
Build results:
	total: 145 pass: 145 fail: 0
Qemu test results:
	total: 123 pass: 123 fail: 0
Details are available at http://kerneltests.org/builders.
Guenter
On Tue, Nov 28, 2017 at 01:52:22PM -0800, Guenter Roeck wrote:
On Tue, Nov 28, 2017 at 11:24:07AM +0100, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 4.14.3 release. There are 193 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu Nov 30 10:05:26 UTC 2017. Anything received after that time might be too late.
Build results:
	total: 145 pass: 145 fail: 0
Qemu test results:
	total: 123 pass: 123 fail: 0
Details are available at http://kerneltests.org/builders.
Thanks for testing all of these and letting me know.
greg k-h
On Nov 28, 2017, at 4:24 AM, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 4.14.3 release. There are 193 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu Nov 30 10:05:26 UTC 2017. Anything received after that time might be too late.
The whole patch series can be found in one patch at: kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.3-rc1.gz or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y and the diffstat can be found below.
thanks,
greg k-h
Successfully runs the following package test suites on x86-64 without any errors: alsa-lib boringssl e2fsprogs libusb sqlite
Signed-off-by: Tom Gall tom.gall@linaro.org
On Tue, Nov 28, 2017 at 04:17:08PM -0600, Tom Gall wrote:
On Nov 28, 2017, at 4:24 AM, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 4.14.3 release. There are 193 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu Nov 30 10:05:26 UTC 2017. Anything received after that time might be too late.
The whole patch series can be found in one patch at: kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.3-rc1.gz or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y and the diffstat can be found below.
thanks,
greg k-h
Successfully runs the following package test suites on x86-64 without any errors: alsa-lib boringssl e2fsprogs libusb sqlite
Signed-off-by: Tom Gall tom.gall@linaro.org
What exactly are you signing off on here? You do know what that line means, right?
totally confused,
greg k-h
On Nov 28, 2017, at 11:13 PM, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
On Tue, Nov 28, 2017 at 04:17:08PM -0600, Tom Gall wrote:
On Nov 28, 2017, at 4:24 AM, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 4.14.3 release. There are 193 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu Nov 30 10:05:26 UTC 2017. Anything received after that time might be too late.
The whole patch series can be found in one patch at: kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.3-rc1.gz or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y and the diffstat can be found below.
thanks,
greg k-h
Successfully runs the following package test suites on x86-64 without any errors: alsa-lib boringssl e2fsprogs libusb sqlite
Signed-off-by: Tom Gall tom.gall@linaro.org
s/Signed-off-by:/Tested-by:/
What exactly are you signing off on here? You do know what that line means, right?
totally confused,
greg k-h
On 11/28/2017 11:24 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 4.14.3 release. There are 193 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu Nov 30 10:05:26 UTC 2017. Anything received after that time might be too late.
The whole patch series can be found in one patch at: kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.3-rc1.gz or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y and the diffstat can be found below.
thanks,
greg k-h
... snip ...
Paolo Bonzini pbonzini@redhat.com kvm: vmx: Reinstate support for CPUs without virtual NMI
KVM works again on old Core2 CPU ... thanks, Z.
On Wed, Nov 29, 2017 at 05:04:34PM +0100, Zdenek Kaspar wrote:
On 11/28/2017 11:24 AM, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 4.14.3 release. There are 193 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Thu Nov 30 10:05:26 UTC 2017. Anything received after that time might be too late.
The whole patch series can be found in one patch at: kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.14.3-rc1.gz or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y and the diffstat can be found below.
thanks,
greg k-h
... snip ...
Paolo Bonzini pbonzini@redhat.com kvm: vmx: Reinstate support for CPUs without virtual NMI
KVM works again on old Core2 CPU ... thanks, Z.
Nice, thanks for testing!
greg k-h