This is the start of the stable review cycle for the 5.2.2 release. There are 21 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat 20 Jul 2019 02:59:27 AM UTC. Anything received after that time might be too late.
The whole patch series can be found in one patch at:
	https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.2.2-rc1.g...
or in the git tree and branch at:
	git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.2.y
and the diffstat can be found below.
thanks,
greg k-h
------------- Pseudo-Shortlog of commits:
Greg Kroah-Hartman gregkh@linuxfoundation.org Linux 5.2.2-rc1
Jiri Slaby jslaby@suse.cz x86/entry/32: Fix ENDPROC of common_spurious
Haren Myneni haren@linux.vnet.ibm.com crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
Christophe Leroy christophe.leroy@c-s.fr crypto: talitos - fix hash on SEC1.
Christophe Leroy christophe.leroy@c-s.fr crypto: talitos - move struct talitos_edesc into talitos.h
Julian Wiedmann jwi@linux.ibm.com s390/qdio: don't touch the dsci in tiqdio_add_input_queues()
Julian Wiedmann jwi@linux.ibm.com s390/qdio: (re-)initialize tiqdio list entries
Heiko Carstens heiko.carstens@de.ibm.com s390: fix stfle zero padding
Philipp Rudo prudo@linux.ibm.com s390/ipl: Fix detection of has_secure attribute
Arnd Bergmann arnd@arndb.de ARC: hide unused function unw_hdr_alloc
Thomas Gleixner tglx@linutronix.de x86/irq: Seperate unused system vectors from spurious entry again
Thomas Gleixner tglx@linutronix.de x86/irq: Handle spurious interrupt after shutdown gracefully
Thomas Gleixner tglx@linutronix.de x86/ioapic: Implement irq_get_irqchip_state() callback
Thomas Gleixner tglx@linutronix.de genirq: Add optional hardware synchronization for shutdown
Thomas Gleixner tglx@linutronix.de genirq: Fix misleading synchronize_irq() documentation
Thomas Gleixner tglx@linutronix.de genirq: Delay deactivation in free_irq()
Sven Van Asbroeck thesven73@gmail.com firmware: improve LSM/IMA security behaviour
James Morse james.morse@arm.com drivers: base: cacheinfo: Ensure cpu hotplug work is done before Intel RDT
Masahiro Yamada yamada.masahiro@socionext.com nilfs2: do not use unexported cpu_to_le32()/le32_to_cpu() in uapi header
Cole Rogers colerogers@disroot.org Input: synaptics - enable SMBUS on T480 thinkpad trackpad
Konstantin Khlebnikov khlebnikov@yandex-team.ru e1000e: start network tx queue only when link is up
Konstantin Khlebnikov khlebnikov@yandex-team.ru Revert "e1000e: fix cyclic resets at link up with active tx"
-------------
Diffstat:
 Makefile                                   |  4 +-
 arch/arc/kernel/unwind.c                   |  9 ++-
 arch/s390/include/asm/facility.h           | 21 ++++---
 arch/s390/include/asm/sclp.h               |  1 -
 arch/s390/kernel/ipl.c                     |  7 +--
 arch/x86/entry/entry_32.S                  | 24 ++++++++
 arch/x86/entry/entry_64.S                  | 30 +++++++--
 arch/x86/include/asm/hw_irq.h              |  5 +-
 arch/x86/kernel/apic/apic.c                | 33 ++++----
 arch/x86/kernel/apic/io_apic.c             | 46 ++++++++++++++
 arch/x86/kernel/apic/vector.c              |  4 +-
 arch/x86/kernel/idt.c                      |  3 +-
 arch/x86/kernel/irq.c                      |  2 +-
 drivers/base/cacheinfo.c                   |  3 +-
 drivers/base/firmware_loader/fallback.c    |  2 +-
 drivers/crypto/nx/nx-842-powernv.c         |  8 ++-
 drivers/crypto/talitos.c                   | 99 +++++++++++++-----------
 drivers/crypto/talitos.h                   | 30 +++++++++
 drivers/input/mouse/synaptics.c            |  1 +
 drivers/net/ethernet/intel/e1000e/netdev.c | 21 ++++---
 drivers/s390/char/sclp_early.c             |  1 -
 drivers/s390/cio/qdio_setup.c              |  2 +
 drivers/s390/cio/qdio_thinint.c            |  5 +-
 include/linux/cpuhotplug.h                 |  1 +
 include/uapi/linux/nilfs2_ondisk.h         | 24 ++++----
 kernel/irq/autoprobe.c                     |  6 +-
 kernel/irq/chip.c                          |  6 ++
 kernel/irq/cpuhotplug.c                    |  2 +-
 kernel/irq/internals.h                     |  5 ++
 kernel/irq/manage.c                        | 90 ++++++++++++++++++++-------
 30 files changed, 342 insertions(+), 153 deletions(-)
From: Konstantin Khlebnikov khlebnikov@yandex-team.ru
commit caff422ea81e144842bc44bab408d85ac449377b upstream.
This reverts commit 0f9e980bf5ee1a97e2e401c846b2af989eb21c61.
That change caused false-positive warnings about a hardware hang:
e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
e1000e 0000:00:1f.6 eth0: Detected Hardware Unit Hang:
  TDH                  <0>
  TDT                  <1>
  next_to_use          <1>
  next_to_clean        <0>
buffer_info[next_to_clean]:
  time_stamp           <fffba7a7>
  next_to_watch        <0>
  jiffies              <fffbb140>
  next_to_watch.status <0>
MAC Status             <40080080>
PHY Status             <7949>
PHY 1000BASE-T Status  <0>
PHY Extended Status    <3000>
PCI Status             <10>
e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Besides the warning, everything works fine. The original issue will be fixed properly in the following patch.
Signed-off-by: Konstantin Khlebnikov khlebnikov@yandex-team.ru Reported-by: Joseph Yasi joe.yasi@gmail.com Link: https://bugzilla.kernel.org/show_bug.cgi?id=203175 Tested-by: Joseph Yasi joe.yasi@gmail.com Tested-by: Aaron Brown aaron.f.brown@intel.com Tested-by: Oleksandr Natalenko oleksandr@redhat.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/intel/e1000e/netdev.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-)
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -5308,13 +5308,8 @@ static void e1000_watchdog_task(struct w
 			/* 8000ES2LAN requires a Rx packet buffer work-around
 			 * on link down event; reset the controller to flush
 			 * the Rx packet buffer.
-			 *
-			 * If the link is lost the controller stops DMA, but
-			 * if there is queued Tx work it cannot be done. So
-			 * reset the controller to flush the Tx packet buffers.
 			 */
-			if ((adapter->flags & FLAG_RX_NEEDS_RESTART) ||
-			    e1000_desc_unused(tx_ring) + 1 < tx_ring->count)
+			if (adapter->flags & FLAG_RX_NEEDS_RESTART)
 				adapter->flags |= FLAG_RESTART_NOW;
 			else
 				pm_schedule_suspend(netdev->dev.parent,
@@ -5337,6 +5332,14 @@ link_up:
 	adapter->gotc_old = adapter->stats.gotc;
 	spin_unlock(&adapter->stats64_lock);
 
+	/* If the link is lost the controller stops DMA, but
+	 * if there is queued Tx work it cannot be done. So
+	 * reset the controller to flush the Tx packet buffers.
+	 */
+	if (!netif_carrier_ok(netdev) &&
+	    (e1000_desc_unused(tx_ring) + 1 < tx_ring->count))
+		adapter->flags |= FLAG_RESTART_NOW;
+
 	/* If reset is necessary, do it outside of interrupt context. */
 	if (adapter->flags & FLAG_RESTART_NOW) {
 		schedule_work(&adapter->reset_task);
From: Konstantin Khlebnikov khlebnikov@yandex-team.ru
commit d17ba0f616a08f597d9348c372d89b8c0405ccf3 upstream.
The driver does not want to keep packets in the Tx queue when the link is lost, but the present code only resets the NIC to flush them and does not prevent queuing new packets. Moreover, the reset sequence itself can generate new packets via netconsole, and the NIC then falls into an endless reset loop.
This patch wakes the Tx queue only when the NIC is ready to send packets.
This is the proper fix for the problem addressed by commit 0f9e980bf5ee ("e1000e: fix cyclic resets at link up with active tx").
Signed-off-by: Konstantin Khlebnikov khlebnikov@yandex-team.ru Suggested-by: Alexander Duyck alexander.duyck@gmail.com Tested-by: Joseph Yasi joe.yasi@gmail.com Tested-by: Aaron Brown aaron.f.brown@intel.com Tested-by: Oleksandr Natalenko oleksandr@redhat.com Signed-off-by: Jeff Kirsher jeffrey.t.kirsher@intel.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/net/ethernet/intel/e1000e/netdev.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -4208,7 +4208,7 @@ void e1000e_up(struct e1000_adapter *ada
 	e1000_configure_msix(adapter);
 	e1000_irq_enable(adapter);
 
-	netif_start_queue(adapter->netdev);
+	/* Tx queue started by watchdog timer when link is up */
 
 	e1000e_trigger_lsc(adapter);
 }
@@ -4606,6 +4606,7 @@ int e1000e_open(struct net_device *netde
 	pm_runtime_get_sync(&pdev->dev);
 
 	netif_carrier_off(netdev);
+	netif_stop_queue(netdev);
 
 	/* allocate transmit descriptors */
 	err = e1000e_setup_tx_resources(adapter->tx_ring);
@@ -4666,7 +4667,6 @@ int e1000e_open(struct net_device *netde
 	e1000_irq_enable(adapter);
 
 	adapter->tx_hang_recheck = false;
-	netif_start_queue(netdev);
 
 	hw->mac.get_link_status = true;
 	pm_runtime_put(&pdev->dev);
@@ -5288,6 +5288,7 @@ static void e1000_watchdog_task(struct w
 			if (phy->ops.cfg_on_link_up)
 				phy->ops.cfg_on_link_up(hw);
 
+			netif_wake_queue(netdev);
 			netif_carrier_on(netdev);
 
 			if (!test_bit(__E1000_DOWN, &adapter->state))
@@ -5301,6 +5302,7 @@ static void e1000_watchdog_task(struct w
 			/* Link status message must follow this format */
 			pr_info("%s NIC Link is Down\n", adapter->netdev->name);
 			netif_carrier_off(netdev);
+			netif_stop_queue(netdev);
 			if (!test_bit(__E1000_DOWN, &adapter->state))
 				mod_timer(&adapter->phy_info_timer,
 					  round_jiffies(jiffies + 2 * HZ));
From: Cole Rogers colerogers@disroot.org
commit abbe3acd7d72ab4633ade6bd24e8306b67e0add3 upstream.
Thinkpad T480 laptops had some touchpad features disabled, resulting in the loss of the pinch-to-Activities gesture in GNOME on Wayland and in other touch gestures being slower. This patch adds the touchpad of the T480 to the smbus_pnp_ids whitelist to enable the extra features. In my testing this does not break suspend (on Fedora, with Wayland and GNOME, using the rc-6 kernel), while also fixing the feature on a T480.
Signed-off-by: Cole Rogers colerogers@disroot.org Acked-by: Benjamin Tissoires benjamin.tissoires@redhat.com Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov dmitry.torokhov@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/input/mouse/synaptics.c | 1 + 1 file changed, 1 insertion(+)
--- a/drivers/input/mouse/synaptics.c
+++ b/drivers/input/mouse/synaptics.c
@@ -173,6 +173,7 @@ static const char * const smbus_pnp_ids[
 	"LEN0072", /* X1 Carbon Gen 5 (2017) - Elan/ALPS trackpoint */
 	"LEN0073", /* X1 Carbon G5 (Elantech) */
 	"LEN0092", /* X1 Carbon 6 */
+	"LEN0093", /* T480 */
 	"LEN0096", /* X280 */
 	"LEN0097", /* X280 -> ALPS trackpoint */
 	"LEN200f", /* T450s */
From: Masahiro Yamada yamada.masahiro@socionext.com
commit c32cc30c0544f13982ee0185d55f4910319b1a79 upstream.
cpu_to_le32/le32_to_cpu are defined in include/linux/byteorder/generic.h, which is not exported to user-space.
UAPI headers must use the ones prefixed with double-underscore.
Detected by compile-testing exported headers:
include/linux/nilfs2_ondisk.h: In function `nilfs_checkpoint_set_snapshot':
include/linux/nilfs2_ondisk.h:536:17: error: implicit declaration of function `cpu_to_le32' [-Werror=implicit-function-declaration]
  cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) | \
                 ^
include/linux/nilfs2_ondisk.h:552:1: note: in expansion of macro `NILFS_CHECKPOINT_FNS'
 NILFS_CHECKPOINT_FNS(SNAPSHOT, snapshot)
 ^~~~~~~~~~~~~~~~~~~~
include/linux/nilfs2_ondisk.h:536:29: error: implicit declaration of function `le32_to_cpu' [-Werror=implicit-function-declaration]
  cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) | \
                             ^
include/linux/nilfs2_ondisk.h:552:1: note: in expansion of macro `NILFS_CHECKPOINT_FNS'
 NILFS_CHECKPOINT_FNS(SNAPSHOT, snapshot)
 ^~~~~~~~~~~~~~~~~~~~
include/linux/nilfs2_ondisk.h: In function `nilfs_segment_usage_set_clean':
include/linux/nilfs2_ondisk.h:622:19: error: implicit declaration of function `cpu_to_le64' [-Werror=implicit-function-declaration]
  su->su_lastmod = cpu_to_le64(0);
                   ^~~~~~~~~~~
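For reference, the double-underscore helpers are exactly what user space gets from <asm/byteorder.h>; a minimal user-space check (a hypothetical test program, not part of this patch) builds and runs fine with them:

#include <stdio.h>
#include <asm/byteorder.h>

int main(void)
{
	/* __cpu_to_le32()/__le32_to_cpu() are exported to user space;
	 * the unprefixed cpu_to_le32()/le32_to_cpu() are kernel-only.
	 */
	__le32 on_disk = __cpu_to_le32(0x12345678u);

	printf("cpu 0x12345678 -> le32 0x%08x -> cpu 0x%08x\n",
	       (unsigned int)on_disk, __le32_to_cpu(on_disk));
	return 0;
}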
Link: http://lkml.kernel.org/r/20190605053006.14332-1-yamada.masahiro@socionext.co... Fixes: e63e88bc53ba ("nilfs2: move ioctl interface and disk layout to uapi separately") Signed-off-by: Masahiro Yamada yamada.masahiro@socionext.com Acked-by: Ryusuke Konishi konishi.ryusuke@gmail.com Cc: Arnd Bergmann arnd@arndb.de Cc: Greg KH gregkh@linuxfoundation.org Cc: Joe Perches joe@perches.com Cc: stable@vger.kernel.org [4.9+] Signed-off-by: Andrew Morton akpm@linux-foundation.org Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- include/uapi/linux/nilfs2_ondisk.h | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-)
--- a/include/uapi/linux/nilfs2_ondisk.h +++ b/include/uapi/linux/nilfs2_ondisk.h @@ -29,7 +29,7 @@
#include <linux/types.h> #include <linux/magic.h> - +#include <asm/byteorder.h>
#define NILFS_INODE_BMAP_SIZE 7
@@ -533,19 +533,19 @@ enum { static inline void \ nilfs_checkpoint_set_##name(struct nilfs_checkpoint *cp) \ { \ - cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) | \ - (1UL << NILFS_CHECKPOINT_##flag)); \ + cp->cp_flags = __cpu_to_le32(__le32_to_cpu(cp->cp_flags) | \ + (1UL << NILFS_CHECKPOINT_##flag)); \ } \ static inline void \ nilfs_checkpoint_clear_##name(struct nilfs_checkpoint *cp) \ { \ - cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) & \ + cp->cp_flags = __cpu_to_le32(__le32_to_cpu(cp->cp_flags) & \ ~(1UL << NILFS_CHECKPOINT_##flag)); \ } \ static inline int \ nilfs_checkpoint_##name(const struct nilfs_checkpoint *cp) \ { \ - return !!(le32_to_cpu(cp->cp_flags) & \ + return !!(__le32_to_cpu(cp->cp_flags) & \ (1UL << NILFS_CHECKPOINT_##flag)); \ }
@@ -595,20 +595,20 @@ enum { static inline void \ nilfs_segment_usage_set_##name(struct nilfs_segment_usage *su) \ { \ - su->su_flags = cpu_to_le32(le32_to_cpu(su->su_flags) | \ + su->su_flags = __cpu_to_le32(__le32_to_cpu(su->su_flags) | \ (1UL << NILFS_SEGMENT_USAGE_##flag));\ } \ static inline void \ nilfs_segment_usage_clear_##name(struct nilfs_segment_usage *su) \ { \ su->su_flags = \ - cpu_to_le32(le32_to_cpu(su->su_flags) & \ + __cpu_to_le32(__le32_to_cpu(su->su_flags) & \ ~(1UL << NILFS_SEGMENT_USAGE_##flag)); \ } \ static inline int \ nilfs_segment_usage_##name(const struct nilfs_segment_usage *su) \ { \ - return !!(le32_to_cpu(su->su_flags) & \ + return !!(__le32_to_cpu(su->su_flags) & \ (1UL << NILFS_SEGMENT_USAGE_##flag)); \ }
@@ -619,15 +619,15 @@ NILFS_SEGMENT_USAGE_FNS(ERROR, error) static inline void nilfs_segment_usage_set_clean(struct nilfs_segment_usage *su) { - su->su_lastmod = cpu_to_le64(0); - su->su_nblocks = cpu_to_le32(0); - su->su_flags = cpu_to_le32(0); + su->su_lastmod = __cpu_to_le64(0); + su->su_nblocks = __cpu_to_le32(0); + su->su_flags = __cpu_to_le32(0); }
static inline int nilfs_segment_usage_clean(const struct nilfs_segment_usage *su) { - return !le32_to_cpu(su->su_flags); + return !__le32_to_cpu(su->su_flags); }
/**
From: James Morse james.morse@arm.com
commit 83b44fe343b5abfcb1b2261289bd0cfcfcfd60a8 upstream.
The cacheinfo structures are alloced/freed by cpu online/offline callbacks. Originally these were only used by sysfs to expose the cache topology to user space. Without any in-kernel dependencies CPUHP_AP_ONLINE_DYN was an appropriate choice.
resctrl has started using these structures to identify CPUs that share a cache. It updates its 'domain' structures from cpu online/offline callbacks. These depend on the cacheinfo structures (resctrl_online_cpu()->domain_add_cpu()->get_cache_id()-> get_cpu_cacheinfo()). These also run as CPUHP_AP_ONLINE_DYN.
Now that there is an in-kernel dependency, move the cacheinfo work earlier so we know it's done before resctrl's CPUHP_AP_ONLINE_DYN work runs.
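For context, hotplug callbacks are registered as in the sketch below (a hypothetical module, not part of this patch); the ordering guarantee comes purely from where the chosen state sits in enum cpuhp_state, which is why cacheinfo now gets its own fixed state ahead of CPUHP_AP_ONLINE_DYN:

#include <linux/cpuhotplug.h>
#include <linux/module.h>

static int demo_cpu_online(unsigned int cpu)
{
	/* Runs after all fixed AP states, including the new
	 * CPUHP_AP_BASE_CACHEINFO_ONLINE, so get_cpu_cacheinfo()
	 * data is already populated here.
	 */
	pr_info("demo: cpu %u online\n", cpu);
	return 0;
}

static int demo_cpu_pre_down(unsigned int cpu)
{
	pr_info("demo: cpu %u going down\n", cpu);
	return 0;
}

static int demo_state;

static int __init demo_init(void)
{
	int ret;

	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "demo:online",
				demo_cpu_online, demo_cpu_pre_down);
	if (ret < 0)
		return ret;
	demo_state = ret;	/* dynamic states return the allocated state */
	return 0;
}

static void __exit demo_exit(void)
{
	cpuhp_remove_state(demo_state);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");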
Fixes: 2264d9c74dda1 ("x86/intel_rdt: Build structures for each resource based on cache topology") Cc: stable@vger.kernel.org Cc: Fenghua Yu fenghua.yu@intel.com Cc: Reinette Chatre reinette.chatre@intel.com Signed-off-by: James Morse james.morse@arm.com Link: https://lore.kernel.org/r/20190624173656.202407-1-james.morse@arm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/base/cacheinfo.c | 3 ++- include/linux/cpuhotplug.h | 1 + 2 files changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/base/cacheinfo.c +++ b/drivers/base/cacheinfo.c @@ -655,7 +655,8 @@ static int cacheinfo_cpu_pre_down(unsign
static int __init cacheinfo_sysfs_init(void) { - return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "base/cacheinfo:online", + return cpuhp_setup_state(CPUHP_AP_BASE_CACHEINFO_ONLINE, + "base/cacheinfo:online", cacheinfo_cpu_online, cacheinfo_cpu_pre_down); } device_initcall(cacheinfo_sysfs_init); --- a/include/linux/cpuhotplug.h +++ b/include/linux/cpuhotplug.h @@ -176,6 +176,7 @@ enum cpuhp_state { CPUHP_AP_WATCHDOG_ONLINE, CPUHP_AP_WORKQUEUE_ONLINE, CPUHP_AP_RCUTREE_ONLINE, + CPUHP_AP_BASE_CACHEINFO_ONLINE, CPUHP_AP_ONLINE_DYN, CPUHP_AP_ONLINE_DYN_END = CPUHP_AP_ONLINE_DYN + 30, CPUHP_AP_X86_HPET_ONLINE,
From: Sven Van Asbroeck thesven73@gmail.com
commit 2472d64af2d3561954e2f05365a67692bb852f2a upstream.
The firmware loader queries if LSM/IMA permits it to load firmware via the sysfs fallback. Unfortunately, the code does the opposite: it expressly permits sysfs fw loading if security_kernel_load_data(LOADING_FIRMWARE) returns -EACCES. This happens because the zero-on-success, negative-on-error return value is returned directly as a bool, so success (0) evaluates to false and an error such as -EACCES evaluates to true.
Fix the return value handling so we get the correct behaviour.
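The failure mode can be demonstrated in plain C (hypothetical names; mock_security_check() merely stands in for security_kernel_load_data()): returning the raw error code from a function that returns bool inverts the decision:

#include <stdbool.h>
#include <stdio.h>
#include <errno.h>

static int mock_security_check(bool deny)
{
	return deny ? -EACCES : 0;	/* 0 == permitted, <0 == denied */
}

static bool buggy_allowed(bool deny)
{
	return mock_security_check(deny);	/* -EACCES becomes true! */
}

static bool fixed_allowed(bool deny)
{
	if (mock_security_check(deny) < 0)
		return false;			/* a denial really denies */
	return true;
}

int main(void)
{
	printf("buggy: denied request allowed? %d\n", buggy_allowed(true));
	printf("fixed: denied request allowed? %d\n", fixed_allowed(true));
	return 0;
}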
Fixes: 6e852651f28e ("firmware: add call to LSM hook before firmware sysfs fallback") Cc: Stable stable@vger.kernel.org Cc: Mimi Zohar zohar@linux.vnet.ibm.com Cc: Kees Cook keescook@chromium.org To: Luis Chamberlain mcgrof@kernel.org Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org Cc: "Rafael J. Wysocki" rafael@kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Sven Van Asbroeck TheSven73@gmail.com Reviewed-by: Mimi Zohar zohar@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/base/firmware_loader/fallback.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/base/firmware_loader/fallback.c
+++ b/drivers/base/firmware_loader/fallback.c
@@ -659,7 +659,7 @@ static bool fw_run_sysfs_fallback(enum f
 	/* Also permit LSMs and IMA to fail firmware sysfs fallback */
 	ret = security_kernel_load_data(LOADING_FIRMWARE);
 	if (ret < 0)
-		return ret;
+		return false;
 
 	return fw_force_sysfs_fallback(opt_flags);
 }
From: Thomas Gleixner tglx@linutronix.de
commit 4001d8e8762f57d418b66e4e668601791900a1dd upstream.
When interrupts are shutdown, they are immediately deactivated in the irqdomain hierarchy. While this looks obviously correct there is a subtle issue:
There might be an interrupt in flight when free_irq() is invoking the shutdown. This is properly handled at the irq descriptor / primary handler level, but the deactivation might completely disable resources which are required to acknowledge the interrupt.
Split the shutdown code and deactivate the interrupt after synchronization in free_irq(). Fixup all other usage sites where this is not an issue to invoke the combined shutdown_and_deactivate() function instead.
This still might be an issue if the servicing of the interrupt in flight is delayed on a remote CPU beyond the invocation of synchronize_irq(), but that cannot be handled at that level and needs to be handled in the synchronize_irq() context.
Fixes: f8264e34965a ("irqdomain: Introduce new interfaces to support hierarchy irqdomains") Reported-by: Robert Hodaszi Robert.Hodaszi@digi.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Reviewed-by: Marc Zyngier marc.zyngier@arm.com Link: https://lkml.kernel.org/r/20190628111440.098196390@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/irq/autoprobe.c | 6 +++--- kernel/irq/chip.c | 6 ++++++ kernel/irq/cpuhotplug.c | 2 +- kernel/irq/internals.h | 1 + kernel/irq/manage.c | 12 +++++++++++- 5 files changed, 22 insertions(+), 5 deletions(-)
--- a/kernel/irq/autoprobe.c +++ b/kernel/irq/autoprobe.c @@ -90,7 +90,7 @@ unsigned long probe_irq_on(void) /* It triggered already - consider it spurious. */ if (!(desc->istate & IRQS_WAITING)) { desc->istate &= ~IRQS_AUTODETECT; - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); } else if (i < 32) mask |= 1 << i; @@ -127,7 +127,7 @@ unsigned int probe_irq_mask(unsigned lon mask |= 1 << i;
desc->istate &= ~IRQS_AUTODETECT; - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); } raw_spin_unlock_irq(&desc->lock); } @@ -169,7 +169,7 @@ int probe_irq_off(unsigned long val) nr_of_irqs++; } desc->istate &= ~IRQS_AUTODETECT; - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); } raw_spin_unlock_irq(&desc->lock); } --- a/kernel/irq/chip.c +++ b/kernel/irq/chip.c @@ -314,6 +314,12 @@ void irq_shutdown(struct irq_desc *desc) } irq_state_clr_started(desc); } +} + + +void irq_shutdown_and_deactivate(struct irq_desc *desc) +{ + irq_shutdown(desc); /* * This must be called even if the interrupt was never started up, * because the activation can happen before the interrupt is --- a/kernel/irq/cpuhotplug.c +++ b/kernel/irq/cpuhotplug.c @@ -116,7 +116,7 @@ static bool migrate_one_irq(struct irq_d */ if (irqd_affinity_is_managed(d)) { irqd_set_managed_shutdown(d); - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); return false; } affinity = cpu_online_mask; --- a/kernel/irq/internals.h +++ b/kernel/irq/internals.h @@ -82,6 +82,7 @@ extern int irq_activate_and_startup(stru extern int irq_startup(struct irq_desc *desc, bool resend, bool force);
extern void irq_shutdown(struct irq_desc *desc); +extern void irq_shutdown_and_deactivate(struct irq_desc *desc); extern void irq_enable(struct irq_desc *desc); extern void irq_disable(struct irq_desc *desc); extern void irq_percpu_enable(struct irq_desc *desc, unsigned int cpu); --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -13,6 +13,7 @@ #include <linux/module.h> #include <linux/random.h> #include <linux/interrupt.h> +#include <linux/irqdomain.h> #include <linux/slab.h> #include <linux/sched.h> #include <linux/sched/rt.h> @@ -1699,6 +1700,7 @@ static struct irqaction *__free_irq(stru /* If this was the last handler, shut down the IRQ line: */ if (!desc->action) { irq_settings_clr_disable_unlazy(desc); + /* Only shutdown. Deactivate after synchronize_hardirq() */ irq_shutdown(desc); }
@@ -1768,6 +1770,14 @@ static struct irqaction *__free_irq(stru * require it to deallocate resources over the slow bus. */ chip_bus_lock(desc); + /* + * There is no interrupt on the fly anymore. Deactivate it + * completely. + */ + raw_spin_lock_irqsave(&desc->lock, flags); + irq_domain_deactivate_irq(&desc->irq_data); + raw_spin_unlock_irqrestore(&desc->lock, flags); + irq_release_resources(desc); chip_bus_sync_unlock(desc); irq_remove_timings(desc); @@ -1855,7 +1865,7 @@ static const void *__cleanup_nmi(unsigne }
irq_settings_clr_disable_unlazy(desc); - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc);
irq_release_resources(desc);
From: Thomas Gleixner tglx@linutronix.de
commit 1d21f2af8571c6a6a44e7c1911780614847b0253 upstream.
The function might sleep, so it cannot be called from interrupt context. Not even with care.
Signed-off-by: Thomas Gleixner tglx@linutronix.de Cc: Marc Zyngier marc.zyngier@arm.com Link: https://lkml.kernel.org/r/20190628111440.189241552@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/irq/manage.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
--- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -96,7 +96,8 @@ EXPORT_SYMBOL(synchronize_hardirq); * to complete before returning. If you use this function while * holding a resource the IRQ handler may need you will deadlock. * - * This function may be called - with care - from IRQ context. + * Can only be called from preemptible code as it might sleep when + * an interrupt thread is associated to @irq. */ void synchronize_irq(unsigned int irq) {
From: Thomas Gleixner tglx@linutronix.de
commit 62e0468650c30f0298822c580f382b16328119f6 upstream.
free_irq() ensures that no hardware interrupt handler is executing on a different CPU before actually releasing resources and deactivating the interrupt completely in a domain hierarchy.
But that does not catch the case where the interrupt is in flight at the hardware level but not yet serviced by the target CPU. That creates an interesting race condition:
          CPU 0                  CPU 1               IRQ CHIP

                                                     interrupt is raised
                                                     sent to CPU1
                         Unable to handle
                         immediately (interrupts
                         off, deep idle delay)
   mask()
   ...
   free()
     shutdown()
     synchronize_irq()
     release_resources()
                         do_IRQ()
                           -> resources are not available
That might be harmless and just trigger a spurious interrupt warning, but some interrupt chips might get into a wedged state.
Utilize the existing irq_get_irqchip_state() callback for the synchronization in free_irq().
synchronize_hardirq() is not using this mechanism as it might actually deadlock under certain conditions, e.g. when called with interrupts disabled and the target CPU is the one on which the synchronization is invoked. synchronize_irq() uses it because that function cannot be called from non-preemptible contexts as it might sleep.
No functional change intended and according to Marc the existing GIC implementations where the driver supports the callback should be able to cope with that core change. Famous last words.
Fixes: 464d12309e1b ("x86/vector: Switch IOAPIC to global reservation mode") Reported-by: Robert Hodaszi Robert.Hodaszi@digi.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Reviewed-by: Marc Zyngier marc.zyngier@arm.com Tested-by: Marc Zyngier marc.zyngier@arm.com Link: https://lkml.kernel.org/r/20190628111440.279463375@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- kernel/irq/internals.h | 4 ++ kernel/irq/manage.c | 75 ++++++++++++++++++++++++++++++++++++------------- 2 files changed, 60 insertions(+), 19 deletions(-)
--- a/kernel/irq/internals.h +++ b/kernel/irq/internals.h @@ -97,6 +97,10 @@ static inline void irq_mark_irq(unsigned extern void irq_mark_irq(unsigned int irq); #endif
+extern int __irq_get_irqchip_state(struct irq_data *data, + enum irqchip_irq_state which, + bool *state); + extern void init_kstat_irqs(struct irq_desc *desc, int node, int nr);
irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags); --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -35,8 +35,9 @@ static int __init setup_forced_irqthread early_param("threadirqs", setup_forced_irqthreads); #endif
-static void __synchronize_hardirq(struct irq_desc *desc) +static void __synchronize_hardirq(struct irq_desc *desc, bool sync_chip) { + struct irq_data *irqd = irq_desc_get_irq_data(desc); bool inprogress;
do { @@ -52,6 +53,20 @@ static void __synchronize_hardirq(struct /* Ok, that indicated we're done: double-check carefully. */ raw_spin_lock_irqsave(&desc->lock, flags); inprogress = irqd_irq_inprogress(&desc->irq_data); + + /* + * If requested and supported, check at the chip whether it + * is in flight at the hardware level, i.e. already pending + * in a CPU and waiting for service and acknowledge. + */ + if (!inprogress && sync_chip) { + /* + * Ignore the return code. inprogress is only updated + * when the chip supports it. + */ + __irq_get_irqchip_state(irqd, IRQCHIP_STATE_ACTIVE, + &inprogress); + } raw_spin_unlock_irqrestore(&desc->lock, flags);
/* Oops, that failed? */ @@ -74,13 +89,18 @@ static void __synchronize_hardirq(struct * Returns: false if a threaded handler is active. * * This function may be called - with care - from IRQ context. + * + * It does not check whether there is an interrupt in flight at the + * hardware level, but not serviced yet, as this might deadlock when + * called with interrupts disabled and the target CPU of the interrupt + * is the current CPU. */ bool synchronize_hardirq(unsigned int irq) { struct irq_desc *desc = irq_to_desc(irq);
if (desc) { - __synchronize_hardirq(desc); + __synchronize_hardirq(desc, false); return !atomic_read(&desc->threads_active); }
@@ -98,13 +118,17 @@ EXPORT_SYMBOL(synchronize_hardirq); * * Can only be called from preemptible code as it might sleep when * an interrupt thread is associated to @irq. + * + * It optionally makes sure (when the irq chip supports that method) + * that the interrupt is not pending in any CPU and waiting for + * service. */ void synchronize_irq(unsigned int irq) { struct irq_desc *desc = irq_to_desc(irq);
if (desc) { - __synchronize_hardirq(desc); + __synchronize_hardirq(desc, true); /* * We made sure that no hardirq handler is * running. Now verify that no threaded handlers are @@ -1730,8 +1754,12 @@ static struct irqaction *__free_irq(stru
unregister_handler_proc(irq, action);
- /* Make sure it's not being used on another CPU: */ - synchronize_hardirq(irq); + /* + * Make sure it's not being used on another CPU and if the chip + * supports it also make sure that there is no (not yet serviced) + * interrupt in flight at the hardware level. + */ + __synchronize_hardirq(desc, true);
#ifdef CONFIG_DEBUG_SHIRQ /* @@ -2589,6 +2617,28 @@ out: irq_put_desc_unlock(desc, flags); }
+int __irq_get_irqchip_state(struct irq_data *data, enum irqchip_irq_state which, + bool *state) +{ + struct irq_chip *chip; + int err = -EINVAL; + + do { + chip = irq_data_get_irq_chip(data); + if (chip->irq_get_irqchip_state) + break; +#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY + data = data->parent_data; +#else + data = NULL; +#endif + } while (data); + + if (data) + err = chip->irq_get_irqchip_state(data, which, state); + return err; +} + /** * irq_get_irqchip_state - returns the irqchip state of a interrupt. * @irq: Interrupt line that is forwarded to a VM @@ -2607,7 +2657,6 @@ int irq_get_irqchip_state(unsigned int i { struct irq_desc *desc; struct irq_data *data; - struct irq_chip *chip; unsigned long flags; int err = -EINVAL;
@@ -2617,19 +2666,7 @@ int irq_get_irqchip_state(unsigned int i
data = irq_desc_get_irq_data(desc);
- do { - chip = irq_data_get_irq_chip(data); - if (chip->irq_get_irqchip_state) - break; -#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY - data = data->parent_data; -#else - data = NULL; -#endif - } while (data); - - if (data) - err = chip->irq_get_irqchip_state(data, which, state); + err = __irq_get_irqchip_state(data, which, state);
irq_put_desc_busunlock(desc, flags); return err;
From: Thomas Gleixner tglx@linutronix.de
commit dfe0cf8b51b07e56ded571e3de0a4a9382517231 upstream.
When an interrupt is shut down in free_irq() there might be an inflight interrupt pending in the IO-APIC remote IRR which is not yet serviced. That means the interrupt has been sent to the target CPU's local APIC, but the target CPU is in a state which delays the servicing.
So free_irq() would proceed to free resources and to clear the vector because synchronize_hardirq() does not see an interrupt handler in progress.
That can trigger a spurious interrupt warning, which is harmless and just confuses users, but it also can leave the remote IRR in a stale state because once the handler is invoked the interrupt resources might be freed already and therefore acknowledgement is not possible anymore.
Implement the irq_get_irqchip_state() callback for the IO-APIC irq chip. The callback is invoked from free_irq() via __synchronize_hardirq(). Check the remote IRR bit of the interrupt and return 'in flight' if it is set and the interrupt is configured in level mode. For edge mode the remote IRR has no meaning.
As this is only meaningful for level triggered interrupts, this won't cure the potential spurious interrupt warning for edge triggered interrupts, but the edge trigger case does not result in stale hardware state. This has to be addressed at the vector/interrupt entry level separately.
Fixes: 464d12309e1b ("x86/vector: Switch IOAPIC to global reservation mode") Reported-by: Robert Hodaszi Robert.Hodaszi@digi.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Cc: Marc Zyngier marc.zyngier@arm.com Link: https://lkml.kernel.org/r/20190628111440.370295517@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/kernel/apic/io_apic.c | 46 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 46 insertions(+)
--- a/arch/x86/kernel/apic/io_apic.c +++ b/arch/x86/kernel/apic/io_apic.c @@ -1893,6 +1893,50 @@ static int ioapic_set_affinity(struct ir return ret; }
+/* + * Interrupt shutdown masks the ioapic pin, but the interrupt might already + * be in flight, but not yet serviced by the target CPU. That means + * __synchronize_hardirq() would return and claim that everything is calmed + * down. So free_irq() would proceed and deactivate the interrupt and free + * resources. + * + * Once the target CPU comes around to service it it will find a cleared + * vector and complain. While the spurious interrupt is harmless, the full + * release of resources might prevent the interrupt from being acknowledged + * which keeps the hardware in a weird state. + * + * Verify that the corresponding Remote-IRR bits are clear. + */ +static int ioapic_irq_get_chip_state(struct irq_data *irqd, + enum irqchip_irq_state which, + bool *state) +{ + struct mp_chip_data *mcd = irqd->chip_data; + struct IO_APIC_route_entry rentry; + struct irq_pin_list *p; + + if (which != IRQCHIP_STATE_ACTIVE) + return -EINVAL; + + *state = false; + raw_spin_lock(&ioapic_lock); + for_each_irq_pin(p, mcd->irq_2_pin) { + rentry = __ioapic_read_entry(p->apic, p->pin); + /* + * The remote IRR is only valid in level trigger mode. It's + * meaning is undefined for edge triggered interrupts and + * irrelevant because the IO-APIC treats them as fire and + * forget. + */ + if (rentry.irr && rentry.trigger) { + *state = true; + break; + } + } + raw_spin_unlock(&ioapic_lock); + return 0; +} + static struct irq_chip ioapic_chip __read_mostly = { .name = "IO-APIC", .irq_startup = startup_ioapic_irq, @@ -1902,6 +1946,7 @@ static struct irq_chip ioapic_chip __rea .irq_eoi = ioapic_ack_level, .irq_set_affinity = ioapic_set_affinity, .irq_retrigger = irq_chip_retrigger_hierarchy, + .irq_get_irqchip_state = ioapic_irq_get_chip_state, .flags = IRQCHIP_SKIP_SET_WAKE, };
@@ -1914,6 +1959,7 @@ static struct irq_chip ioapic_ir_chip __ .irq_eoi = ioapic_ir_ack_level, .irq_set_affinity = ioapic_set_affinity, .irq_retrigger = irq_chip_retrigger_hierarchy, + .irq_get_irqchip_state = ioapic_irq_get_chip_state, .flags = IRQCHIP_SKIP_SET_WAKE, };
From: Thomas Gleixner tglx@linutronix.de
commit b7107a67f0d125459fe41f86e8079afd1a5e0b15 upstream.
Since the rework of the vector management, warnings about spurious interrupts have been reported. Robert provided some more information and did an initial analysis. The following situation leads to these warnings:
          CPU 0                  CPU 1               IO_APIC

                                                     interrupt is raised
                                                     sent to CPU1
                         Unable to handle
                         immediately (interrupts
                         off, deep idle delay)
   mask()
   ...
   free()
     shutdown()
     synchronize_irq()
     clear_vector()
                         do_IRQ()
                           -> vector is clear
Before the rework the vector entries of legacy interrupts were statically assigned and occupied precious vector space while most of them were unused. Due to that the above situation was handled silently because the vector was handled and the core handler of the assigned interrupt descriptor noticed that it is shut down and returned.
While this has been usually observed with legacy interrupts, this situation is not limited to them. Any other interrupt source, e.g. MSI, can cause the same issue.
After adding proper synchronization for level triggered interrupts, this can only happen for edge triggered interrupts where the IO-APIC obviously cannot provide information about interrupts in flight.
While the spurious warning is actually harmless in this case it worries users and driver developers.
Handle it gracefully by marking the vector entry as VECTOR_SHUTDOWN instead of VECTOR_UNUSED when the vector is freed up.
If the above late handling happens, the spurious detector will not complain and will switch the entry to VECTOR_UNUSED. Any subsequent spurious interrupt on that line will trigger the spurious warning as before.
Fixes: 464d12309e1b ("x86/vector: Switch IOAPIC to global reservation mode") Reported-by: Robert Hodaszi Robert.Hodaszi@digi.com Signed-off-by: Thomas Gleixner tglx@linutronix.de- Tested-by: Robert Hodaszi Robert.Hodaszi@digi.com Cc: Marc Zyngier marc.zyngier@arm.com Link: https://lkml.kernel.org/r/20190628111440.459647741@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/include/asm/hw_irq.h | 3 ++- arch/x86/kernel/apic/vector.c | 4 ++-- arch/x86/kernel/irq.c | 2 +- 3 files changed, 5 insertions(+), 4 deletions(-)
--- a/arch/x86/include/asm/hw_irq.h +++ b/arch/x86/include/asm/hw_irq.h @@ -151,7 +151,8 @@ extern char irq_entries_start[]; #endif
#define VECTOR_UNUSED NULL -#define VECTOR_RETRIGGERED ((void *)~0UL) +#define VECTOR_SHUTDOWN ((void *)~0UL) +#define VECTOR_RETRIGGERED ((void *)~1UL)
typedef struct irq_desc* vector_irq_t[NR_VECTORS]; DECLARE_PER_CPU(vector_irq_t, vector_irq); --- a/arch/x86/kernel/apic/vector.c +++ b/arch/x86/kernel/apic/vector.c @@ -340,7 +340,7 @@ static void clear_irq_vector(struct irq_ trace_vector_clear(irqd->irq, vector, apicd->cpu, apicd->prev_vector, apicd->prev_cpu);
- per_cpu(vector_irq, apicd->cpu)[vector] = VECTOR_UNUSED; + per_cpu(vector_irq, apicd->cpu)[vector] = VECTOR_SHUTDOWN; irq_matrix_free(vector_matrix, apicd->cpu, vector, managed); apicd->vector = 0;
@@ -349,7 +349,7 @@ static void clear_irq_vector(struct irq_ if (!vector) return;
- per_cpu(vector_irq, apicd->prev_cpu)[vector] = VECTOR_UNUSED; + per_cpu(vector_irq, apicd->prev_cpu)[vector] = VECTOR_SHUTDOWN; irq_matrix_free(vector_matrix, apicd->prev_cpu, vector, managed); apicd->prev_vector = 0; apicd->move_in_progress = 0; --- a/arch/x86/kernel/irq.c +++ b/arch/x86/kernel/irq.c @@ -247,7 +247,7 @@ __visible unsigned int __irq_entry do_IR if (!handle_irq(desc, regs)) { ack_APIC_irq();
- if (desc != VECTOR_RETRIGGERED) { + if (desc != VECTOR_RETRIGGERED && desc != VECTOR_SHUTDOWN) { pr_emerg_ratelimited("%s: %d.%d No irq handler for vector\n", __func__, smp_processor_id(), vector);
From: Thomas Gleixner tglx@linutronix.de
commit f8a8fe61fec8006575699559ead88b0b833d5cad upstream.
Quite some time ago the interrupt entry stubs for unused vectors in the system vector range got removed and directly mapped to the spurious interrupt vector entry point.
Sounds reasonable, but it's subtly broken. The spurious interrupt vector entry point pushes vector number 0xFF on the stack which makes the whole logic in __smp_spurious_interrupt() pointless.
As a consequence any spurious interrupt which comes from a vector != 0xFF is treated as a real spurious interrupt (vector 0xFF) and not acknowledged. That subsequently stalls all interrupt vectors of equal and lower priority, which brings the system to a grinding halt.
This can happen because even on 64-bit the system vector space is not guaranteed to be fully populated. A full compile time handling of the unused vectors is not possible because quite some of them are conditionally populated at runtime.
Bring the entry stubs back, which wastes 160 bytes if all stubs are unused, but gains the proper handling back. There is no point to selectively spare some of the stubs which are known at compile time as the required code in the IDT management would be way larger and convoluted.
Do not route the spurious entries through common_interrupt and do_IRQ() as the original code did. Route it to smp_spurious_interrupt() which evaluates the vector number and acts accordingly now that the real vector numbers are handed in.
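The stubs push (~vector + 0x80) and the common entry path adds -0x80 back before calling smp_spurious_interrupt() (see the diff below). A quick user-space check of that arithmetic (illustrative only, not kernel code) shows the pushed value always stays in signed byte range and round-trips to the original vector:

#include <stdio.h>

int main(void)
{
	for (int vector = 0; vector < 256; vector++) {
		int encoded = ~vector + 0x80;	/* pushed by the stub: 127 - vector */
		int adjusted = encoded - 0x80;	/* addq $-0x80, (%rsp) -> ~vector   */
		unsigned int recovered = ~adjusted & 0xff;

		if (encoded < -128 || encoded > 127 ||
		    recovered != (unsigned int)vector) {
			printf("encoding broken for vector 0x%02x\n", vector);
			return 1;
		}
	}
	printf("all 256 vectors stay in signed byte range and round-trip\n");
	return 0;
}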
Fixup the pr_warn so the actual spurious vector (0xff) is clearly distinguished from the other vectors and also note for the vectored case whether it was pending in the ISR or not.
 "Spurious APIC interrupt (vector 0xFF) on CPU#0, should never happen."
 "Spurious interrupt vector 0xed on CPU#1. Acked."
 "Spurious interrupt vector 0xee on CPU#1. Not pending!."
Fixes: 2414e021ac8d ("x86: Avoid building unused IRQ entry stubs") Reported-by: Jan Kiszka jan.kiszka@siemens.com Signed-off-by: Thomas Gleixner tglx@linutronix.de Cc: Marc Zyngier marc.zyngier@arm.com Cc: Jan Beulich jbeulich@suse.com Link: https://lkml.kernel.org/r/20190628111440.550568228@linutronix.de Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/x86/entry/entry_32.S | 24 ++++++++++++++++++++++++ arch/x86/entry/entry_64.S | 30 ++++++++++++++++++++++++++---- arch/x86/include/asm/hw_irq.h | 2 ++ arch/x86/kernel/apic/apic.c | 33 ++++++++++++++++++++++----------- arch/x86/kernel/idt.c | 3 ++- 5 files changed, 76 insertions(+), 16 deletions(-)
--- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -1104,6 +1104,30 @@ ENTRY(irq_entries_start) .endr END(irq_entries_start)
+#ifdef CONFIG_X86_LOCAL_APIC + .align 8 +ENTRY(spurious_entries_start) + vector=FIRST_SYSTEM_VECTOR + .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR) + pushl $(~vector+0x80) /* Note: always in signed byte range */ + vector=vector+1 + jmp common_spurious + .align 8 + .endr +END(spurious_entries_start) + +common_spurious: + ASM_CLAC + addl $-0x80, (%esp) /* Adjust vector into the [-256, -1] range */ + SAVE_ALL switch_stacks=1 + ENCODE_FRAME_POINTER + TRACE_IRQS_OFF + movl %esp, %eax + call smp_spurious_interrupt + jmp ret_from_intr +ENDPROC(common_interrupt) +#endif + /* * the CPU automatically disables interrupts when executing an IRQ vector, * so IRQ-flags tracing has to follow that: --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -375,6 +375,18 @@ ENTRY(irq_entries_start) .endr END(irq_entries_start)
+ .align 8 +ENTRY(spurious_entries_start) + vector=FIRST_SYSTEM_VECTOR + .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR) + UNWIND_HINT_IRET_REGS + pushq $(~vector+0x80) /* Note: always in signed byte range */ + jmp common_spurious + .align 8 + vector=vector+1 + .endr +END(spurious_entries_start) + .macro DEBUG_ENTRY_ASSERT_IRQS_OFF #ifdef CONFIG_DEBUG_ENTRY pushq %rax @@ -571,10 +583,20 @@ _ASM_NOKPROBE(interrupt_entry)
/* Interrupt entry/exit. */
- /* - * The interrupt stubs push (~vector+0x80) onto the stack and - * then jump to common_interrupt. - */ +/* + * The interrupt stubs push (~vector+0x80) onto the stack and + * then jump to common_spurious/interrupt. + */ +common_spurious: + addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */ + call interrupt_entry + UNWIND_HINT_REGS indirect=1 + call smp_spurious_interrupt /* rdi points to pt_regs */ + jmp ret_from_intr +END(common_spurious) +_ASM_NOKPROBE(common_spurious) + +/* common_interrupt is a hotpath. Align it */ .p2align CONFIG_X86_L1_CACHE_SHIFT common_interrupt: addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */ --- a/arch/x86/include/asm/hw_irq.h +++ b/arch/x86/include/asm/hw_irq.h @@ -150,6 +150,8 @@ extern char irq_entries_start[]; #define trace_irq_entries_start irq_entries_start #endif
+extern char spurious_entries_start[]; + #define VECTOR_UNUSED NULL #define VECTOR_SHUTDOWN ((void *)~0UL) #define VECTOR_RETRIGGERED ((void *)~1UL) --- a/arch/x86/kernel/apic/apic.c +++ b/arch/x86/kernel/apic/apic.c @@ -2041,21 +2041,32 @@ __visible void __irq_entry smp_spurious_ entering_irq(); trace_spurious_apic_entry(vector);
+ inc_irq_stat(irq_spurious_count); + + /* + * If this is a spurious interrupt then do not acknowledge + */ + if (vector == SPURIOUS_APIC_VECTOR) { + /* See SDM vol 3 */ + pr_info("Spurious APIC interrupt (vector 0xFF) on CPU#%d, should never happen.\n", + smp_processor_id()); + goto out; + } + /* - * Check if this really is a spurious interrupt and ACK it - * if it is a vectored one. Just in case... - * Spurious interrupts should not be ACKed. + * If it is a vectored one, verify it's set in the ISR. If set, + * acknowledge it. */ v = apic_read(APIC_ISR + ((vector & ~0x1f) >> 1)); - if (v & (1 << (vector & 0x1f))) + if (v & (1 << (vector & 0x1f))) { + pr_info("Spurious interrupt (vector 0x%02x) on CPU#%d. Acked\n", + vector, smp_processor_id()); ack_APIC_irq(); - - inc_irq_stat(irq_spurious_count); - - /* see sw-dev-man vol 3, chapter 7.4.13.5 */ - pr_info("spurious APIC interrupt through vector %02x on CPU#%d, " - "should never happen.\n", vector, smp_processor_id()); - + } else { + pr_info("Spurious interrupt (vector 0x%02x) on CPU#%d. Not pending!\n", + vector, smp_processor_id()); + } +out: trace_spurious_apic_exit(vector); exiting_irq(); } --- a/arch/x86/kernel/idt.c +++ b/arch/x86/kernel/idt.c @@ -319,7 +319,8 @@ void __init idt_setup_apic_and_irq_gates #ifdef CONFIG_X86_LOCAL_APIC for_each_clear_bit_from(i, system_vectors, NR_VECTORS) { set_bit(i, system_vectors); - set_intr_gate(i, spurious_interrupt); + entry = spurious_entries_start + 8 * (i - FIRST_SYSTEM_VECTOR); + set_intr_gate(i, entry); } #endif }
From: Arnd Bergmann arnd@arndb.de
commit fd5de2721ea7d16e2b16c4049ac49f229551b290 upstream.
As kernelci.org reports, this function is not used in vdk_hs38_defconfig:
arch/arc/kernel/unwind.c:188:14: warning: 'unw_hdr_alloc' defined but not used [-Wunused-function]
Fixes: bc79c9a72165 ("ARC: dw2 unwind: Reinstante unwinding out of modules") Link: https://kernelci.org/build/id/5d1cae3f59b514300340c132/logs/ Cc: stable@vger.kernel.org Signed-off-by: Arnd Bergmann arnd@arndb.de Signed-off-by: Vineet Gupta vgupta@synopsys.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/arc/kernel/unwind.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-)
--- a/arch/arc/kernel/unwind.c +++ b/arch/arc/kernel/unwind.c @@ -181,11 +181,6 @@ static void *__init unw_hdr_alloc_early( return memblock_alloc_from(sz, sizeof(unsigned int), MAX_DMA_ADDRESS); }
-static void *unw_hdr_alloc(unsigned long sz) -{ - return kmalloc(sz, GFP_KERNEL); -} - static void init_unwind_table(struct unwind_table *table, const char *name, const void *core_start, unsigned long core_size, const void *init_start, unsigned long init_size, @@ -366,6 +361,10 @@ ret_err: }
#ifdef CONFIG_MODULES +static void *unw_hdr_alloc(unsigned long sz) +{ + return kmalloc(sz, GFP_KERNEL); +}
static struct unwind_table *last_table;
From: Philipp Rudo prudo@linux.ibm.com
commit 1b2be2071aca9aab22e3f902bcb0fca46a1d3b00 upstream.
Use the correct bit for detection of the machine capability associated with the has_secure attribute. It is expected that the underlying platform (including hypervisors) unsets the bit when they don't provide secure ipl for their guests.
Fixes: c9896acc7851 ("s390/ipl: Provide has_secure sysfs attribute") Cc: stable@vger.kernel.org # 5.2 Signed-off-by: Philipp Rudo prudo@linux.ibm.com Reviewed-by: Christian Borntraeger borntraeger@de.ibm.com Reviewed-by: Peter Oberparleiter oberpar@linux.ibm.com Signed-off-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/s390/include/asm/sclp.h | 1 - arch/s390/kernel/ipl.c | 7 +------ drivers/s390/char/sclp_early.c | 1 - 3 files changed, 1 insertion(+), 8 deletions(-)
--- a/arch/s390/include/asm/sclp.h +++ b/arch/s390/include/asm/sclp.h @@ -80,7 +80,6 @@ struct sclp_info { unsigned char has_gisaf : 1; unsigned char has_diag318 : 1; unsigned char has_sipl : 1; - unsigned char has_sipl_g2 : 1; unsigned char has_dirq : 1; unsigned int ibc; unsigned int mtid; --- a/arch/s390/kernel/ipl.c +++ b/arch/s390/kernel/ipl.c @@ -286,12 +286,7 @@ static struct kobj_attribute sys_ipl_sec static ssize_t ipl_has_secure_show(struct kobject *kobj, struct kobj_attribute *attr, char *page) { - if (MACHINE_IS_LPAR) - return sprintf(page, "%i\n", !!sclp.has_sipl); - else if (MACHINE_IS_VM) - return sprintf(page, "%i\n", !!sclp.has_sipl_g2); - else - return sprintf(page, "%i\n", 0); + return sprintf(page, "%i\n", !!sclp.has_sipl); }
static struct kobj_attribute sys_ipl_has_secure_attr = --- a/drivers/s390/char/sclp_early.c +++ b/drivers/s390/char/sclp_early.c @@ -41,7 +41,6 @@ static void __init sclp_early_facilities sclp.has_hvs = !!(sccb->fac119 & 0x80); sclp.has_kss = !!(sccb->fac98 & 0x01); sclp.has_sipl = !!(sccb->cbl & 0x02); - sclp.has_sipl_g2 = !!(sccb->cbl & 0x04); if (sccb->fac85 & 0x02) S390_lowcore.machine_flags |= MACHINE_FLAG_ESOP; if (sccb->fac91 & 0x40)
From: Heiko Carstens heiko.carstens@de.ibm.com
commit 4f18d869ffd056c7858f3d617c71345cf19be008 upstream.
The stfle inline assembly returns the number of double words written (condition code 0) or the number of double words it would have written (condition code 3) if the memory array it got as a parameter had been large enough.
The current stfle implementation assumes that the array is always large enough and clears those parts of the array that have not been written to with a subsequent memset call.
If however the array is not large enough memset will get a negative length parameter, which means that memset clears memory until it gets an exception and the kernel crashes.
To fix this, simply limit the maximum length. Also move the inline assembly to an extra function to avoid clobbering of register 0, which might happen because of the added min_t invocation together with code instrumentation.
The bug was introduced with commit 14375bc4eb8d ("[S390] cleanup facility list handling") but was rather harmless, since it would only write to a rather large array. It became a potential problem with commit 3ab121ab1866 ("[S390] kernel: Add z/VM LGR detection"). Since then it writes to an array with only four double words, while some machines already deliver three double words. As soon as machines have a facility bit within the fifth double word, a crash on IPL would happen.
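The arithmetic behind the crash is easy to see in plain C (assumed example values, not real facility data): once the number of bytes reported exceeds the caller's array, the padding length goes negative and wraps to a huge size_t, which is what the min_t clamp now prevents:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
	int size = 4;		/* caller's array: 4 double words        */
	int reported = 5;	/* double words the machine would write  */
	int nr = reported * 8;	/* bytes "stored" by stfle               */

	/* unclamped: size * 8 - nr is negative and wraps as a size_t */
	printf("unclamped padding length: %zu\n", (size_t)(size * 8 - nr));

	if (nr > size * 8)	/* the fix: clamp nr to the array size   */
		nr = size * 8;
	printf("clamped   padding length: %zu\n", (size_t)(size * 8 - nr));
	return 0;
}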
Fixes: 14375bc4eb8d ("[S390] cleanup facility list handling") Cc: stable@vger.kernel.org # v2.6.37+ Reviewed-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Heiko Carstens heiko.carstens@de.ibm.com Signed-off-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- arch/s390/include/asm/facility.h | 21 ++++++++++++++------- 1 file changed, 14 insertions(+), 7 deletions(-)
--- a/arch/s390/include/asm/facility.h +++ b/arch/s390/include/asm/facility.h @@ -59,6 +59,18 @@ static inline int test_facility(unsigned return __test_facility(nr, &S390_lowcore.stfle_fac_list); }
+static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size) +{ + register unsigned long reg0 asm("0") = size - 1; + + asm volatile( + ".insn s,0xb2b00000,0(%1)" /* stfle */ + : "+d" (reg0) + : "a" (stfle_fac_list) + : "memory", "cc"); + return reg0; +} + /** * stfle - Store facility list extended * @stfle_fac_list: array where facility list can be stored @@ -75,13 +87,8 @@ static inline void __stfle(u64 *stfle_fa memcpy(stfle_fac_list, &S390_lowcore.stfl_fac_list, 4); if (S390_lowcore.stfl_fac_list & 0x01000000) { /* More facility bits available with stfle */ - register unsigned long reg0 asm("0") = size - 1; - - asm volatile(".insn s,0xb2b00000,0(%1)" /* stfle */ - : "+d" (reg0) - : "a" (stfle_fac_list) - : "memory", "cc"); - nr = (reg0 + 1) * 8; /* # bytes stored by stfle */ + nr = __stfle_asm(stfle_fac_list, size); + nr = min_t(unsigned long, (nr + 1) * 8, size * 8); } memset((char *) stfle_fac_list + nr, 0, size * 8 - nr); }
From: Julian Wiedmann jwi@linux.ibm.com
commit e54e4785cb5cb4896cf4285964aeef2125612fb2 upstream.
When tiqdio_remove_input_queues() removes a queue from the tiq_list as part of qdio_shutdown(), it doesn't re-initialize the queue's list entry and the prev/next pointers go stale.
If a subsequent qdio_establish() fails while sending the ESTABLISH cmd, it calls qdio_shutdown() again in QDIO_IRQ_STATE_ERR state and tiqdio_remove_input_queues() will attempt to remove the queue entry a second time. This dereferences the stale pointers, and bad things ensue. Fix this by re-initializing the list entry after removing it from the list.
For good practice also initialize the list entry when the queue is first allocated, and remove the quirky checks that papered over this omission. Note that prior to commit e521813468f7 ("s390/qdio: fix access to uninitialized qdio_q fields"), these checks were bogus anyway.
setup_queues_misc() clears the whole queue struct, and thus needs to re-init the prev/next pointers as well.
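The list mechanics behind the fix can be modelled with a few lines of user-space C (a toy list, not the qdio structures themselves): after a plain deletion the entry's prev/next still point into the old list, but once re-initialized the entry only points at itself, so a later removal cannot corrupt anything:

#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

static void init_list_head(struct list_head *h) { h->prev = h->next = h; }

static void list_add_entry(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

static void list_del_entry(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

int main(void)
{
	struct list_head tiq_list, q_entry;

	init_list_head(&tiq_list);
	init_list_head(&q_entry);

	list_add_entry(&q_entry, &tiq_list);	/* qdio_establish() adds the queue */

	list_del_entry(&q_entry);		/* first qdio_shutdown()           */
	init_list_head(&q_entry);		/* the fix: entry points at itself */

	list_del_entry(&q_entry);		/* second shutdown is now harmless */

	printf("tiq_list still consistent: %s\n",
	       (tiq_list.next == &tiq_list && tiq_list.prev == &tiq_list) ?
	       "yes" : "no");
	return 0;
}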
Fixes: 779e6e1c724d ("[S390] qdio: new qdio driver.") Cc: stable@vger.kernel.org Signed-off-by: Julian Wiedmann jwi@linux.ibm.com Signed-off-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/s390/cio/qdio_setup.c | 2 ++ drivers/s390/cio/qdio_thinint.c | 4 ++-- 2 files changed, 4 insertions(+), 2 deletions(-)
--- a/drivers/s390/cio/qdio_setup.c
+++ b/drivers/s390/cio/qdio_setup.c
@@ -150,6 +150,7 @@ static int __qdio_allocate_qs(struct qdi
 			return -ENOMEM;
 		}
 		irq_ptr_qs[i] = q;
+		INIT_LIST_HEAD(&q->entry);
 	}
 	return 0;
 }
@@ -178,6 +179,7 @@ static void setup_queues_misc(struct qdi
 	q->mask = 1 << (31 - i);
 	q->nr = i;
 	q->handler = handler;
+	INIT_LIST_HEAD(&q->entry);
 }
 
 static void setup_storage_lists(struct qdio_q *q, struct qdio_irq *irq_ptr,
--- a/drivers/s390/cio/qdio_thinint.c
+++ b/drivers/s390/cio/qdio_thinint.c
@@ -87,14 +87,14 @@ void tiqdio_remove_input_queues(struct q
 	struct qdio_q *q;
 
 	q = irq_ptr->input_qs[0];
-	/* if establish triggered an error */
-	if (!q || !q->entry.prev || !q->entry.next)
+	if (!q)
 		return;
 
 	mutex_lock(&tiq_list_lock);
 	list_del_rcu(&q->entry);
 	mutex_unlock(&tiq_list_lock);
 	synchronize_rcu();
+	INIT_LIST_HEAD(&q->entry);
 }
 
 static inline int has_multiple_inq_on_dsci(struct qdio_irq *irq_ptr)
From: Julian Wiedmann jwi@linux.ibm.com
commit ac6639cd3db607d386616487902b4cc1850a7be5 upstream.
Current code sets the dsci to 0x00000080, which doesn't make any sense, as the indicator area is located in the _left-most_ byte.
Worse: if the dsci is the _shared_ indicator, this potentially clears the indication of activity for a _different_ device. tiqdio_thinint_handler() will then have no reason to call that device's IRQ handler, and the device ends up stalling.
Fixes: d0c9d4a89fff ("[S390] qdio: set correct bit in dsci") Cc: stable@vger.kernel.org Signed-off-by: Julian Wiedmann jwi@linux.ibm.com Signed-off-by: Vasily Gorbik gor@linux.ibm.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/s390/cio/qdio_thinint.c | 1 - 1 file changed, 1 deletion(-)
--- a/drivers/s390/cio/qdio_thinint.c
+++ b/drivers/s390/cio/qdio_thinint.c
@@ -79,7 +79,6 @@ void tiqdio_add_input_queues(struct qdio
 	mutex_lock(&tiq_list_lock);
 	list_add_rcu(&irq_ptr->input_qs[0]->entry, &tiq_list);
 	mutex_unlock(&tiq_list_lock);
-	xchg(irq_ptr->dsci, 1 << 7);
 }
 
 void tiqdio_remove_input_queues(struct qdio_irq *irq_ptr)
From: Christophe Leroy christophe.leroy@c-s.fr
commit d44769e4ccb636e8238adbc151f25467a536711b upstream.
Move struct talitos_edesc into talitos.h so that it can be used from any place in talitos.c.
It will be required for the next patch ("crypto: talitos - fix hash on SEC1").
Signed-off-by: Christophe Leroy christophe.leroy@c-s.fr Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
--- drivers/crypto/talitos.c | 30 ------------------------------ drivers/crypto/talitos.h | 30 ++++++++++++++++++++++++++++++ 2 files changed, 30 insertions(+), 30 deletions(-)
--- a/drivers/crypto/talitos.c +++ b/drivers/crypto/talitos.c @@ -948,36 +948,6 @@ badkey: goto out; }
-/* - * talitos_edesc - s/w-extended descriptor - * @src_nents: number of segments in input scatterlist - * @dst_nents: number of segments in output scatterlist - * @icv_ool: whether ICV is out-of-line - * @iv_dma: dma address of iv for checking continuity and link table - * @dma_len: length of dma mapped link_tbl space - * @dma_link_tbl: bus physical address of link_tbl/buf - * @desc: h/w descriptor - * @link_tbl: input and output h/w link tables (if {src,dst}_nents > 1) (SEC2) - * @buf: input and output buffeur (if {src,dst}_nents > 1) (SEC1) - * - * if decrypting (with authcheck), or either one of src_nents or dst_nents - * is greater than 1, an integrity check value is concatenated to the end - * of link_tbl data - */ -struct talitos_edesc { - int src_nents; - int dst_nents; - bool icv_ool; - dma_addr_t iv_dma; - int dma_len; - dma_addr_t dma_link_tbl; - struct talitos_desc desc; - union { - struct talitos_ptr link_tbl[0]; - u8 buf[0]; - }; -}; - static void talitos_sg_unmap(struct device *dev, struct talitos_edesc *edesc, struct scatterlist *src, --- a/drivers/crypto/talitos.h +++ b/drivers/crypto/talitos.h @@ -65,6 +65,36 @@ struct talitos_desc {
#define TALITOS_DESC_SIZE (sizeof(struct talitos_desc) - sizeof(__be32))
+/* + * talitos_edesc - s/w-extended descriptor + * @src_nents: number of segments in input scatterlist + * @dst_nents: number of segments in output scatterlist + * @icv_ool: whether ICV is out-of-line + * @iv_dma: dma address of iv for checking continuity and link table + * @dma_len: length of dma mapped link_tbl space + * @dma_link_tbl: bus physical address of link_tbl/buf + * @desc: h/w descriptor + * @link_tbl: input and output h/w link tables (if {src,dst}_nents > 1) (SEC2) + * @buf: input and output buffeur (if {src,dst}_nents > 1) (SEC1) + * + * if decrypting (with authcheck), or either one of src_nents or dst_nents + * is greater than 1, an integrity check value is concatenated to the end + * of link_tbl data + */ +struct talitos_edesc { + int src_nents; + int dst_nents; + bool icv_ool; + dma_addr_t iv_dma; + int dma_len; + dma_addr_t dma_link_tbl; + struct talitos_desc desc; + union { + struct talitos_ptr link_tbl[0]; + u8 buf[0]; + }; +}; + /** * talitos_request - descriptor submission request * @desc: descriptor pointer (kernel virtual)
From: Christophe Leroy christophe.leroy@c-s.fr
commit 58cdbc6d2263beb36954408522762bbe73169306 upstream.
On SEC1, hash provides a wrong result when hashing is performed in several steps and the input data SG list has more than one element. This was detected with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS:
[ 44.185947] alg: hash: md5-talitos test failed (wrong result) on test vector 6, cfg="random: may_sleep use_finup src_divs=[<reimport>25.88%@+8063, <flush>24.19%@+9588, 28.63%@+16333, <reimport>4.60%@+6756, 16.70%@+16281] dst_divs=[71.61%@alignmask+16361, 14.36%@+7756, 14.3%@+"
[ 44.325122] alg: hash: sha1-talitos test failed (wrong result) on test vector 3, cfg="random: inplace use_final src_divs=[<flush,nosimd>16.56%@+16378, <reimport>52.0%@+16329, 21.42%@alignmask+16380, 10.2%@alignmask+16380] iv_offset=39"
[ 44.493500] alg: hash: sha224-talitos test failed (wrong result) on test vector 4, cfg="random: use_final nosimd src_divs=[<reimport>52.27%@+7401, <reimport>17.34%@+16285, <flush>17.71%@+26, 12.68%@+10644] iv_offset=43"
[ 44.673262] alg: hash: sha256-talitos test failed (wrong result) on test vector 4, cfg="random: may_sleep use_finup src_divs=[<reimport>60.6%@+12790, 17.86%@+1329, <reimport>12.64%@alignmask+16300, 8.29%@+15, 0.40%@+13506, <reimport>0.51%@+16322, <reimport>0.24%@+16339] dst_divs"
This is due to two issues:
- We have an overlap between the buffer used for copying the input data (SEC1 doesn't do scatter/gather) and the chained descriptor.
- Data copy is wrong when the previous hash left less than one blocksize of data to hash, so the previous block has to be completed with a few bytes from the new request.
Fix it by:
- Moving the second descriptor after the buffer, as moving the buffer after the descriptor would make it more complex for other cipher operations (AEAD, ABLKCIPHER).
- Skipping the bytes taken from the new request to complete the previous block by moving the SG list forward.
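A minimal userspace sketch of the placement change described in the first point, assuming invented stand-in types (fake_desc, fake_edesc) and example sizes only: before the fix the second descriptor sat immediately after the first one (desc + 1, i.e. at the start of buf), after the fix it sits past the dma_len bytes reserved for the linearized data, matching the edesc->buf + edesc->dma_len expression in the diff below.

/* Userspace sketch only; fake_desc, fake_edesc and all sizes are made up. */
#include <stdio.h>
#include <stdlib.h>

struct fake_desc { unsigned char bytes[64]; };	/* stand-in for a h/w descriptor */

struct fake_edesc {
	unsigned int dma_len;		/* bytes reserved for the linearized data */
	struct fake_desc desc;		/* first descriptor */
	unsigned char buf[];		/* copy buffer, with desc2 after it post-fix */
};

int main(void)
{
	unsigned int dma_len = 256;
	struct fake_edesc *e = malloc(sizeof(*e) + dma_len + sizeof(struct fake_desc));

	if (!e)
		return 1;
	e->dma_len = dma_len;

	/* old placement: immediately after the first descriptor, i.e. at the
	 * start of buf[] that is also used for the data copy */
	struct fake_desc *old_desc2 = &e->desc + 1;

	/* new placement: past the dma_len bytes reserved for the data copy */
	struct fake_desc *new_desc2 = (struct fake_desc *)(e->buf + e->dma_len);

	printf("buf          : %p .. %p\n", (void *)e->buf, (void *)(e->buf + e->dma_len));
	printf("old desc2 at : %p (inside buf)\n", (void *)old_desc2);
	printf("new desc2 at : %p (after buf)\n", (void *)new_desc2);

	free(e);
	return 0;
}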
Fixes: 37b5e8897eb5 ("crypto: talitos - chain in buffered data for ahash on SEC1")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy christophe.leroy@c-s.fr
Signed-off-by: Herbert Xu herbert@gondor.apana.org.au
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
---
 drivers/crypto/talitos.c | 69 +++++++++++++++++++++++++++--------------------
 1 file changed, 41 insertions(+), 28 deletions(-)
--- a/drivers/crypto/talitos.c +++ b/drivers/crypto/talitos.c @@ -321,6 +321,21 @@ int talitos_submit(struct device *dev, i } EXPORT_SYMBOL(talitos_submit);
+static __be32 get_request_hdr(struct talitos_request *request, bool is_sec1) +{ + struct talitos_edesc *edesc; + + if (!is_sec1) + return request->desc->hdr; + + if (!request->desc->next_desc) + return request->desc->hdr1; + + edesc = container_of(request->desc, struct talitos_edesc, desc); + + return ((struct talitos_desc *)(edesc->buf + edesc->dma_len))->hdr1; +} + /* * process what was done, notify callback of error if not */ @@ -342,12 +357,7 @@ static void flush_channel(struct device
/* descriptors with their done bits set don't get the error */ rmb(); - if (!is_sec1) - hdr = request->desc->hdr; - else if (request->desc->next_desc) - hdr = (request->desc + 1)->hdr1; - else - hdr = request->desc->hdr1; + hdr = get_request_hdr(request, is_sec1);
if ((hdr & DESC_HDR_DONE) == DESC_HDR_DONE) status = 0; @@ -477,8 +487,14 @@ static u32 current_desc_hdr(struct devic } }
- if (priv->chan[ch].fifo[iter].desc->next_desc == cur_desc) - return (priv->chan[ch].fifo[iter].desc + 1)->hdr; + if (priv->chan[ch].fifo[iter].desc->next_desc == cur_desc) { + struct talitos_edesc *edesc; + + edesc = container_of(priv->chan[ch].fifo[iter].desc, + struct talitos_edesc, desc); + return ((struct talitos_desc *) + (edesc->buf + edesc->dma_len))->hdr; + }
return priv->chan[ch].fifo[iter].desc->hdr; } @@ -1436,15 +1452,11 @@ static struct talitos_edesc *talitos_ede edesc->dst_nents = dst_nents; edesc->iv_dma = iv_dma; edesc->dma_len = dma_len; - if (dma_len) { - void *addr = &edesc->link_tbl[0]; - - if (is_sec1 && !dst) - addr += sizeof(struct talitos_desc); - edesc->dma_link_tbl = dma_map_single(dev, addr, + if (dma_len) + edesc->dma_link_tbl = dma_map_single(dev, &edesc->link_tbl[0], edesc->dma_len, DMA_BIDIRECTIONAL); - } + return edesc; }
@@ -1729,14 +1741,16 @@ static void common_nonsnoop_hash_unmap(s struct talitos_private *priv = dev_get_drvdata(dev); bool is_sec1 = has_ftr_sec1(priv); struct talitos_desc *desc = &edesc->desc; - struct talitos_desc *desc2 = desc + 1; + struct talitos_desc *desc2 = (struct talitos_desc *) + (edesc->buf + edesc->dma_len);
unmap_single_talitos_ptr(dev, &edesc->desc.ptr[5], DMA_FROM_DEVICE); if (desc->next_desc && desc->ptr[5].ptr != desc2->ptr[5].ptr) unmap_single_talitos_ptr(dev, &desc2->ptr[5], DMA_FROM_DEVICE);
- talitos_sg_unmap(dev, edesc, req_ctx->psrc, NULL, 0, 0); + if (req_ctx->psrc) + talitos_sg_unmap(dev, edesc, req_ctx->psrc, NULL, 0, 0);
/* When using hashctx-in, must unmap it. */ if (from_talitos_ptr_len(&edesc->desc.ptr[1], is_sec1)) @@ -1803,7 +1817,6 @@ static void talitos_handle_buggy_hash(st
static int common_nonsnoop_hash(struct talitos_edesc *edesc, struct ahash_request *areq, unsigned int length, - unsigned int offset, void (*callback) (struct device *dev, struct talitos_desc *desc, void *context, int error)) @@ -1842,9 +1855,7 @@ static int common_nonsnoop_hash(struct t
sg_count = edesc->src_nents ?: 1; if (is_sec1 && sg_count > 1) - sg_pcopy_to_buffer(req_ctx->psrc, sg_count, - edesc->buf + sizeof(struct talitos_desc), - length, req_ctx->nbuf); + sg_copy_to_buffer(req_ctx->psrc, sg_count, edesc->buf, length); else if (length) sg_count = dma_map_sg(dev, req_ctx->psrc, sg_count, DMA_TO_DEVICE); @@ -1857,7 +1868,7 @@ static int common_nonsnoop_hash(struct t DMA_TO_DEVICE); } else { sg_count = talitos_sg_map(dev, req_ctx->psrc, length, edesc, - &desc->ptr[3], sg_count, offset, 0); + &desc->ptr[3], sg_count, 0, 0); if (sg_count > 1) sync_needed = true; } @@ -1881,7 +1892,8 @@ static int common_nonsnoop_hash(struct t talitos_handle_buggy_hash(ctx, edesc, &desc->ptr[3]);
if (is_sec1 && req_ctx->nbuf && length) { - struct talitos_desc *desc2 = desc + 1; + struct talitos_desc *desc2 = (struct talitos_desc *) + (edesc->buf + edesc->dma_len); dma_addr_t next_desc;
memset(desc2, 0, sizeof(*desc2)); @@ -1902,7 +1914,7 @@ static int common_nonsnoop_hash(struct t DMA_TO_DEVICE); copy_talitos_ptr(&desc2->ptr[2], &desc->ptr[2], is_sec1); sg_count = talitos_sg_map(dev, req_ctx->psrc, length, edesc, - &desc2->ptr[3], sg_count, offset, 0); + &desc2->ptr[3], sg_count, 0, 0); if (sg_count > 1) sync_needed = true; copy_talitos_ptr(&desc2->ptr[5], &desc->ptr[5], is_sec1); @@ -2013,7 +2025,6 @@ static int ahash_process_req(struct ahas struct device *dev = ctx->dev; struct talitos_private *priv = dev_get_drvdata(dev); bool is_sec1 = has_ftr_sec1(priv); - int offset = 0; u8 *ctx_buf = req_ctx->buf[req_ctx->buf_idx];
if (!req_ctx->last && (nbytes + req_ctx->nbuf <= blocksize)) { @@ -2053,6 +2064,8 @@ static int ahash_process_req(struct ahas sg_chain(req_ctx->bufsl, 2, areq->src); req_ctx->psrc = req_ctx->bufsl; } else if (is_sec1 && req_ctx->nbuf && req_ctx->nbuf < blocksize) { + int offset; + if (nbytes_to_hash > blocksize) offset = blocksize - req_ctx->nbuf; else @@ -2065,7 +2078,8 @@ static int ahash_process_req(struct ahas sg_copy_to_buffer(areq->src, nents, ctx_buf + req_ctx->nbuf, offset); req_ctx->nbuf += offset; - req_ctx->psrc = areq->src; + req_ctx->psrc = scatterwalk_ffwd(req_ctx->bufsl, areq->src, + offset); } else req_ctx->psrc = areq->src;
@@ -2105,8 +2119,7 @@ static int ahash_process_req(struct ahas if (ctx->keylen && (req_ctx->first || req_ctx->last)) edesc->desc.hdr |= DESC_HDR_MODE0_MDEU_HMAC;
- return common_nonsnoop_hash(edesc, areq, nbytes_to_hash, offset, - ahash_done); + return common_nonsnoop_hash(edesc, areq, nbytes_to_hash, ahash_done); }
static int ahash_update(struct ahash_request *areq)
From: Haren Myneni haren@linux.vnet.ibm.com
commit e52d484d9869eb291140545746ccbe5ffc7c9306 upstream.
The system gets a checkstop if the RxFIFO overruns with more requests than the maximum possible number of CRBs in the FIFO at the same time. The maximum number of requests per window is controlled by the window credits. So derive the maximum number of CRBs from the FIFO size and set that as the receive window credits.
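As a rough illustration of the arithmetic, assuming hypothetical CRB and FIFO sizes that are not taken from the driver: the credit count is simply however many CRBs fit in the RxFIFO, instead of the fixed MAX_CREDITS_PER_RXFIFO constant.

/* Userspace sketch of the credit calculation; sizes are example values only. */
#include <stdio.h>

int main(void)
{
	unsigned long crb_size  = 128;		/* hypothetical CRB size in bytes */
	unsigned long fifo_size = 32 * 1024;	/* hypothetical RxFIFO size in bytes */

	/* number of CRBs that fit in the FIFO, used as the window credit limit */
	unsigned long wcreds_max = fifo_size / crb_size;

	printf("wcreds_max = %lu\n", wcreds_max);	/* 256 with these sizes */
	return 0;
}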
Fixes: b0d6c9bab5e4 ("crypto/nx: Add P9 NX support for 842 compression engine")
CC: stable@vger.kernel.org # v4.14+
Signed-off-by: Haren Myneni haren@us.ibm.com
Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: Herbert Xu herbert@gondor.apana.org.au
---
 drivers/crypto/nx/nx-842-powernv.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
--- a/drivers/crypto/nx/nx-842-powernv.c
+++ b/drivers/crypto/nx/nx-842-powernv.c
@@ -27,8 +27,6 @@ MODULE_ALIAS_CRYPTO("842-nx");
 #define WORKMEM_ALIGN	(CRB_ALIGN)
 #define CSB_WAIT_MAX	(5000) /* ms */
 #define VAS_RETRIES	(10)
-/* # of requests allowed per RxFIFO at a time. 0 for unlimited */
-#define MAX_CREDITS_PER_RXFIFO	(1024)
 
 struct nx842_workmem {
 	/* Below fields must be properly aligned */
@@ -812,7 +810,11 @@ static int __init vas_cfg_coproc_info(st
 	rxattr.lnotify_lpid = lpid;
 	rxattr.lnotify_pid = pid;
 	rxattr.lnotify_tid = tid;
-	rxattr.wcreds_max = MAX_CREDITS_PER_RXFIFO;
+	/*
+	 * Maximum RX window credits can not be more than #CRBs in
+	 * RxFIFO. Otherwise, can get checkstop if RxFIFO overruns.
+	 */
+	rxattr.wcreds_max = fifo_size / CRB_SIZE;
 
 	/*
 	 * Open a VAS receice window which is used to configure RxFIFO
[ Upstream commit 1cbec37b3f9cff074a67bef4fc34b30a09958a0a ]
common_spurious is currently ENDed erroneously: its ENDPROC names common_interrupt instead of common_spurious. Fix this mistake.
Found by my asm macros rewrite patchset.
Fixes: f8a8fe61fec8 ("x86/irq: Seperate unused system vectors from spurious entry again")
Signed-off-by: Jiri Slaby jslaby@suse.cz
Signed-off-by: Thomas Gleixner tglx@linutronix.de
Link: https://lkml.kernel.org/r/20190709063402.19847-1-jslaby@suse.cz
Signed-off-by: Sasha Levin sashal@kernel.org
---
 arch/x86/entry/entry_32.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 44c6e6f54bf7..f49e11669271 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -1125,7 +1125,7 @@ common_spurious:
 	movl	%esp, %eax
 	call	smp_spurious_interrupt
 	jmp	ret_from_intr
-ENDPROC(common_interrupt)
+ENDPROC(common_spurious)
 #endif
 
 /*
On 18/07/2019 04:01, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.2.2 release. There are 21 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat 20 Jul 2019 02:59:27 AM UTC. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.2.2-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.2.y and the diffstat can be found below.
thanks,
greg k-h
All tests are passing for Tegra ...
Test results for stable-v5.2:
    12 builds:  12 pass, 0 fail
    22 boots:   22 pass, 0 fail
    38 tests:   38 pass, 0 fail

Linux version: 5.2.2-rc1-gcc78552c7d92
Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000, tegra194-p2972-0000, tegra20-ventana, tegra210-p2371-2180, tegra30-cardhu-a04
Cheers Jon
On Thu, Jul 18, 2019 at 10:21:40AM +0100, Jon Hunter wrote:
On 18/07/2019 04:01, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.2.2 release. There are 21 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat 20 Jul 2019 02:59:27 AM UTC. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.2.2-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.2.y and the diffstat can be found below.
thanks,
greg k-h
All tests are passing for Tegra ...
Test results for stable-v5.2:
    12 builds:  12 pass, 0 fail
    22 boots:   22 pass, 0 fail
    38 tests:   38 pass, 0 fail

Linux version: 5.2.2-rc1-gcc78552c7d92
Boards tested: tegra124-jetson-tk1, tegra186-p2771-0000, tegra194-p2972-0000, tegra20-ventana, tegra210-p2371-2180, tegra30-cardhu-a04
Wonderful, thanks for testing all of these and letting me know.
greg k-h
On Thu, 18 Jul 2019 at 08:33, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.2.2 release. There are 21 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat 20 Jul 2019 02:59:27 AM UTC. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.2.2-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.2.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Summary
------------------------------------------------------------------------
kernel: 5.2.2-rc1
git repo: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
git branch: linux-5.2.y
git commit: cc78552c7d92a2d16d8fba672a91499028fca830
git describe: v5.2.1-22-gcc78552c7d92
Test details: https://qa-reports.linaro.org/lkft/linux-stable-rc-5.2-oe/build/v5.2.1-22-gc...
No regressions (compared to build v5.2.1)
No fixes (compared to build v5.2.1)
Ran 24019 total tests in the following environments and test suites.
Environments
--------------
- dragonboard-410c
- hi6220-hikey
- i386
- juno-r2
- qemu_arm
- qemu_arm64
- qemu_i386
- qemu_x86_64
- x15
- x86
Test Suites
-----------
* build
* install-android-platform-tools-r2600
* kselftest
* libgpiod
* ltp-cap_bounds-tests
* ltp-commands-tests
* ltp-containers-tests
* ltp-cpuhotplug-tests
* ltp-cve-tests
* ltp-dio-tests
* ltp-fcntl-locktests-tests
* ltp-filecaps-tests
* ltp-fs_bind-tests
* ltp-fs_perms_simple-tests
* ltp-fsx-tests
* ltp-hugetlb-tests
* ltp-io-tests
* ltp-ipc-tests
* ltp-math-tests
* ltp-mm-tests
* ltp-nptl-tests
* ltp-pty-tests
* ltp-sched-tests
* ltp-securebits-tests
* ltp-syscalls-tests
* ltp-timers-tests
* perf
* spectre-meltdown-checker-test
* v4l2-compliance
* libhugetlbfs
* network-basic-tests
* ltp-fs-tests
* ltp-open-posix-tests
* kvm-unit-tests
* kselftest-vsyscall-mode-native
* kselftest-vsyscall-mode-none
On Thu, Jul 18, 2019 at 06:12:34PM +0530, Naresh Kamboju wrote:
On Thu, 18 Jul 2019 at 08:33, Greg Kroah-Hartman gregkh@linuxfoundation.org wrote:
This is the start of the stable review cycle for the 5.2.2 release. There are 21 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat 20 Jul 2019 02:59:27 AM UTC. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.2.2-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.2.y and the diffstat can be found below.
thanks,
greg k-h
Results from Linaro’s test farm. No regressions on arm64, arm, x86_64, and i386.
Thanks for testing all of these and letting me know.
greg k-h
On Thu, Jul 18, 2019 at 12:01:18PM +0900, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.2.2 release. There are 21 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat 20 Jul 2019 02:59:27 AM UTC. Anything received after that time might be too late.
Build results:
    total: 159 pass: 159 fail: 0
Qemu test results:
    total: 364 pass: 364 fail: 0
Guenter
On Thu, Jul 18, 2019 at 12:49:07PM -0700, Guenter Roeck wrote:
On Thu, Jul 18, 2019 at 12:01:18PM +0900, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.2.2 release. There are 21 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat 20 Jul 2019 02:59:27 AM UTC. Anything received after that time might be too late.
Build results:
    total: 159 pass: 159 fail: 0
Qemu test results:
    total: 364 pass: 364 fail: 0
Thanks for testing all of these and letting me know.
greg k-h
On Thu, Jul 18, 2019 at 12:01:18PM +0900, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.2.2 release. There are 21 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat 20 Jul 2019 02:59:27 AM UTC. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.2.2-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.2.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted with no regressions on my system.
-Kelsey
On Thu, Jul 18, 2019 at 02:58:18PM -0600, Kelsey Skunberg wrote:
On Thu, Jul 18, 2019 at 12:01:18PM +0900, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.2.2 release. There are 21 patches in this series, all will be posted as a response to this one. If anyone has any issues with these being applied, please let me know.
Responses should be made by Sat 20 Jul 2019 02:59:27 AM UTC. Anything received after that time might be too late.
The whole patch series can be found in one patch at: https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.2.2-rc1.g... or in the git tree and branch at: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.2.y and the diffstat can be found below.
thanks,
greg k-h
Compiled and booted with no regressions on my system.
Thanks for testing all of these and letting me know.
greg k-h
stable-rc/linux-5.2.y boot: 135 boots: 0 failed, 134 passed with 1 offline (v5.2.1-22-gcc78552c7d92)
Full Boot Summary: https://kernelci.org/boot/all/job/stable-rc/branch/linux-5.2.y/kernel/v5.2.1...
Full Build Summary: https://kernelci.org/build/stable-rc/branch/linux-5.2.y/kernel/v5.2.1-22-gcc...
Tree: stable-rc
Branch: linux-5.2.y
Git Describe: v5.2.1-22-gcc78552c7d92
Git Commit: cc78552c7d92a2d16d8fba672a91499028fca830
Git URL: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git
Tested: 78 unique boards, 28 SoC families, 17 builds out of 209
Offline Platforms:
arm64:
    defconfig:
        gcc-8
            meson-gxbb-odroidc2: 1 offline lab
---
For more info write to info@kernelci.org