From: Tony Lindgren tony@atomide.com
[ Upstream commit 7ffa08169849be898eed6f3694aab8c425497749 ]
This reverts commit 579ced8fdb00b8e94304a83e3cc419f6f8eab08e.
Turns out I was overly optimistic about cpu_pm blocking idle being a solution for handling edge interrupts. While it helps in preventing entering idle states that potentially lose context, we can still get an edge interrupt triggering while entering idle. So we need to also add back the workaround for seeing if there are any pending edge interrupts when waking up.
Signed-off-by: Tony Lindgren tony@atomide.com Cc: Aaro Koskinen aaro.koskinen@iki.fi Cc: Grygorii Strashko grygorii.strashko@ti.com Cc: Keerthy j-keerthy@ti.com Cc: Ladislav Michl ladis@linux-mips.org Cc: Peter Ujfalusi peter.ujfalusi@ti.com Cc: Russell King rmk+kernel@armlinux.org.uk Cc: Tero Kristo t-kristo@ti.com Link: https://lore.kernel.org/r/20201028060556.56038-1-tony@atomide.com Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpio/gpio-omap.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c index 0ea640fb636cf..3b87989e27640 100644 --- a/drivers/gpio/gpio-omap.c +++ b/drivers/gpio/gpio-omap.c @@ -1114,13 +1114,23 @@ static void omap_gpio_idle(struct gpio_bank *bank, bool may_lose_context) { struct device *dev = bank->chip.parent; void __iomem *base = bank->base; - u32 nowake; + u32 mask, nowake;
bank->saved_datain = readl_relaxed(base + bank->regs->datain);
if (!bank->enabled_non_wakeup_gpios) goto update_gpio_context_count;
+ /* Check for pending EDGE_FALLING, ignore EDGE_BOTH */ + mask = bank->enabled_non_wakeup_gpios & bank->context.fallingdetect; + mask &= ~bank->context.risingdetect; + bank->saved_datain |= mask; + + /* Check for pending EDGE_RISING, ignore EDGE_BOTH */ + mask = bank->enabled_non_wakeup_gpios & bank->context.risingdetect; + mask &= ~bank->context.fallingdetect; + bank->saved_datain &= ~mask; + if (!may_lose_context) goto update_gpio_context_count;
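For readers unfamiliar with the trick in the hunk above, here is a minimal, self-contained sketch of the datain-biasing idea (the function name and parameters are hypothetical, not the driver's API): by pretending that falling-edge-only lines were high and rising-edge-only lines were low before idle, any edge that fires while entering idle later shows up as a level difference against the saved snapshot.

#include <stdint.h>

/*
 * Hypothetical helper (not the omap driver's code): bias a saved level
 * snapshot so that a pending one-sided edge is detected as a level change
 * on wakeup.  EDGE_BOTH lines are left untouched.
 */
static uint32_t bias_saved_datain(uint32_t datain, uint32_t enabled,
                                  uint32_t falling, uint32_t rising)
{
        uint32_t mask;

        /* Falling-edge-only lines: assume they were high before idle */
        mask = enabled & falling & ~rising;
        datain |= mask;

        /* Rising-edge-only lines: assume they were low before idle */
        mask = enabled & rising & ~falling;
        datain &= ~mask;

        return datain;
}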
From: Ian Rogers irogers@google.com
[ Upstream commit 1e6f5dcc1b9ec9068f5d38331cec38b35498edf5 ]
The bpf_caps array is shorter without CAP_BPF; avoid out-of-bounds reads if this isn't defined. Working around this avoids -Wno-array-bounds with clang.
Signed-off-by: Ian Rogers irogers@google.com Signed-off-by: Daniel Borkmann daniel@iogearbox.net Reviewed-by: Tobias Klauser tklauser@distanz.ch Acked-by: Andrii Nakryiko andrii@kernel.org Link: https://lore.kernel.org/bpf/20201027233646.3434896-1-irogers@google.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/bpf/bpftool/feature.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/tools/bpf/bpftool/feature.c b/tools/bpf/bpftool/feature.c index a43a6f10b564c..359960a8f1def 100644 --- a/tools/bpf/bpftool/feature.c +++ b/tools/bpf/bpftool/feature.c @@ -843,9 +843,14 @@ static int handle_perms(void) else p_err("missing %s%s%s%s%s%s%s%srequired for full feature probing; run as root or use 'unprivileged'", capability_msg(bpf_caps, 0), +#ifdef CAP_BPF capability_msg(bpf_caps, 1), capability_msg(bpf_caps, 2), - capability_msg(bpf_caps, 3)); + capability_msg(bpf_caps, 3) +#else + "", "", "", "", "", "" +#endif /* CAP_BPF */ + ); goto exit_free; }
From: Oded Gabbay ogabbay@kernel.org
[ Upstream commit f83f3a31b2972ddc907fbb286c6446dd9db6e198 ]
This interrupt cause is not relevant because of how the user uses the QMAN arbitration mechanism. We must mask it, as the log explodes with it.
Signed-off-by: Oded Gabbay ogabbay@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/misc/habanalabs/include/gaudi/gaudi_masks.h | 1 - 1 file changed, 1 deletion(-)
diff --git a/drivers/misc/habanalabs/include/gaudi/gaudi_masks.h b/drivers/misc/habanalabs/include/gaudi/gaudi_masks.h index 3510c42d24e31..b734b650fccf7 100644 --- a/drivers/misc/habanalabs/include/gaudi/gaudi_masks.h +++ b/drivers/misc/habanalabs/include/gaudi/gaudi_masks.h @@ -452,7 +452,6 @@ enum axi_id {
#define QM_ARB_ERR_MSG_EN_MASK (\ QM_ARB_ERR_MSG_EN_CHOISE_OVF_MASK |\ - QM_ARB_ERR_MSG_EN_CHOISE_WDT_MASK |\ QM_ARB_ERR_MSG_EN_AXI_LBW_ERR_MASK)
#define PCIE_AUX_FLR_CTRL_HW_CTRL_MASK 0x1
From: Jianqun Xu jay.xu@rock-chips.com
[ Upstream commit 63fbf8013b2f6430754526ef9594f229c7219b1f ]
We need to enable pclk_gpio when doing irq_create_mapping(), since it accesses the GPIO controller.
Signed-off-by: Jianqun Xu jay.xu@rock-chips.com Reviewed-by: Heiko Stuebner heiko@sntech.de Reviewed-by: Kever Yang kever.yang@rock-chips.com Link: https://lore.kernel.org/r/20201013063731.3618-3-jay.xu@rock-chips.com Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pinctrl/pinctrl-rockchip.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c index 0401c1da79dd0..7b398ed2113e8 100644 --- a/drivers/pinctrl/pinctrl-rockchip.c +++ b/drivers/pinctrl/pinctrl-rockchip.c @@ -3155,7 +3155,9 @@ static int rockchip_gpio_to_irq(struct gpio_chip *gc, unsigned offset) if (!bank->domain) return -ENXIO;
+ clk_enable(bank->clk); virq = irq_create_mapping(bank->domain, offset); + clk_disable(bank->clk);
return (virq) ? : -ENXIO; }
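The change is the usual clock-bracketing pattern around a register access. A minimal sketch with a hypothetical bank type (clk_enable()/clk_disable() and irq_create_mapping() are the real kernel APIs used in the hunk; the struct and function names below are illustrative only):

struct my_gpio_bank {                  /* hypothetical, mirrors the driver's bank */
        struct clk *clk;
        struct irq_domain *domain;
};

static int bank_gpio_to_irq(struct my_gpio_bank *bank, unsigned int offset)
{
        unsigned int virq;

        clk_enable(bank->clk);         /* bus clock must run for the register access */
        virq = irq_create_mapping(bank->domain, offset);
        clk_disable(bank->clk);

        return virq ? : -ENXIO;
}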
From: Can Guo cang@codeaurora.org
[ Upstream commit da3fecb0040324c08f1587e5bff1f15f36be1872 ]
The scsi_block_reqs_cnt increased in ufshcd_hold() is supposed to be decreased back in ufshcd_ungate_work() in a paired way. However, if specific ufshcd_hold/release sequences are met, it is possible that scsi_block_reqs_cnt is increased twice but only one ungate work is queued. To make sure scsi_block_reqs_cnt is handled by ufshcd_hold() and ufshcd_ungate_work() in a paired way, increase it only if queue_work() returns true.
Link: https://lore.kernel.org/r/1604384682-15837-2-git-send-email-cang@codeaurora.... Reviewed-by: Hongwu Su hongwus@codeaurora.org Reviewed-by: Stanley Chu stanley.chu@mediatek.com Reviewed-by: Bean Huo beanhuo@micron.com Signed-off-by: Can Guo cang@codeaurora.org Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/ufs/ufshcd.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 316b861305eae..0cb3e71f30ffb 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -1617,12 +1617,12 @@ int ufshcd_hold(struct ufs_hba *hba, bool async) */ fallthrough; case CLKS_OFF: - ufshcd_scsi_block_requests(hba); hba->clk_gating.state = REQ_CLKS_ON; trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state); - queue_work(hba->clk_gating.clk_gating_workq, - &hba->clk_gating.ungate_work); + if (queue_work(hba->clk_gating.clk_gating_workq, + &hba->clk_gating.ungate_work)) + ufshcd_scsi_block_requests(hba); /* * fall through to check if we should wait for this * work to be done or not.
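The core of the fix is the "act only if the work was actually queued" pattern. A minimal sketch with hypothetical names (queue_work() itself is the real API and returns false when the work item was already pending; block_requests() stands in for ufshcd_scsi_block_requests()):

struct my_host {                               /* hypothetical context */
        struct workqueue_struct *wq;
        struct work_struct ungate_work;        /* its handler calls the matching unblock */
};

static void request_clocks_on(struct my_host *host)
{
        /*
         * Take the "block" reference only when the work was really queued;
         * if the work is already pending, its single unblock pairs with the
         * block taken when it was first queued, keeping the counter balanced.
         */
        if (queue_work(host->wq, &host->ungate_work))
                block_requests(host);          /* hypothetical helper */
}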
From: Can Guo cang@codeaurora.org
[ Upstream commit 0f52fcb99ea2738a0a0f28e12cf4dd427069dd2a ]
Use the uic_cmd->cmd_active as a flag to track the lifecycle of a UIC cmd. The flag is set before sending the UIC cmd and cleared in the IRQ handler. When a PMC or UIC cmd completion timeout happens, if the flag is not set, instead of returning a timeout error, we still treat it as a successful operation. This is to deal with the scenario in which the completion has been raised but the one waiting for the completion cannot be awakened in time due to a kernel scheduling problem.
Link: https://lore.kernel.org/r/1604384682-15837-3-git-send-email-cang@codeaurora.... Reviewed-by: Stanley Chu stanley.chu@mediatek.com Signed-off-by: Can Guo cang@codeaurora.org Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/ufs/ufshcd.c | 26 ++++++++++++++++++++++++-- drivers/scsi/ufs/ufshcd.h | 2 ++ 2 files changed, 26 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 0cb3e71f30ffb..54928a837dad0 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -2100,10 +2100,20 @@ ufshcd_wait_for_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd) unsigned long flags;
if (wait_for_completion_timeout(&uic_cmd->done, - msecs_to_jiffies(UIC_CMD_TIMEOUT))) + msecs_to_jiffies(UIC_CMD_TIMEOUT))) { ret = uic_cmd->argument2 & MASK_UIC_COMMAND_RESULT; - else + } else { ret = -ETIMEDOUT; + dev_err(hba->dev, + "uic cmd 0x%x with arg3 0x%x completion timeout\n", + uic_cmd->command, uic_cmd->argument3); + + if (!uic_cmd->cmd_active) { + dev_err(hba->dev, "%s: UIC cmd has been completed, return the result\n", + __func__); + ret = uic_cmd->argument2 & MASK_UIC_COMMAND_RESULT; + } + }
spin_lock_irqsave(hba->host->host_lock, flags); hba->active_uic_cmd = NULL; @@ -2135,6 +2145,7 @@ __ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd, if (completion) init_completion(&uic_cmd->done);
+ uic_cmd->cmd_active = 1; ufshcd_dispatch_uic_cmd(hba, uic_cmd);
return 0; @@ -3774,10 +3785,18 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd) dev_err(hba->dev, "pwr ctrl cmd 0x%x with mode 0x%x completion timeout\n", cmd->command, cmd->argument3); + + if (!cmd->cmd_active) { + dev_err(hba->dev, "%s: Power Mode Change operation has been completed, go check UPMCRS\n", + __func__); + goto check_upmcrs; + } + ret = -ETIMEDOUT; goto out; }
+check_upmcrs: status = ufshcd_get_upmcrs(hba); if (status != PWR_LOCAL) { dev_err(hba->dev, @@ -4887,11 +4906,14 @@ static irqreturn_t ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status) ufshcd_get_uic_cmd_result(hba); hba->active_uic_cmd->argument3 = ufshcd_get_dme_attr_val(hba); + if (!hba->uic_async_done) + hba->active_uic_cmd->cmd_active = 0; complete(&hba->active_uic_cmd->done); retval = IRQ_HANDLED; }
if ((intr_status & UFSHCD_UIC_PWR_MASK) && hba->uic_async_done) { + hba->active_uic_cmd->cmd_active = 0; complete(hba->uic_async_done); retval = IRQ_HANDLED; } diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 363589c0bd370..23f46c7b8cb28 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -64,6 +64,7 @@ enum dev_cmd_type { * @argument1: UIC command argument 1 * @argument2: UIC command argument 2 * @argument3: UIC command argument 3 + * @cmd_active: Indicate if UIC command is outstanding * @done: UIC command completion */ struct uic_command { @@ -71,6 +72,7 @@ struct uic_command { u32 argument1; u32 argument2; u32 argument3; + int cmd_active; struct completion done; };
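The lifecycle of cmd_active is easier to see in isolation. A minimal sketch of the pattern (all names are hypothetical except the completion API; my_dispatch() stands in for the real dispatch path): set the flag before dispatch, clear it in the IRQ handler just before complete(), and consult it on the timeout path to tell a real timeout apart from a completion whose waiter was merely woken late.

struct my_cmd {                                /* hypothetical command descriptor */
        struct completion done;
        int cmd_active;
        u32 result;
};

static int send_cmd_and_wait(struct my_cmd *cmd)
{
        cmd->cmd_active = 1;
        my_dispatch(cmd);                      /* hypothetical */

        if (wait_for_completion_timeout(&cmd->done, msecs_to_jiffies(500)))
                return cmd->result;

        /* Timed out, but the IRQ handler may already have finished. */
        if (!cmd->cmd_active)
                return cmd->result;            /* completed; the wakeup was just late */

        return -ETIMEDOUT;
}

static void my_irq_handler(struct my_cmd *cmd, u32 result)
{
        cmd->result = result;
        cmd->cmd_active = 0;                   /* clear before signalling the waiter */
        complete(&cmd->done);
}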
From: Andy Shevchenko andriy.shevchenko@linux.intel.com
[ Upstream commit a835d3a114ab0dc2f0d8c6963c3f53734b1c5965 ]
It is useful for debugging to have the error message printed when regmap initialisation fails. Add it to the driver.
Signed-off-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Cc: Martin Hundebøll martin@geanix.com Link: https://lore.kernel.org/r/20201009180856.4738-2-andriy.shevchenko@linux.inte... Tested-by: Jan Kundrát jan.kundrat@cesnet.cz Signed-off-by: Linus Walleij linus.walleij@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/pinctrl/pinctrl-mcp23s08_spi.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/pinctrl/pinctrl-mcp23s08_spi.c b/drivers/pinctrl/pinctrl-mcp23s08_spi.c index 1f47a661b0a79..52e32634bb5a7 100644 --- a/drivers/pinctrl/pinctrl-mcp23s08_spi.c +++ b/drivers/pinctrl/pinctrl-mcp23s08_spi.c @@ -126,6 +126,8 @@ static int mcp23s08_spi_regmap_init(struct mcp23s08 *mcp, struct device *dev, copy->name = name;
mcp->regmap = devm_regmap_init(dev, &mcp23sxx_spi_regmap, mcp, copy); + if (IS_ERR(mcp->regmap)) + dev_err(dev, "regmap init failed for %s\n", mcp->chip.label); return PTR_ERR_OR_ZERO(mcp->regmap); }
From: Aaron Lewis aaronlewis@google.com
[ Upstream commit df11f7dd5834146defa448acba097e8d7703cc42 ]
Fix the layout of 'struct desc64' to match the layout described in the SDM Vol 3, Chapter 3 "Protected-Mode Memory Management", section 3.4.5 "Segment Descriptors", Figure 3-8 "Segment Descriptor". The test added later in this series relies on this and crashes if this layout is not correct.
Signed-off-by: Aaron Lewis aaronlewis@google.com Reviewed-by: Alexander Graf graf@amazon.com Message-Id: 20201012194716.3950330-2-aaronlewis@google.com Signed-off-by: Paolo Bonzini pbonzini@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/kvm/include/x86_64/processor.h | 2 +- tools/testing/selftests/kvm/lib/x86_64/processor.c | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index 82b7fe16a8242..0a65e7bb5249e 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -59,7 +59,7 @@ struct gpr64_regs { struct desc64 { uint16_t limit0; uint16_t base0; - unsigned base1:8, s:1, type:4, dpl:2, p:1; + unsigned base1:8, type:4, s:1, dpl:2, p:1; unsigned limit1:4, avl:1, l:1, db:1, g:1, base2:8; uint32_t base3; uint32_t zero1; diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index f6eb34eaa0d22..1ccf6c9b3476d 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -392,11 +392,12 @@ static void kvm_seg_fill_gdt_64bit(struct kvm_vm *vm, struct kvm_segment *segp) desc->limit0 = segp->limit & 0xFFFF; desc->base0 = segp->base & 0xFFFF; desc->base1 = segp->base >> 16; - desc->s = segp->s; desc->type = segp->type; + desc->s = segp->s; desc->dpl = segp->dpl; desc->p = segp->present; desc->limit1 = segp->limit >> 16; + desc->avl = segp->avl; desc->l = segp->l; desc->db = segp->db; desc->g = segp->g;
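For reference, this is the corrected layout the patch establishes, annotated with the field meanings from SDM Vol. 3, Figure 3-8 (same struct as the hunk above with comments added, packing attribute elided; no new functionality):

struct desc64 {
        uint16_t limit0;           /* limit 15:0                                   */
        uint16_t base0;            /* base 15:0                                    */
        unsigned base1:8,          /* base 23:16                                   */
                 type:4,           /* segment type                                 */
                 s:1,              /* descriptor type (0 = system, 1 = code/data)  */
                 dpl:2,            /* descriptor privilege level                   */
                 p:1;              /* segment present                              */
        unsigned limit1:4,         /* limit 19:16                                  */
                 avl:1,            /* available for use by system software         */
                 l:1,              /* 64-bit code segment                          */
                 db:1,             /* default operation size                       */
                 g:1,              /* granularity                                  */
                 base2:8;          /* base 31:24                                   */
        uint32_t base3;            /* base 63:32 (64-bit system descriptors)       */
        uint32_t zero1;            /* reserved                                     */
};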
From: Hans de Goede hdegoede@redhat.com
[ Upstream commit 7daaa06357bf7f1874b62bb1ea9d66a51d4e567e ]
The Medion Akoya E2228T's ACPI _LID implementation is quite broken; it has the same issues as the one on the Medion Akoya E2215T:
1. For notifications it uses an ActiveLow Edge GpioInt, rather than an ActiveBoth one, meaning that the device is only notified when the lid is closed, not when it is opened.
2. Matching this, its _LID method simply always returns 0 (closed).
In order for the Linux LID code to work properly with this implementation, the lid_init_state selection needs to be set to ACPI_BUTTON_LID_INIT_OPEN, add a DMI quirk for this.
While working on this I also found out that the MD60### part of the model number differs per country/batch while all of the E2215T and E2228T models have this issue, so also remove the " MD60198" part from the E2215T quirk.
Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/acpi/button.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/acpi/button.c b/drivers/acpi/button.c index da4b125ab4c3e..088ec847fd26a 100644 --- a/drivers/acpi/button.c +++ b/drivers/acpi/button.c @@ -102,7 +102,18 @@ static const struct dmi_system_id dmi_lid_quirks[] = { */ .matches = { DMI_MATCH(DMI_SYS_VENDOR, "MEDION"), - DMI_MATCH(DMI_PRODUCT_NAME, "E2215T MD60198"), + DMI_MATCH(DMI_PRODUCT_NAME, "E2215T"), + }, + .driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_OPEN, + }, + { + /* + * Medion Akoya E2228T, notification of the LID device only + * happens on close, not on open and _LID always returns closed. + */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "MEDION"), + DMI_MATCH(DMI_PRODUCT_NAME, "E2228T"), }, .driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_OPEN, },
From: Will Deacon will@kernel.org
[ Upstream commit f969f03888b9438fdb227b6460d99ede5737326d ]
In a surprising turn of events, it transpires that CPU capabilities configured as ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE are never set as the result of late-onlining. Therefore our handling of erratum 1418040 does not get activated if it is not required by any of the boot CPUs, even though we allow late-onlining of an affected CPU.
In order to get things working again, replace the cpus_have_const_cap() invocation with an explicit check for the current CPU using this_cpu_has_cap().
Cc: Sai Prakash Ranjan saiprakash.ranjan@codeaurora.org Cc: Stephen Boyd swboyd@chromium.org Cc: Catalin Marinas catalin.marinas@arm.com Cc: Mark Rutland mark.rutland@arm.com Reviewed-by: Suzuki K Poulose suzuki.poulose@arm.com Acked-by: Marc Zyngier maz@kernel.org Link: https://lore.kernel.org/r/20201106114952.10032-1-will@kernel.org Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/include/asm/cpufeature.h | 2 ++ arch/arm64/kernel/process.c | 5 ++--- 2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 89b4f0142c287..a986ecd0b0074 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -268,6 +268,8 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0; /* * CPU feature detected at boot time based on feature of one or more CPUs. * All possible conflicts for a late CPU are ignored. + * NOTE: this means that a late CPU with the feature will *not* cause the + * capability to be advertised by cpus_have_*cap()! */ #define ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE \ (ARM64_CPUCAP_SCOPE_LOCAL_CPU | \ diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index f1804496b9350..2da5f3f9d345f 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -526,14 +526,13 @@ static void erratum_1418040_thread_switch(struct task_struct *prev, bool prev32, next32; u64 val;
- if (!(IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040) && - cpus_have_const_cap(ARM64_WORKAROUND_1418040))) + if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040)) return;
prev32 = is_compat_thread(task_thread_info(prev)); next32 = is_compat_thread(task_thread_info(next));
- if (prev32 == next32) + if (prev32 == next32 || !this_cpu_has_cap(ARM64_WORKAROUND_1418040)) return;
val = read_sysreg(cntkctl_el1);
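The net effect of the hunk is that the erratum check becomes a per-CPU decision made at context-switch time, so a late-onlined affected CPU is still covered even though the system-wide capability was never set. A sketch of the resulting check (the wrapper below is only a hypothetical illustration; IS_ENABLED() and this_cpu_has_cap() are the real interfaces used above):

static bool needs_cntkctl_workaround(void)
{
        if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040))
                return false;

        /* Ask about the CPU we are running on, not the boot-time capability */
        return this_cpu_has_cap(ARM64_WORKAROUND_1418040);
}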
From: Will Deacon will@kernel.org
[ Upstream commit 891deb87585017d526b67b59c15d38755b900fea ]
cpu_psci_cpu_die() is called in the context of the dying CPU, which will no longer be online or tracked by RCU. It is therefore not generally safe to call printk() if the PSCI "cpu off" request fails, so remove the pr_crit() invocation.
Cc: Qian Cai cai@redhat.com Cc: "Paul E. McKenney" paulmck@kernel.org Cc: Catalin Marinas catalin.marinas@arm.com Link: https://lore.kernel.org/r/20201106103602.9849-2-will@kernel.org Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/kernel/psci.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c index 43ae4e0c968f6..62d2bda7adb80 100644 --- a/arch/arm64/kernel/psci.c +++ b/arch/arm64/kernel/psci.c @@ -66,7 +66,6 @@ static int cpu_psci_cpu_disable(unsigned int cpu)
static void cpu_psci_cpu_die(unsigned int cpu) { - int ret; /* * There are no known implementations of PSCI actually using the * power state field, pass a sensible default for now. @@ -74,9 +73,7 @@ static void cpu_psci_cpu_die(unsigned int cpu) u32 state = PSCI_POWER_STATE_TYPE_POWER_DOWN << PSCI_0_2_POWER_STATE_TYPE_SHIFT;
- ret = psci_ops.cpu_off(state); - - pr_crit("unable to power off CPU%u (%d)\n", cpu, ret); + psci_ops.cpu_off(state); }
static int cpu_psci_cpu_kill(unsigned int cpu)
From: Will Deacon will@kernel.org
[ Upstream commit 04e613ded8c26489b3e0f9101b44462f780d1a35 ]
Commit ce3d31ad3cac ("arm64/smp: Move rcu_cpu_starting() earlier") ensured that RCU is informed early about incoming CPUs that might end up calling into printk() before they are online. However, if such a CPU fails the early CPU feature compatibility checks in check_local_cpu_capabilities(), then it will be powered off or parked without informing RCU, leading to an endless stream of stalls:
| rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
| rcu: 2-O...: (0 ticks this GP) idle=002/1/0x4000000000000000 softirq=0/0 fqs=2593
| (detected by 0, t=5252 jiffies, g=9317, q=136)
| Task dump for CPU 2:
| task:swapper/2 state:R running task stack: 0 pid: 0 ppid: 1 flags:0x00000028
| Call trace:
| ret_from_fork+0x0/0x30
Ensure that the dying CPU invokes rcu_report_dead() prior to being powered off or parked.
Cc: Qian Cai cai@redhat.com Cc: "Paul E. McKenney" paulmck@kernel.org Reviewed-by: Paul E. McKenney paulmck@kernel.org Suggested-by: Qian Cai cai@redhat.com Link: https://lore.kernel.org/r/20201105222242.GA8842@willie-the-truck Link: https://lore.kernel.org/r/20201106103602.9849-3-will@kernel.org Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/kernel/smp.c | 1 + kernel/rcu/tree.c | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index 98c059b6bacae..361cfc55cf5a7 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -401,6 +401,7 @@ void cpu_die_early(void)
/* Mark this CPU absent */ set_cpu_present(cpu, 0); + rcu_report_dead(cpu);
if (IS_ENABLED(CONFIG_HOTPLUG_CPU)) { update_cpu_boot_status(CPU_KILL_ME); diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index c8f62e2d02761..b4924fefe2745 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -4024,7 +4024,6 @@ void rcu_cpu_starting(unsigned int cpu) smp_mb(); /* Ensure RCU read-side usage follows above initialization. */ }
-#ifdef CONFIG_HOTPLUG_CPU /* * The outgoing function has no further need of RCU, so remove it from * the rcu_node tree's ->qsmaskinitnext bit masks. @@ -4064,6 +4063,7 @@ void rcu_report_dead(unsigned int cpu) per_cpu(rcu_cpu_started, cpu) = 0; }
+#ifdef CONFIG_HOTPLUG_CPU /* * The outgoing CPU has just passed through the dying-idle state, and we * are being invoked from the CPU that was IPIed to continue the offline
From: Boqun Feng boqun.feng@gmail.com
[ Upstream commit d61fc96a37603384cd531622c1e89de1096b5123 ]
Chris Wilson reported a problem spotted by check_chain_key(): a chain key got changed in validate_chain() because we modify the ->read in validate_chain() to skip checks for dependency adding, and ->read is taken into calculation for chain key since commit f611e8cf98ec ("lockdep: Take read/write status in consideration when generate chainkey").
Fix this by not modifying ->read in validate_chain(), based on two facts: a) since we now support recursive read lock detection, there is no need to skip checks for dependency adding for recursive readers; b) given a), there is only one case left (nest_lock) where we want to skip checks in validate_chain(), so we simply remove the modification of ->read and rely on the return value of check_deadlock() to skip the dependency adding.
Reported-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Boqun Feng boqun.feng@gmail.com Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Link: https://lkml.kernel.org/r/20201102053743.450459-1-boqun.feng@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/locking/lockdep.c | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index 3eb35ad1b5241..f3a4302a1251f 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -2421,7 +2421,9 @@ print_deadlock_bug(struct task_struct *curr, struct held_lock *prev, * (Note that this has to be done separately, because the graph cannot * detect such classes of deadlocks.) * - * Returns: 0 on deadlock detected, 1 on OK, 2 on recursive read + * Returns: 0 on deadlock detected, 1 on OK, 2 if another lock with the same + * lock class is held but nest_lock is also held, i.e. we rely on the + * nest_lock to avoid the deadlock. */ static int check_deadlock(struct task_struct *curr, struct held_lock *next) @@ -2444,7 +2446,7 @@ check_deadlock(struct task_struct *curr, struct held_lock *next) * lock class (i.e. read_lock(lock)+read_lock(lock)): */ if ((next->read == 2) && prev->read) - return 2; + continue;
/* * We're holding the nest_lock, which serializes this lock's @@ -3227,16 +3229,13 @@ static int validate_chain(struct task_struct *curr,
if (!ret) return 0; - /* - * Mark recursive read, as we jump over it when - * building dependencies (just like we jump over - * trylock entries): - */ - if (ret == 2) - hlock->read = 2; /* * Add dependency only if this lock is not the head - * of the chain, and if it's not a secondary read-lock: + * of the chain, and if the new lock introduces no more + * lock dependency (because we already hold a lock with the + * same lock class) nor deadlock (because the nest_lock + * serializes nesting locks), see the comments for + * check_deadlock(). */ if (!chain_head && ret != 2) { if (!check_prevs_add(curr, hlock))
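For context on the nest_lock case that check_deadlock() now signals with a return value of 2, a minimal illustration (the structure below is hypothetical; mutex_lock_nest_lock() is the real API): two locks of the same class may be taken as long as both are nested under an outer lock that serializes them, and after this patch that is the only case in which validate_chain() skips dependency adding.

struct two_buffers {                   /* hypothetical example structure */
        struct mutex a;
        struct mutex b;
};

static void lock_both(struct two_buffers *t, struct mutex *outer)
{
        mutex_lock(outer);
        mutex_lock_nest_lock(&t->a, outer);
        mutex_lock_nest_lock(&t->b, outer);    /* same class, serialized by 'outer' */
        /* ... */
        mutex_unlock(&t->b);
        mutex_unlock(&t->a);
        mutex_unlock(outer);
}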
Hi Sasha,
I don't think this commit should be picked by stable, since the problem it fixes is caused by commit f611e8cf98ec ("lockdep: Take read/write status in consideration when generate chainkey"), which just got merged in the merge window of 5.10. So 5.9 and 5.4 don't have the problem.
Regards, Boqun
On Wed, Nov 18, 2020 at 09:26:34AM +0800, Boqun Feng wrote:
Hi Sasha,
I don't think this commit should be picked by stable, since the problem it fixes is caused by commit f611e8cf98ec ("lockdep: Take read/write status in consideration when generate chainkey"), which just got merged in the merge window of 5.10. So 5.9 and 5.4 don't have the problem.
I'll drop it, thanks!
From: Richard Weinberger richard@nod.at
[ Upstream commit 9a5085b3fad5d5d6019a3d160cdd70357d35c8b1 ]
Commit b2b29d6d0119 ("mm: account PMD tables like PTE tables") uncovered a bug in uml: we forgot to call the destructor. While we are here, give x a sane name.
Reported-by: Anton Ivanov anton.ivanov@cambridgegreys.com Co-developed-by: Matthew Wilcox (Oracle) willy@infradead.org Signed-off-by: Richard Weinberger richard@nod.at Tested-by: Christopher Obbard chris.obbard@collabora.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/um/include/asm/pgalloc.h | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h index 5393e13e07e0a..2bbf28cf3aa92 100644 --- a/arch/um/include/asm/pgalloc.h +++ b/arch/um/include/asm/pgalloc.h @@ -33,7 +33,13 @@ do { \ } while (0)
#ifdef CONFIG_3_LEVEL_PGTABLES -#define __pmd_free_tlb(tlb,x, address) tlb_remove_page((tlb),virt_to_page(x)) + +#define __pmd_free_tlb(tlb, pmd, address) \ +do { \ + pgtable_pmd_page_dtor(virt_to_page(pmd)); \ + tlb_remove_page((tlb),virt_to_page(pmd)); \ +} while (0) \ + #endif
#endif
From: "Darrick J. Wong" darrick.wong@oracle.com
[ Upstream commit 22843291efc986ce7722610073fcf85a39b4cb13 ]
__sb_start_write has some weird looking lockdep code that claims to exist to handle nested freeze locking requests from xfs. The code as written seems broken -- if we think we hold a read lock on any of the higher freeze levels (e.g. we hold SB_FREEZE_WRITE and are trying to lock SB_FREEZE_PAGEFAULT), it converts a blocking lock attempt into a trylock.
However, it's not correct to downgrade a blocking lock attempt to a trylock unless the downgrading code or the callers are prepared to deal with that situation. Neither __sb_start_write nor its callers handle this at all. For example:
sb_start_pagefault ignores the return value completely, with the result that if xfs_filemap_fault loses a race with a different thread trying to fsfreeze, it will proceed without pagefault freeze protection (thereby breaking locking rules) and then unlocks the pagefault freeze lock that it doesn't own on its way out (thereby corrupting the lock state), which leads to a system hang shortly afterwards.
Normally, this won't happen because our ownership of a read lock on a higher freeze protection level blocks fsfreeze from grabbing a write lock on that higher level. *However*, if lockdep is offline, lock_is_held_type unconditionally returns 1, which means that percpu_rwsem_is_held returns 1, which means that __sb_start_write unconditionally converts blocking freeze lock attempts into trylocks, even when we *don't* hold anything that would block a fsfreeze.
Apparently this all held together until 5.10-rc1, when bugs in lockdep caused lockdep to shut itself off early in an fstests run, and once fstests gets to the "race writes with freezer" tests, kaboom. This might explain the long trail of vanishingly infrequent livelocks in fstests after lockdep goes offline that I've never been able to diagnose.
We could fix it by spinning on the trylock if wait==true, but AFAICT the locking works fine if lockdep is not built at all (and I didn't see any complaints running fstests overnight), so remove this snippet entirely.
NOTE: Commit f4b554af9931 in 2015 created the current weird logic (which used to exist in a different form in commit 5accdf82ba25c from 2012) in __sb_start_write. XFS solved this whole problem in the late 2.6 era by creating a variant of transactions (XFS_TRANS_NO_WRITECOUNT) that don't grab intwrite freeze protection, thus making lockdep's solution unnecessary. The commit claims that Dave Chinner explained that the trylock hack + comment could be removed, but nobody ever did.
Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Jan Kara jack@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- fs/super.c | 33 ++++----------------------------- 1 file changed, 4 insertions(+), 29 deletions(-)
diff --git a/fs/super.c b/fs/super.c index 904459b351199..3a0777612c49b 100644 --- a/fs/super.c +++ b/fs/super.c @@ -1645,36 +1645,11 @@ EXPORT_SYMBOL(__sb_end_write); */ int __sb_start_write(struct super_block *sb, int level, bool wait) { - bool force_trylock = false; - int ret = 1; + if (!wait) + return percpu_down_read_trylock(sb->s_writers.rw_sem + level-1);
-#ifdef CONFIG_LOCKDEP - /* - * We want lockdep to tell us about possible deadlocks with freezing - * but it's it bit tricky to properly instrument it. Getting a freeze - * protection works as getting a read lock but there are subtle - * problems. XFS for example gets freeze protection on internal level - * twice in some cases, which is OK only because we already hold a - * freeze protection also on higher level. Due to these cases we have - * to use wait == F (trylock mode) which must not fail. - */ - if (wait) { - int i; - - for (i = 0; i < level - 1; i++) - if (percpu_rwsem_is_held(sb->s_writers.rw_sem + i)) { - force_trylock = true; - break; - } - } -#endif - if (wait && !force_trylock) - percpu_down_read(sb->s_writers.rw_sem + level-1); - else - ret = percpu_down_read_trylock(sb->s_writers.rw_sem + level-1); - - WARN_ON(force_trylock && !ret); - return ret; + percpu_down_read(sb->s_writers.rw_sem + level-1); + return 1; } EXPORT_SYMBOL(__sb_start_write);
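To make the failure mode in the description concrete, here is a simplified sketch of the caller pattern that breaks when a blocking freeze-lock attempt is silently downgraded to a trylock (the function body and do_the_fault() are hypothetical; sb_start_pagefault()/sb_end_pagefault() are the real helpers and return void, so a failed trylock cannot even be reported to the caller):

static vm_fault_t my_filemap_fault(struct vm_fault *vmf)
{
        struct super_block *sb = file_inode(vmf->vma->vm_file)->i_sb;
        vm_fault_t ret;

        sb_start_pagefault(sb);        /* may silently fail if downgraded to trylock */
        ret = do_the_fault(vmf);       /* hypothetical: then runs without protection */
        sb_end_pagefault(sb);          /* and releases a lock we may not actually hold */

        return ret;
}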
From: Zhang Qilong zhangqilong3@huawei.com
[ Upstream commit bc923818b190c8b63c91a47702969c8053574f5b ]
In the fail path of gfs2_check_blk_type, forgetting to call gfs2_glock_dq_uninit will result in an rgd_gh reference leak.
Signed-off-by: Zhang Qilong zhangqilong3@huawei.com Signed-off-by: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/gfs2/rgrp.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c index 1bba5a9d45fa3..fc073cb729854 100644 --- a/fs/gfs2/rgrp.c +++ b/fs/gfs2/rgrp.c @@ -2530,13 +2530,13 @@ int gfs2_check_blk_type(struct gfs2_sbd *sdp, u64 no_addr, unsigned int type)
rbm.rgd = rgd; error = gfs2_rbm_from_block(&rbm, no_addr); - if (WARN_ON_ONCE(error)) - goto fail; - - if (gfs2_testbit(&rbm, false) != type) - error = -ESTALE; + if (!WARN_ON_ONCE(error)) { + if (gfs2_testbit(&rbm, false) != type) + error = -ESTALE; + }
gfs2_glock_dq_uninit(&rgd_gh); + fail: return error; }
From: Paul Barker pbarker@konsulko.com
[ Upstream commit fd8feec665fef840277515a5c2b9b7c3e3970fad ]
To convert the number of pulses counted into an RPM estimation, we need to divide by the width of our measurement interval instead of multiplying by it. If the width of the measurement interval is zero we don't update the RPM value to avoid dividing by zero.
We also don't need to do 64-bit division; with 32 bits we can handle a fan running at over 4 million RPM.
Signed-off-by: Paul Barker pbarker@konsulko.com Link: https://lore.kernel.org/r/20201111164643.7087-1-pbarker@konsulko.com Signed-off-by: Guenter Roeck linux@roeck-us.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hwmon/pwm-fan.c | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c index 17bb64299bfd8..3642086498d98 100644 --- a/drivers/hwmon/pwm-fan.c +++ b/drivers/hwmon/pwm-fan.c @@ -54,16 +54,18 @@ static irqreturn_t pulse_handler(int irq, void *dev_id) static void sample_timer(struct timer_list *t) { struct pwm_fan_ctx *ctx = from_timer(ctx, t, rpm_timer); + unsigned int delta = ktime_ms_delta(ktime_get(), ctx->sample_start); int pulses; - u64 tmp;
- pulses = atomic_read(&ctx->pulses); - atomic_sub(pulses, &ctx->pulses); - tmp = (u64)pulses * ktime_ms_delta(ktime_get(), ctx->sample_start) * 60; - do_div(tmp, ctx->pulses_per_revolution * 1000); - ctx->rpm = tmp; + if (delta) { + pulses = atomic_read(&ctx->pulses); + atomic_sub(pulses, &ctx->pulses); + ctx->rpm = (unsigned int)(pulses * 1000 * 60) / + (ctx->pulses_per_revolution * delta); + + ctx->sample_start = ktime_get(); + }
- ctx->sample_start = ktime_get(); mod_timer(&ctx->rpm_timer, jiffies + HZ); }
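The arithmetic above in one place, with example numbers (a standalone sketch, not the driver function):

#include <stdio.h>

/* rpm = pulses * 60 * 1000 / (pulses_per_revolution * delta_ms) */
static unsigned int rpm_from_pulses(unsigned int pulses,
                                    unsigned int pulses_per_revolution,
                                    unsigned int delta_ms)
{
        if (!delta_ms)
                return 0;      /* caller keeps the previous value; never divide by zero */

        return (pulses * 60U * 1000U) / (pulses_per_revolution * delta_ms);
}

int main(void)
{
        /* 100 pulses in a 1000 ms window, 2 pulses per revolution -> 3000 RPM */
        printf("%u\n", rpm_from_pulses(100, 2, 1000));

        /*
         * Overflow headroom: the 32-bit intermediate pulses * 60000 only wraps
         * above ~71000 pulses per second, i.e. more than 4,000,000 RPM at one
         * pulse per revolution, so 64-bit math is unnecessary.
         */
        return 0;
}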
From: Bob Peterson rpeterso@redhat.com
[ Upstream commit 4e79e3f08e576acd51dffb4520037188703238b3 ]
Patch b2a846dbef4e ("gfs2: Ignore journal log writes for jdata holes") tried (unsuccessfully) to fix a case in which writes were done to jdata blocks, the blocks were sent to the ail list, and then a punch_hole or truncate operation caused the blocks to be freed. In other words, the ail items are for jdata holes. Before b2a846dbef4e, the jdata hole caused function gfs2_block_map to return -EIO, which was eventually interpreted as an IO error to the journal, causing a withdraw.
This patch changes function gfs2_get_block_noalloc, which is only used for jdata writes, so it returns -ENODATA rather than -EIO, and when -ENODATA is returned to gfs2_ail1_start_one, the error is ignored. We can safely ignore it because gfs2_ail1_start_one is only called when the jdata pages have already been written and truncated, so the ail1 content no longer applies.
Signed-off-by: Bob Peterson rpeterso@redhat.com Signed-off-by: Andreas Gruenbacher agruenba@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/gfs2/aops.c | 2 +- fs/gfs2/log.c | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c index d4af283fc8886..317a47d49442b 100644 --- a/fs/gfs2/aops.c +++ b/fs/gfs2/aops.c @@ -77,7 +77,7 @@ static int gfs2_get_block_noalloc(struct inode *inode, sector_t lblock, if (error) return error; if (!buffer_mapped(bh_result)) - return -EIO; + return -ENODATA; return 0; }
diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c index 93032feb51599..1ceeec0ffb16c 100644 --- a/fs/gfs2/log.c +++ b/fs/gfs2/log.c @@ -132,6 +132,8 @@ __acquires(&sdp->sd_ail_lock) spin_unlock(&sdp->sd_ail_lock); ret = generic_writepages(mapping, wbc); spin_lock(&sdp->sd_ail_lock); + if (ret == -ENODATA) /* if a jdata write into a new hole */ + ret = 0; /* ignore it */ if (ret || wbc->nr_to_write <= 0) break; return -EBUSY;
From: Konrad Dybcio konrad.dybcio@somainline.org
[ Upstream commit 77473cffef21611b4423f613fe32836afb26405e ]
Add MIDR value for KRYO2XX gold (big) and silver (LITTLE) CPU cores which are used in Qualcomm Technologies, Inc. SoCs. This will be used to identify and apply errata which are applicable for these CPU cores.
Signed-off-by: Konrad Dybcio konrad.dybcio@somainline.org Link: https://lore.kernel.org/r/20201104232218.198800-2-konrad.dybcio@somainline.o... Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/include/asm/cputype.h | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h index 7219cddeba669..f516fe36de30a 100644 --- a/arch/arm64/include/asm/cputype.h +++ b/arch/arm64/include/asm/cputype.h @@ -85,6 +85,8 @@ #define QCOM_CPU_PART_FALKOR_V1 0x800 #define QCOM_CPU_PART_FALKOR 0xC00 #define QCOM_CPU_PART_KRYO 0x200 +#define QCOM_CPU_PART_KRYO_2XX_GOLD 0x800 +#define QCOM_CPU_PART_KRYO_2XX_SILVER 0x801 #define QCOM_CPU_PART_KRYO_3XX_SILVER 0x803 #define QCOM_CPU_PART_KRYO_4XX_GOLD 0x804 #define QCOM_CPU_PART_KRYO_4XX_SILVER 0x805 @@ -114,6 +116,8 @@ #define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1) #define MIDR_QCOM_FALKOR MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR) #define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO) +#define MIDR_QCOM_KRYO_2XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_GOLD) +#define MIDR_QCOM_KRYO_2XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_SILVER) #define MIDR_QCOM_KRYO_3XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_SILVER) #define MIDR_QCOM_KRYO_4XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_GOLD) #define MIDR_QCOM_KRYO_4XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_SILVER)
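For readers without the MIDR_EL1 layout memorized, a short sketch of what these MIDR_CPU_MODEL()-based matches check (field offsets per the Arm ARM; the helper below is only illustrative, not the kernel's matcher):

#include <stdint.h>

/*
 * MIDR_EL1: implementer [31:24], variant [23:20], architecture [19:16],
 * part number [15:4], revision [3:0].  MIDR_ALL_VERSIONS() matches on
 * implementer + part number only, ignoring variant/revision.
 */
static int midr_is_part(uint32_t midr, uint32_t implementer, uint32_t partnum)
{
        return ((midr >> 24) & 0xff) == implementer &&
               ((midr >> 4) & 0xfff) == partnum;
}

/* e.g. Kryo2XX Gold: implementer 0x51 ('Q', Qualcomm), part number 0x800 */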
From: Konrad Dybcio konrad.dybcio@somainline.org
[ Upstream commit e3dd11a9f2521cecbcf30c2fd17ecc5a445dfb94 ]
QCOM KRYO2XX gold (big) and silver (LITTLE) CPU cores are based on Cortex-A73 and Cortex-A53 respectively and are meltdown safe, hence add them to kpti_safe_list[].
Signed-off-by: Konrad Dybcio konrad.dybcio@somainline.org Link: https://lore.kernel.org/r/20201104232218.198800-3-konrad.dybcio@somainline.o... Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/kernel/cpufeature.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 6424584be01e6..9d0e4afdc8caa 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1333,6 +1333,8 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry, MIDR_ALL_VERSIONS(MIDR_CORTEX_A73), MIDR_ALL_VERSIONS(MIDR_HISI_TSV110), MIDR_ALL_VERSIONS(MIDR_NVIDIA_CARMEL), + MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_2XX_GOLD), + MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_2XX_SILVER), MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_3XX_SILVER), MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_SILVER), { /* sentinel */ }
From: Konrad Dybcio konrad.dybcio@somainline.org
[ Upstream commit 23c216416056148136bdaf0cdd18caf4904bb6e1 ]
QCOM KRYO2XX Silver cores are Cortex-A53 based and are susceptible to the 845719 erratum. Add them to the lookup list to apply the erratum.
Signed-off-by: Konrad Dybcio konrad.dybcio@somainline.org Link: https://lore.kernel.org/r/20201104232218.198800-5-konrad.dybcio@somainline.o... Signed-off-by: Will Deacon will@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- arch/arm64/kernel/cpu_errata.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index 966672b2213e1..533a957dd83ee 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -750,6 +750,8 @@ static const struct midr_range erratum_845719_list[] = { MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4), /* Brahma-B53 r0p[0] */ MIDR_REV(MIDR_BRAHMA_B53, 0, 0), + /* Kryo2XX Silver rAp4 */ + MIDR_REV(MIDR_QCOM_KRYO_2XX_SILVER, 0xa, 0x4), {}, }; #endif