The mitigation for PBRSB includes adding LFENCE instructions to the
RSB filling sequence. However, RSB filling is also done on some older
CPUs that don't support the LFENCE instruction.
Define and use a BARRIER_NOSPEC macro which makes the LFENCE
conditional on X86_FEATURE_LFENCE_RDTSC, like the barrier_nospec()
macro defined for C code in <asm/barrier.h>.
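For reference, the new asm-side macro mirrors the existing C-side
definition (kernel-internal code, shown only as a sketch; neither line
is buildable standalone):

```c
/* C side, already in <asm/barrier.h>: */
#define barrier_nospec() alternative("", "lfence", X86_FEATURE_LFENCE_RDTSC)

/* asm side, added by this patch to <asm/nospec-branch.h>: */
#define BARRIER_NOSPEC ALTERNATIVE "", "lfence", X86_FEATURE_LFENCE_RDTSC
```

Both expand to nothing on CPUs without X86_FEATURE_LFENCE_RDTSC and to
an LFENCE otherwise, via runtime alternatives patching.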
Reported-by: Martin-Éric Racine <martin-eric.racine(a)iki.fi>
References: https://bugs.debian.org/1017425
Cc: stable(a)vger.kernel.org
Cc: regressions(a)lists.linux.dev
Cc: Daniel Sneddon <daniel.sneddon(a)linux.intel.com>
Cc: Pawan Gupta <pawan.kumar.gupta(a)linux.intel.com>
Fixes: 2b1299322016 ("x86/speculation: Add RSB VM Exit protections")
Fixes: ba6e31af2be9 ("x86/speculation: Add LFENCE to RSB fill sequence")
Signed-off-by: Ben Hutchings <benh(a)debian.org>
---
arch/x86/include/asm/nospec-branch.h | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index e64fd20778b6..b1029fd88474 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -34,6 +34,11 @@
#define RSB_CLEAR_LOOPS 32 /* To forcibly overwrite all entries */
+#ifdef __ASSEMBLY__
+
+/* Prevent speculative execution past this barrier. */
+#define BARRIER_NOSPEC ALTERNATIVE "", "lfence", X86_FEATURE_LFENCE_RDTSC
+
/*
* Google experimented with loop-unrolling and this turned out to be
* the optimal version - two calls, each with their own speculation
@@ -62,9 +67,7 @@
dec reg; \
jnz 771b; \
/* barrier for jnz misprediction */ \
- lfence;
-
-#ifdef __ASSEMBLY__
+ BARRIER_NOSPEC;
/*
* This should be used immediately before an indirect jump/call. It tells
@@ -138,7 +141,7 @@
int3
.Lunbalanced_ret_guard_\@:
add $(BITS_PER_LONG/8), %_ASM_SP
- lfence
+ BARRIER_NOSPEC
.endm
/*
On Mon, Aug 01, 2022 at 06:25:11PM +0000, Carlos Llamas wrote:
> A transaction of type BINDER_TYPE_WEAK_HANDLE can fail to increment the
> reference for a node. In this case, the target proc normally releases
> the failed reference upon close as expected. However, if the target is
> dying in parallel the call will race with binder_deferred_release(), so
> the target could have released all of its references by now leaving the
> cleanup of the new failed reference unhandled.
>
> The transaction then ends and the target proc gets released making the
> ref->proc now a dangling pointer. Later on, ref->node is closed and we
> attempt to take spin_lock(&ref->proc->inner_lock), which leads to the
> use-after-free bug reported below. Let's fix this by cleaning up the
> failed reference on the spot instead of relying on the target to do so.
>
> ==================================================================
> BUG: KASAN: use-after-free in _raw_spin_lock+0xa8/0x150
> Write of size 4 at addr ffff5ca207094238 by task kworker/1:0/590
>
> CPU: 1 PID: 590 Comm: kworker/1:0 Not tainted 5.19.0-rc8 #10
> Hardware name: linux,dummy-virt (DT)
> Workqueue: events binder_deferred_func
> Call trace:
> dump_backtrace.part.0+0x1d0/0x1e0
> show_stack+0x18/0x70
> dump_stack_lvl+0x68/0x84
> print_report+0x2e4/0x61c
> kasan_report+0xa4/0x110
> kasan_check_range+0xfc/0x1a4
> __kasan_check_write+0x3c/0x50
> _raw_spin_lock+0xa8/0x150
> binder_deferred_func+0x5e0/0x9b0
> process_one_work+0x38c/0x5f0
> worker_thread+0x9c/0x694
> kthread+0x188/0x190
> ret_from_fork+0x10/0x20
>
> Signed-off-by: Carlos Llamas <cmllamas(a)google.com>
> ---
> drivers/android/binder.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> index 362c0deb65f1..9d42afe60180 100644
> --- a/drivers/android/binder.c
> +++ b/drivers/android/binder.c
> @@ -1361,6 +1361,18 @@ static int binder_inc_ref_for_node(struct binder_proc *proc,
> }
> ret = binder_inc_ref_olocked(ref, strong, target_list);
> *rdata = ref->data;
> + if (ret && ref == new_ref) {
> + /*
> + * Cleanup the failed reference here as the target
> + * could now be dead and have already released its
> + * references by now. Calling on the new reference
> + * with strong=0 and a tmp_refs will not decrement
> + * the node. The new_ref gets kfree'd below.
> + */
> + binder_cleanup_ref_olocked(new_ref);
> + ref = NULL;
> + }
> +
> binder_proc_unlock(proc);
> if (new_ref && ref != new_ref)
> /*
> --
> 2.37.1.455.g008518b4e5-goog
>
Sorry, I forgot to CC stable. This patch should be applied to all stable
kernels from 4.14 onward.
Cc: stable(a)vger.kernel.org # 4.14+
From: Cameron Gutman <aicommander(a)gmail.com>
Suspending and resuming the system can sometimes cause the out
URB to get hung after a reset_resume. This causes LED setting
and force feedback to break on resume. To avoid this, just drop
the reset_resume callback so the USB core rebinds xpad to the
wireless pads on resume if a reset happened.
A nice side effect of this change is the LED ring on wireless
controllers is now set correctly on system resume.
Cc: stable(a)vger.kernel.org
Fixes: 4220f7db1e42 ("Input: xpad - workaround dead irq_out after suspend/ resume")
Signed-off-by: Cameron Gutman <aicommander(a)gmail.com>
Signed-off-by: Pavel Rojtberg <rojtberg(a)gmail.com>
---
drivers/input/joystick/xpad.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
index 629646b..4e01056 100644
--- a/drivers/input/joystick/xpad.c
+++ b/drivers/input/joystick/xpad.c
@@ -1991,7 +1991,6 @@ static struct usb_driver xpad_driver = {
.disconnect = xpad_disconnect,
.suspend = xpad_suspend,
.resume = xpad_resume,
- .reset_resume = xpad_resume,
.id_table = xpad_table,
};
--
2.34.1
Older CPUs that are beyond their servicing period are not listed in the
affected-processor list for the MMIO Stale Data vulnerabilities. These
CPUs currently report "Not affected" in sysfs, which may not be correct.
Add support for "Unknown" reporting for such CPUs. Mitigation is not
deployed when the status is "Unknown".
"CPU is beyond its Servicing period" means these CPUs are beyond their
Servicing [1] period and have reached End of Servicing Updates (ESU) [2].
[1] Servicing: The process of providing functional and security
updates to Intel processors or platforms, utilizing the Intel Platform
Update (IPU) process or other similar mechanisms.
[2] End of Servicing Updates (ESU): ESU is the date at which Intel
will no longer provide Servicing, such as through IPU or other similar
update processes. ESU dates will typically be aligned to end of
quarter.
Suggested-by: Andrew Cooper <andrew.cooper3(a)citrix.com>
Suggested-by: Tony Luck <tony.luck(a)intel.com>
Fixes: 8d50cdf8b834 ("x86/speculation/mmio: Add sysfs reporting for Processor MMIO Stale Data")
Cc: stable(a)vger.kernel.org
Signed-off-by: Pawan Gupta <pawan.kumar.gupta(a)linux.intel.com>
---
A CPU's vulnerability status is unknown if the hardware doesn't set the
immunity bits and the CPU is not in the known-affected list.
In order to report the unknown status, this patch sets the MMIO bug
for all Intel CPUs that don't have the hardware immunity bits set.
Based on the known-affected list of CPUs, mitigation selection then
either deploys the mitigation or sets the "Unknown" status, which is
ugly. I would appreciate suggestions to improve this.
Thanks,
Pawan
.../hw-vuln/processor_mmio_stale_data.rst | 3 +++
arch/x86/kernel/cpu/bugs.c | 11 +++++++-
arch/x86/kernel/cpu/common.c | 26 +++++++++++++------
arch/x86/kernel/cpu/cpu.h | 1 +
4 files changed, 32 insertions(+), 9 deletions(-)
diff --git a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
index 9393c50b5afc..55524e0798da 100644
--- a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+++ b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
@@ -230,6 +230,9 @@ The possible values in this file are:
* - 'Mitigation: Clear CPU buffers'
- The processor is vulnerable and the CPU buffer clearing mitigation is
enabled.
+ * - 'Unknown: CPU is beyond its Servicing period'
+ - The processor vulnerability status is unknown because it is
+ out of Servicing period. Mitigation is not attempted.
If the processor is vulnerable then the following information is appended to
the above information:
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0dd04713434b..dd6e78d370bc 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -416,6 +416,7 @@ enum mmio_mitigations {
MMIO_MITIGATION_OFF,
MMIO_MITIGATION_UCODE_NEEDED,
MMIO_MITIGATION_VERW,
+ MMIO_MITIGATION_UNKNOWN,
};
/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
@@ -426,12 +427,18 @@ static const char * const mmio_strings[] = {
[MMIO_MITIGATION_OFF] = "Vulnerable",
[MMIO_MITIGATION_UCODE_NEEDED] = "Vulnerable: Clear CPU buffers attempted, no microcode",
[MMIO_MITIGATION_VERW] = "Mitigation: Clear CPU buffers",
+ [MMIO_MITIGATION_UNKNOWN] = "Unknown: CPU is beyond its servicing period",
};
static void __init mmio_select_mitigation(void)
{
u64 ia32_cap;
+ if (mmio_stale_data_unknown()) {
+ mmio_mitigation = MMIO_MITIGATION_UNKNOWN;
+ return;
+ }
+
if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) ||
cpu_mitigations_off()) {
mmio_mitigation = MMIO_MITIGATION_OFF;
@@ -1638,6 +1645,7 @@ void cpu_bugs_smt_update(void)
pr_warn_once(MMIO_MSG_SMT);
break;
case MMIO_MITIGATION_OFF:
+ case MMIO_MITIGATION_UNKNOWN:
break;
}
@@ -2235,7 +2243,8 @@ static ssize_t tsx_async_abort_show_state(char *buf)
static ssize_t mmio_stale_data_show_state(char *buf)
{
- if (mmio_mitigation == MMIO_MITIGATION_OFF)
+ if (mmio_mitigation == MMIO_MITIGATION_OFF ||
+ mmio_mitigation == MMIO_MITIGATION_UNKNOWN)
return sysfs_emit(buf, "%s\n", mmio_strings[mmio_mitigation]);
if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 736262a76a12..82088410870e 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1286,6 +1286,22 @@ static bool arch_cap_mmio_immune(u64 ia32_cap)
ia32_cap & ARCH_CAP_SBDR_SSDP_NO);
}
+bool __init mmio_stale_data_unknown(void)
+{
+ u64 ia32_cap = x86_read_arch_cap_msr();
+
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+ return false;
+ /*
+ * CPU vulnerability is unknown when, hardware doesn't set the
+ * immunity bits and CPU is not in the known affected list.
+ */
+ if (!cpu_matches(cpu_vuln_blacklist, MMIO) &&
+ !arch_cap_mmio_immune(ia32_cap))
+ return true;
+ return false;
+}
+
static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
{
u64 ia32_cap = x86_read_arch_cap_msr();
@@ -1349,14 +1365,8 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
cpu_matches(cpu_vuln_blacklist, SRBDS | MMIO_SBDS))
setup_force_cpu_bug(X86_BUG_SRBDS);
- /*
- * Processor MMIO Stale Data bug enumeration
- *
- * Affected CPU list is generally enough to enumerate the vulnerability,
- * but for virtualization case check for ARCH_CAP MSR bits also, VMM may
- * not want the guest to enumerate the bug.
- */
- if (cpu_matches(cpu_vuln_blacklist, MMIO) &&
+ /* Processor MMIO Stale Data bug enumeration */
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
!arch_cap_mmio_immune(ia32_cap))
setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA);
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 7c9b5893c30a..a2dbfc1bbc49 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -82,6 +82,7 @@ unsigned int aperfmperf_get_khz(int cpu);
extern void x86_spec_ctrl_setup_ap(void);
extern void update_srbds_msr(void);
+extern bool mmio_stale_data_unknown(void);
extern u64 x86_read_arch_cap_msr(void);
base-commit: 4a57a8400075bc5287c5c877702c68aeae2a033d
--
2.35.3
The vfio_ap_mdev_unlink_adapter and vfio_ap_mdev_unlink_domain functions
erroneously add the associated vfio_ap_queue objects to the hashtable
that links them to the matrix mdev to which their APQN is assigned. To
unlink them, the queues must instead be deleted from the hashtable;
otherwise they will continue to be reset whenever userspace closes the
mdev fd or removes the mdev.
This patch fixes that issue.
Cc: stable(a)vger.kernel.org
Fixes: 2838ba5bdcd6 ("s390/vfio-ap: reset queues after adapter/domain unassignment")
Reported-by: Tony Krowiak <akrowiak(a)linux.ibm.com>
Signed-off-by: Tony Krowiak <akrowiak(a)linux.ibm.com>
---
drivers/s390/crypto/vfio_ap_ops.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
index ee82207b4e60..2493926b5dfb 100644
--- a/drivers/s390/crypto/vfio_ap_ops.c
+++ b/drivers/s390/crypto/vfio_ap_ops.c
@@ -1049,8 +1049,7 @@ static void vfio_ap_mdev_unlink_adapter(struct ap_matrix_mdev *matrix_mdev,
if (q && qtable) {
if (test_bit_inv(apid, matrix_mdev->shadow_apcb.apm) &&
test_bit_inv(apqi, matrix_mdev->shadow_apcb.aqm))
- hash_add(qtable->queues, &q->mdev_qnode,
- q->apqn);
+ vfio_ap_unlink_queue_fr_mdev(q);
}
}
}
@@ -1236,8 +1235,7 @@ static void vfio_ap_mdev_unlink_domain(struct ap_matrix_mdev *matrix_mdev,
if (q && qtable) {
if (test_bit_inv(apid, matrix_mdev->shadow_apcb.apm) &&
test_bit_inv(apqi, matrix_mdev->shadow_apcb.aqm))
- hash_add(qtable->queues, &q->mdev_qnode,
- q->apqn);
+ vfio_ap_unlink_queue_fr_mdev(q);
}
}
}
--
2.31.1