From: Peter Zijlstra <peterz@infradead.org>
commit ab3e9c0b75dcb13e9254ef68caa7f15bc66b6471 upstream.
CAT has happened, WBINVD is bad (even before CAT, blowing away the entire cache on a multi-core platform wasn't nice), try not to use it ever.
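After this change, the flush path in cpa_flush_array() boils down to the sketch below: an unconditional TLB flush on all CPUs, then a per-page CLFLUSH pass only when caching attributes actually changed. The CLFLUSH loop is paraphrased from the unchanged remainder of the function and is shown only for illustration; it is not part of this diff.

	/* Always flush the TLB; never fall back to WBINVD. */
	flush_tlb_all();

	/* Cache flush is only needed when caching attributes changed. */
	if (!cache)
		return;

	/* Flush the affected pages line by line instead of the whole cache. */
	for (i = 0; i < numpages; i++) {
		unsigned long addr;
		pte_t *pte;

		if (in_flags & CPA_PAGES_ARRAY)
			addr = (unsigned long)page_address(pages[i]);
		else
			addr = start[i];

		pte = lookup_address(addr, &level);

		/* Only flush mappings that are actually present. */
		if (pte && (pte_val(*pte) & _PAGE_PRESENT))
			clflush_cache_range((void *)addr, PAGE_SIZE);
	}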
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
Cc: Bin Yang <bin.yang@intel.com>
Cc: Mark Gross <mark.gross@intel.com>
Link: https://lkml.kernel.org/r/20180919085947.933674526@infradead.org
Cc: stable@vger.kernel.org # 4.19.x
Signed-off-by: Wen Yang <wenyang@linux.alibaba.com>
---
 arch/x86/mm/pageattr.c | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 101f3ad0d6ad..ab87da7a6043 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -239,26 +239,12 @@ static void cpa_flush_array(unsigned long *start, int numpages, int cache,
 			    int in_flags, struct page **pages)
 {
 	unsigned int i, level;
-#ifdef CONFIG_PREEMPT
-	/*
-	 * Avoid wbinvd() because it causes latencies on all CPUs,
-	 * regardless of any CPU isolation that may be in effect.
-	 *
-	 * This should be extended for CAT enabled systems independent of
-	 * PREEMPT because wbinvd() does not respect the CAT partitions and
-	 * this is exposed to unpriviledged users through the graphics
-	 * subsystem.
-	 */
-	unsigned long do_wbinvd = 0;
-#else
-	unsigned long do_wbinvd = cache && numpages >= 1024; /* 4M threshold */
-#endif
 
 	BUG_ON(irqs_disabled() && !early_boot_irqs_disabled);
 
-	on_each_cpu(__cpa_flush_all, (void *) do_wbinvd, 1);
+	flush_tlb_all();
 
-	if (!cache || do_wbinvd)
+	if (!cache)
 		return;
 
 	/*