On Thu, Sep 13, 2018 at 11:56:11AM -0400, Jason Andryuk wrote:
From: Juergen Gross <jgross@suse.com>
commit b2d7a075a1ccef2fb321d595802190c8e9b39004 upstream.
Using only 32-bit writes for the pte will result in an intermediate L1TF-vulnerable PTE. When running as a Xen PV guest, this will at once switch the guest to shadow mode, resulting in a loss of performance.
Use arch_atomic64_xchg() instead, which performs the requested operation atomically on all 64 bits.
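For context (this is not part of the quoted commit message), here is a sketch of what the change amounts to in native_ptep_get_and_clear() in arch/x86/include/asm/pgtable-3level.h, reconstructed from the description above using the backport's atomic64_xchg() spelling; the exact upstream diff may differ in detail.

Before (two 32-bit accesses, leaving an intermediate L1TF-vulnerable PTE):

	static inline pte_t native_ptep_get_and_clear(pte_t *ptep)
	{
		pte_t res;

		/* xchg acts as a barrier before the setting of the high bits */
		res.pte_low = xchg(&ptep->pte_low, 0);
		res.pte_high = ptep->pte_high;
		/* window: present bit cleared, but PFN bits still live */
		ptep->pte_high = 0;

		return res;
	}

After (one 64-bit atomic exchange, so no intermediate state is ever visible):

	static inline pte_t native_ptep_get_and_clear(pte_t *ptep)
	{
		pte_t res;

		/* Clear the whole 64-bit PTE in a single atomic operation. */
		res.pte = (pteval_t)atomic64_xchg((atomic64_t *)ptep, 0);

		return res;
	}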
Some performance considerations according to:
https://software.intel.com/sites/default/files/managed/ad/dc/Intel-Xeon-Scal...
The main number should be the latency, as there is no tight loop around native_ptep_get_and_clear().
"lock cmpxchg8b" has a latency of 20 cycles, while "lock xchg" (with a memory operand) isn't mentioned in that document. "lock xadd" (with xadd having 3 cycles less latency than xchg) has a latency of 11, so we can assume a latency of 14 for "lock xchg".
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Atomic operations gained an arch_ prefix in commit 8bf705d130396e69c04cd8e6e010244ad2ce71f4, so s/arch_atomic64_xchg/atomic64_xchg/ for the backport.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Thanks for the fix. I've now queued it up everywhere and will push out -rc2 versions of this.
greg k-h