On Thu, Mar 18, 2021 at 05:41:51PM +0000, Ard Biesheuvel wrote:
From: Ard Biesheuvel <ardb@kernel.org>
Upstream commit 86ad60a65f29dd862a11c22bb4b5be28d6c5cef1
The XTS asm helper arrangement is a bit odd: the 8-way stride helper consists of back-to-back calls to the 4-way core transforms, which are called indirectly, based on a boolean that indicates whether we are performing encryption or decryption.
Given how costly indirect calls are on x86, let's switch to direct calls, and given how the 8-way stride doesn't really add anything substantial, use a 4-way stride instead, and make the asm core routine deal with any multiple of 4 blocks. Since 512 byte sectors or 4 KB blocks are the typical quantities XTS operates on, increase the stride exported to the glue helper to 512 bytes as well.
As a result, the number of indirect calls is reduced from 3 per 64 bytes of in/output to 1 per 512 bytes of in/output, which produces a 65% speedup when operating on 1 KB blocks (measured on an Intel(R) Core(TM) i7-8650U CPU).
Fixes: 9697fa39efd3f ("x86/retpoline/crypto: Convert crypto assembler indirect jumps")
Tested-by: Eric Biggers <ebiggers@google.com> # x86_64
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
[ardb: rebase onto stable/linux-5.4.y]
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Please apply on top of backports of
9c1e8836edbb crypto: x86 - Regularize glue function prototypes
032d049ea0f4 crypto: aesni - Use TEST %reg,%reg instead of CMP $0,%reg
Now queued up, thanks.
greg k-h