This is a note to let you know that I've just added the patch titled

    [PATCH -stable] arm: crypto: reduce priority of bit-sliced AES cipher

to the 4.9-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=su...

The filename of the patch is:
     arm-crypto-reduce-priority-of-bit-sliced-aes-cipher.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree, please let <stable@vger.kernel.org> know about it.
From ebiggers@google.com Sun Nov 19 11:20:08 2017
From: Eric Biggers <ebiggers@google.com>
Date: Fri, 17 Nov 2017 11:50:27 -0800
Subject: [PATCH -stable] arm: crypto: reduce priority of bit-sliced AES cipher
To: stable@vger.kernel.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>, linux-crypto@vger.kernel.org, Eric Biggers <ebiggers@google.com>
Message-ID: <20171117195027.88288-1-ebiggers@google.com>

From: Eric Biggers <ebiggers@google.com>
[ Not upstream because this is a minimal fix for a bug where arm32 kernels can use a much slower implementation of AES than is actually available, potentially forcing vendors to disable encryption on their devices.]
All the aes-bs (bit-sliced) and aes-ce (cryptographic extensions) algorithms had a priority of 300. This is undesirable because it means an aes-bs algorithm may be used when an aes-ce algorithm is available. The aes-ce algorithms have much better performance (up to 10x faster).
Fix it by decreasing the priority of the aes-bs algorithms to 250.
This was fixed upstream by commit cc477bf64573 ("crypto: arm/aes - replace bit-sliced OpenSSL NEON code"), but it was just a small part of a complete rewrite. This patch just fixes the priority bug for older kernels.
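For illustration only (not part of this patch, and the aes_prio_demo_* names below are made up), the effect of the priority change can be seen from how the crypto core resolves a generic algorithm name: a request for "cbc(aes)" is bound to the registered implementation with the highest ->cra_priority, so with cbc-aes-ce at 300 and cbc-aes-neonbs lowered to 250, the Crypto Extensions driver is preferred whenever it is available. A minimal sketch of such a check, assuming the 4.9 skcipher API:

// SPDX-License-Identifier: GPL-2.0
/*
 * Hypothetical demo module (not part of this patch): ask for the generic
 * "cbc(aes)" and log which driver the crypto core bound it to.  The core
 * picks the implementation with the highest ->cra_priority.
 */
#include <linux/module.h>
#include <linux/err.h>
#include <crypto/skcipher.h>

static int __init aes_prio_demo_init(void)
{
	struct crypto_skcipher *tfm;

	tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* Logs e.g. "cbc-aes-ce" or "cbc-aes-neonbs", depending on priority. */
	pr_info("cbc(aes) resolved to %s\n",
		crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm)));

	crypto_free_skcipher(tfm);
	return 0;
}

static void __exit aes_prio_demo_exit(void)
{
}

module_init(aes_prio_demo_init);
module_exit(aes_prio_demo_exit);
MODULE_LICENSE("GPL");

On an affected device with the CE driver loaded, such a check would be expected to report cbc-aes-neonbs before this patch and cbc-aes-ce after it.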
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/crypto/aesbs-glue.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--- a/arch/arm/crypto/aesbs-glue.c
+++ b/arch/arm/crypto/aesbs-glue.c
@@ -363,7 +363,7 @@ static struct crypto_alg aesbs_algs[] =
 }, {
 	.cra_name		= "cbc(aes)",
 	.cra_driver_name	= "cbc-aes-neonbs",
-	.cra_priority		= 300,
+	.cra_priority		= 250,
 	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC,
 	.cra_blocksize		= AES_BLOCK_SIZE,
 	.cra_ctxsize		= sizeof(struct async_helper_ctx),
@@ -383,7 +383,7 @@ static struct crypto_alg aesbs_algs[] =
 }, {
 	.cra_name		= "ctr(aes)",
 	.cra_driver_name	= "ctr-aes-neonbs",
-	.cra_priority		= 300,
+	.cra_priority		= 250,
 	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC,
 	.cra_blocksize		= 1,
 	.cra_ctxsize		= sizeof(struct async_helper_ctx),
@@ -403,7 +403,7 @@ static struct crypto_alg aesbs_algs[] =
 }, {
 	.cra_name		= "xts(aes)",
 	.cra_driver_name	= "xts-aes-neonbs",
-	.cra_priority		= 300,
+	.cra_priority		= 250,
 	.cra_flags		= CRYPTO_ALG_TYPE_ABLKCIPHER|CRYPTO_ALG_ASYNC,
 	.cra_blocksize		= AES_BLOCK_SIZE,
 	.cra_ctxsize		= sizeof(struct async_helper_ctx),
Patches currently in stable-queue which might be from ebiggers@google.com are
queue-4.9/arm-crypto-reduce-priority-of-bit-sliced-aes-cipher.patch