6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Herbert Xu <herbert@gondor.apana.org.au>
commit d07f951903fa9922c375b8ab1ce81b18a0034e3b upstream.
When processing the last block, the s390 ctr code will always read a whole block, even if there isn't a whole block of data left. Fix this by using the actual length left and copying it into a buffer first for processing.
Fixes: 0200f3ecc196 ("crypto: s390 - add System z hardware support for CTR mode")
Cc: <stable@vger.kernel.org>
Reported-by: Guangwu Zhang <guazhang@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Harald Freudenberger <freude@de.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/s390/crypto/aes_s390.c  | 4 +++-
 arch/s390/crypto/paes_s390.c | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)
--- a/arch/s390/crypto/aes_s390.c
+++ b/arch/s390/crypto/aes_s390.c
@@ -601,7 +601,9 @@ static int ctr_aes_crypt(struct skcipher
 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
 	 */
 	if (nbytes) {
-		cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
+		memset(buf, 0, AES_BLOCK_SIZE);
+		memcpy(buf, walk.src.virt.addr, nbytes);
+		cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
 			    AES_BLOCK_SIZE, walk.iv);
 		memcpy(walk.dst.virt.addr, buf, nbytes);
 		crypto_inc(walk.iv, AES_BLOCK_SIZE);
--- a/arch/s390/crypto/paes_s390.c
+++ b/arch/s390/crypto/paes_s390.c
@@ -688,9 +688,11 @@ static int ctr_paes_crypt(struct skciphe
 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
 	 */
 	if (nbytes) {
+		memset(buf, 0, AES_BLOCK_SIZE);
+		memcpy(buf, walk.src.virt.addr, nbytes);
 		while (1) {
 			if (cpacf_kmctr(ctx->fc, &param, buf,
-					walk.src.virt.addr, AES_BLOCK_SIZE,
+					buf, AES_BLOCK_SIZE,
 					walk.iv) == AES_BLOCK_SIZE)
 				break;
 			if (__paes_convert_key(ctx))
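
[Editor's note, not part of the patch: a minimal user-space sketch of the partial-block pattern the fix applies. The remaining nbytes are copied into a zero-padded scratch block, the block primitive operates on that buffer instead of the caller's source, and only nbytes of output are copied back. xor_keystream_block() and the fixed keystream below are hypothetical stand-ins for cpacf_kmctr() and the CPACF-generated keystream.]

#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16

/* Hypothetical block primitive: always reads/writes exactly BLOCK_SIZE bytes. */
static void xor_keystream_block(unsigned char *dst, const unsigned char *src,
				const unsigned char keystream[BLOCK_SIZE])
{
	for (int i = 0; i < BLOCK_SIZE; i++)
		dst[i] = src[i] ^ keystream[i];
}

/* Handle a final partial block of nbytes (< BLOCK_SIZE) without reading past src. */
static void ctr_final_partial(unsigned char *dst, const unsigned char *src,
			      size_t nbytes,
			      const unsigned char keystream[BLOCK_SIZE])
{
	unsigned char buf[BLOCK_SIZE];

	memset(buf, 0, BLOCK_SIZE);		/* zero-pad the scratch block */
	memcpy(buf, src, nbytes);		/* copy only the bytes that exist */
	xor_keystream_block(buf, buf, keystream); /* operate on buf, not on src */
	memcpy(dst, buf, nbytes);		/* emit only nbytes of output */
}

int main(void)
{
	const unsigned char keystream[BLOCK_SIZE] = { 0xaa };	/* illustrative only */
	unsigned char src[5] = "tail";		/* 5 bytes, less than one block */
	unsigned char dst[5];

	ctr_final_partial(dst, src, sizeof(src), keystream);
	printf("first output byte: %02x\n", dst[0]);
	return 0;
}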