The patch below does not apply to the 5.4-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 5961060692f8b17cd2080620a3d27b95d2ae05ca Mon Sep 17 00:00:00 2001
From: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Date: Mon, 29 Nov 2021 17:32:12 +0800
Subject: [PATCH] net/tls: Fix authentication failure in CCM mode
When the TLS cipher suite uses CCM mode, including AES CCM and SM4 CCM, the first byte of the B0 block is the flags byte, and the real IV starts at the second byte. The XOR operation of the IV and rec_seq should skip this byte, that is, add the iv_offset.
Fixes: f295b3ae9f59 ("net/tls: Add support of AES128-CCM based ciphers")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Cc: Vakul Garg <vakul.garg@nxp.com>
Cc: stable@vger.kernel.org # v5.2+
Signed-off-by: David S. Miller <davem@davemloft.net>
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index d3e7ff90889e..dfe623a4e72f 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -521,7 +521,7 @@ static int tls_do_encryption(struct sock *sk,
 	memcpy(&rec->iv_data[iv_offset], tls_ctx->tx.iv,
 	       prot->iv_size + prot->salt_size);
 
-	xor_iv_with_seq(prot, rec->iv_data, tls_ctx->tx.rec_seq);
+	xor_iv_with_seq(prot, rec->iv_data + iv_offset, tls_ctx->tx.rec_seq);
 
 	sge->offset += prot->prepend_size;
 	sge->length -= prot->prepend_size;
@@ -1499,7 +1499,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
 	else
 		memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
 
-	xor_iv_with_seq(prot, iv, tls_ctx->rx.rec_seq);
+	xor_iv_with_seq(prot, iv + iv_offset, tls_ctx->rx.rec_seq);
 
 	/* Prepare AAD */
 	tls_make_aad(aad, rxm->full_len - prot->overhead_size +
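To make the off-by-one concrete, here is a small standalone sketch (not kernel code) of the nonce handling the patch corrects. It assumes the CCM layout described above: byte 0 holds the B0 flags (so iv_offset is 1), followed by a 4-byte salt and an 8-byte per-record IV, with the 8-byte record sequence XORed into the last 8 nonce bytes as in TLS 1.3. Names such as CCM_IV_OFFSET and the 0x02 flags value are illustrative only:

/* Standalone illustration of the nonce layout, not kernel code.
 * Assumed layout: [B0 flags][4-byte salt][8-byte per-record IV]. */
#include <stdio.h>

#define CCM_IV_OFFSET 1   /* one B0 flags byte precedes the real IV */
#define SALT_SIZE     4
#define IV_SIZE       8

/* TLS 1.3-style mixing: XOR the 8-byte record sequence into the
 * last 8 bytes of the nonce that starts at 'iv'. */
static void xor_iv_with_seq(unsigned char *iv, const unsigned char *seq)
{
	for (int i = 0; i < IV_SIZE; i++)
		iv[SALT_SIZE + i] ^= seq[i];
}

int main(void)
{
	unsigned char iv_data[CCM_IV_OFFSET + SALT_SIZE + IV_SIZE] = { 0x02 };
	unsigned char seq[IV_SIZE] = { 0, 0, 0, 0, 0, 0, 0, 1 };

	/* Buggy call: xor_iv_with_seq(iv_data, seq) would XOR bytes
	 * 4..11 of the buffer, i.e. the last salt byte plus only 7 of
	 * the 8 IV bytes, so the nonce no longer matches the peer's
	 * and the AEAD tag check fails. */

	/* Fixed call: skip the flags byte so the XOR hits the real IV. */
	xor_iv_with_seq(iv_data + CCM_IV_OFFSET, seq);

	for (unsigned i = 0; i < sizeof(iv_data); i++)
		printf("%02x ", iv_data[i]);
	printf("\n");
	return 0;
}

Built with any C99 compiler, the output shows the sequence landing on the last 8 bytes while byte 0 keeps its flags value, which is what the + iv_offset in both hunks guarantees.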
commit 5961060692f8b17cd2080620a3d27b95d2ae05ca upstream.
When the TLS cipher suite uses CCM mode, including AES CCM and SM4 CCM, the first byte of the B0 block is the flags byte, and the real IV starts at the second byte. The XOR operation of the IV and rec_seq should skip this byte, that is, add the iv_offset.
Fixes: f295b3ae9f59 ("net/tls: Add support of AES128-CCM based ciphers")
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Cc: Vakul Garg <vakul.garg@nxp.com>
Cc: stable@vger.kernel.org # v5.2+
Signed-off-by: David S. Miller <davem@davemloft.net>
---
 net/tls/tls_sw.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 02821b914054..1436a36c1934 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -512,7 +512,7 @@ static int tls_do_encryption(struct sock *sk,
 	memcpy(&rec->iv_data[iv_offset], tls_ctx->tx.iv,
 	       prot->iv_size + prot->salt_size);
 
-	xor_iv_with_seq(prot->version, rec->iv_data, tls_ctx->tx.rec_seq);
+	xor_iv_with_seq(prot->version, rec->iv_data + iv_offset, tls_ctx->tx.rec_seq);
 
 	sge->offset += prot->prepend_size;
 	sge->length -= prot->prepend_size;
@@ -1483,7 +1483,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
 	else
 		memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
 
-	xor_iv_with_seq(prot->version, iv, tls_ctx->rx.rec_seq);
+	xor_iv_with_seq(prot->version, iv + iv_offset, tls_ctx->rx.rec_seq);
 
 	/* Prepare AAD */
 	tls_make_aad(aad, rxm->full_len - prot->overhead_size +
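For a 5.4-era backport, note that the helper still takes the TLS version rather than the tls_prot_info pointer, which is the only difference the hunks above carry relative to the mainline commit. Below is a simplified sketch of that older variant, on the assumption that it behaves like the mainline helper (only TLS 1.3 mixes the sequence, into bytes 4..11 of the IV it is given) and using the TLS_1_3_VERSION value from the uapi header; this is not the verbatim stable-tree source:

#define TLS_1_3_VERSION 0x0304	/* assumed from include/uapi/linux/tls.h */

/* Simplified sketch of the older helper: it XORs relative to the
 * pointer it receives, so CCM call sites must pass the real IV
 * start (buffer + iv_offset), exactly as in the hunks above. */
static void xor_iv_with_seq(int version, char *iv, char *seq)
{
	int i;

	if (version == TLS_1_3_VERSION) {
		for (i = 0; i < 8; i++)
			iv[i + 4] ^= seq[i];
	}
}

The helper body itself needs no change in a backport; only the two call sites shown in the diff gain the + iv_offset.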