From: Eric Biggers <ebiggers@google.com>
When the user-provided IV buffer is not aligned to the algorithm's alignmask, skcipher_walk_virt() allocates an aligned buffer and copies the IV into it. However, skcipher_walk_virt() can fail after that point, and in that case the aligned buffer is freed while walk->iv is left pointing to it.
This causes a use-after-free read in callers that read from walk->iv unconditionally, e.g. the LRW template. For example, this can be reproduced by trying to encrypt fewer than 16 bytes using "lrw(aes)".
Fix it by making skcipher_walk_first() reset walk->iv to walk->oiv if skcipher_walk_next() fails.
This addresses a problem similar to the one fixed by commit 2b4f27c36bcd ("crypto: skcipher - set walk.iv for zero-length inputs"), but that commit didn't consider the case where skcipher_walk_virt() fails after copying the IV.
This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation, when the extra self-tests were run on a KASAN-enabled kernel.
Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/skcipher.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index bcf13d95f54ac..0f639f018047d 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -426,19 +426,27 @@ static int skcipher_copy_iv(struct skcipher_walk *walk)
 
 static int skcipher_walk_first(struct skcipher_walk *walk)
 {
+	int err;
+
 	if (WARN_ON_ONCE(in_irq()))
 		return -EDEADLK;
 
 	walk->buffer = NULL;
 	if (unlikely(((unsigned long)walk->iv & walk->alignmask))) {
-		int err = skcipher_copy_iv(walk);
+		err = skcipher_copy_iv(walk);
 		if (err)
 			return err;
 	}
 
 	walk->page = NULL;
 
-	return skcipher_walk_next(walk);
+	err = skcipher_walk_next(walk);
+
+	/* On error, don't leave 'walk->iv' pointing to freed buffer. */
+	if (unlikely(err))
+		walk->iv = walk->oiv;
+
+	return err;
 }
 
 static int skcipher_walk_skcipher(struct skcipher_walk *walk,