On Fri, 9 Oct 2020 at 01:34, Kees Cook <keescook@chromium.org> wrote:
It turns out that SLUB redzoning ("slub_debug=Z") checks from s->object_size rather than from s->inuse (which is normally bumped to make room for the freelist pointer), so a cache created with an object size less than 24 would have its freelist pointer written beyond s->object_size, causing the redzone to be corrupted by the freelist pointer (a hypothetical reproducer sketch follows the report below). This was very visible with "slub_debug=ZF":
BUG test (Tainted: G    B            ): Redzone overwritten

INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620

Redzone  (____ptrval____): bb bb bb bb bb bb bb bb    ........
Object   (____ptrval____): 00 00 00 00 00 f6 f4 a5    ........
Redzone  (____ptrval____): 40 1d e8 1a aa             @....
Padding  (____ptrval____): 00 00 00 00 00 00 00 00    ........
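For illustration, here is a minimal reproducer sketch, assuming a kernel booted with slub_debug=ZF; the module scaffolding, the cache name, and the 12-byte object size are hypothetical (not from the original report), but any size in the affected sub-24-byte range should behave the same way:

/*
 * Hypothetical reproducer sketch: with slub_debug=ZF, a cache whose
 * object_size is below 24 gets its middle-aligned freelist pointer
 * placed beyond s->object_size, so the first free writes the pointer
 * into the redzone and the next consistency check reports
 * "Redzone overwritten".
 */
#include <linux/module.h>
#include <linux/slab.h>

static struct kmem_cache *tiny_cache;

static int __init tiny_redzone_init(void)
{
	void *obj;

	/* 12-byte objects sit in the buggy range below 24 bytes. */
	tiny_cache = kmem_cache_create("tiny_redzone", 12, 0, 0, NULL);
	if (!tiny_cache)
		return -ENOMEM;

	obj = kmem_cache_alloc(tiny_cache, GFP_KERNEL);
	if (obj)
		kmem_cache_free(tiny_cache, obj); /* writes the freelist pointer */
	return 0;
}

static void __exit tiny_redzone_exit(void)
{
	kmem_cache_destroy(tiny_cache);
}

module_init(tiny_redzone_init);
module_exit(tiny_redzone_exit);
MODULE_LICENSE("GPL");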
Adjust the offset to stay within s->object_size.
(Note that there appear to be no such small-sized caches in the kernel currently.)
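To make the arithmetic concrete, here is a small userspace sketch of the offset math, assuming sizeof(void *) == 8 and a hypothetical 20-byte object; the ALIGN/ALIGN_DOWN macros are simplified copies of the kernel's, valid for power-of-two alignment:

/* Sketch: why ALIGN(freepointer_area / 2, ...) can land past
 * s->object_size while ALIGN_DOWN(s->object_size / 2, ...) cannot. */
#include <stdio.h>

#define ALIGN(x, a)      (((x) + (a) - 1) & ~((unsigned int)(a) - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((unsigned int)(a) - 1))

int main(void)
{
	unsigned int ptr_size = 8;     /* sizeof(void *) on 64-bit */
	unsigned int object_size = 20; /* example from the buggy sub-24 range */
	unsigned int freepointer_area = ALIGN(object_size, ptr_size); /* 24 */

	unsigned int old_off = ALIGN(freepointer_area / 2, ptr_size);   /* 16 */
	unsigned int new_off = ALIGN_DOWN(object_size / 2, ptr_size);   /*  8 */

	/* Old: pointer occupies bytes [16, 24), but the redzone starts
	 * at object_size == 20, so bytes 20..23 get trampled. */
	printf("old: offset %u, pointer ends at %u, object ends at %u\n",
	       old_off, old_off + ptr_size, object_size);
	/* New: pointer occupies bytes [8, 16), safely inside the object. */
	printf("new: offset %u, pointer ends at %u\n",
	       new_off, new_off + ptr_size);
	return 0;
}

Rounding down from half of s->object_size keeps offset + sizeof(void *) within the object whenever the object holds at least two pointers' worth of bytes; objects smaller than sizeof(void *) instead take the off-object freepointer path, as the patch's updated comment notes.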
Reported-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/linux-mm/20200807160627.GA1420741@elver.google.com/
Fixes: 89b83f282d8b ("slub: avoid redzone when choosing freepointer location")
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 mm/slub.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)
Tested-by: Marco Elver <elver@google.com>
Thank you!
diff --git a/mm/slub.c b/mm/slub.c
index 68c02b2eecd9..979f5da26992 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3641,7 +3641,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 {
 	slab_flags_t flags = s->flags;
 	unsigned int size = s->object_size;
-	unsigned int freepointer_area;
 	unsigned int order;
 
 	/*
@@ -3650,13 +3649,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	 * the possible location of the free pointer.
 	 */
 	size = ALIGN(size, sizeof(void *));
 
-	/*
-	 * This is the area of the object where a freepointer can be
-	 * safely written. If redzoning adds more to the inuse size, we
-	 * can't use that portion for writing the freepointer, so
-	 * s->offset must be limited within this for the general case.
-	 */
-	freepointer_area = size;
 #ifdef CONFIG_SLUB_DEBUG
 	/*
@@ -3682,7 +3674,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 
 	/*
 	 * With that we have determined the number of bytes in actual use
-	 * by the object. This is the potential offset to the free pointer.
+	 * by the object and redzoning.
 	 */
 	s->inuse = size;
 
@@ -3694,7 +3686,8 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	 * kmem_cache_free.
 	 *
 	 * This is the case if we do RCU, have a constructor or
-	 * destructor or are poisoning the objects.
+	 * destructor, are poisoning the objects, or are
+	 * redzoning an object smaller than sizeof(void *).
 	 *
 	 * The assumption that s->offset >= s->inuse means free
 	 * pointer is outside of the object is used in the
@@ -3703,13 +3696,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	 */
 		s->offset = size;
 		size += sizeof(void *);
-	} else if (freepointer_area > sizeof(void *)) {
+	} else {
 		/*
 		 * Store freelist pointer near middle of object to keep
 		 * it away from the edges of the object to avoid small
 		 * sized over/underflows from neighboring allocations.
 		 */
-		s->offset = ALIGN(freepointer_area / 2, sizeof(void *));
+		s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
 	}
 
 #ifdef CONFIG_SLUB_DEBUG
-- 
2.25.1