Check the allocation size before toggling kfence_allocation_gate. This way allocations that can't be served by KFENCE will not result in waiting for another CONFIG_KFENCE_SAMPLE_INTERVAL without allocating anything.
Suggested-by: Marco Elver <elver@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: stable@vger.kernel.org # 5.12+
Signed-off-by: Alexander Potapenko <glider@google.com>
---
 mm/kfence/core.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 4d21ac44d5d35..33bb20d91bf6a 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -733,6 +733,13 @@ void kfence_shutdown_cache(struct kmem_cache *s)
 
 void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 {
+	/*
+	 * Perform size check before switching kfence_allocation_gate, so that
+	 * we don't disable KFENCE without making an allocation.
+	 */
+	if (size > PAGE_SIZE)
+		return NULL;
+
 	/*
 	 * allocation_gate only needs to become non-zero, so it doesn't make
 	 * sense to continue writing to it and pay the associated contention
@@ -757,9 +764,6 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 	if (!READ_ONCE(kfence_enabled))
 		return NULL;
 
-	if (size > PAGE_SIZE)
-		return NULL;
-
 	return kfence_guarded_alloc(s, size, flags);
 }
Allocation requests outside ZONE_NORMAL (MOVABLE, HIGHMEM or DMA) cannot be fulfilled by KFENCE, because the KFENCE memory pool is located in a zone different from the requested one.
Because callers of kmem_cache_alloc() may actually rely on the allocation to reside in the requested zone (e.g. memory allocations done with __GFP_DMA must be DMAable), skip all allocations done with GFP_ZONEMASK and/or respective SLAB flags (SLAB_CACHE_DMA and SLAB_CACHE_DMA32).
Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: stable@vger.kernel.org # 5.12+
Signed-off-by: Alexander Potapenko <glider@google.com>
---
v2:
 - added parentheses around the GFP clause, as requested by Marco
v3:
 - ignore GFP_ZONEMASK, which also covers __GFP_HIGHMEM and __GFP_MOVABLE
 - move the flag check to the beginning of the function, as requested by Souptick Joarder
---
 mm/kfence/core.c | 8 ++++++++
 1 file changed, 8 insertions(+)
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 33bb20d91bf6a..d51f77329fd3c 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -740,6 +740,14 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 	if (size > PAGE_SIZE)
 		return NULL;
 
+	/*
+	 * Skip allocations from non-default zones, including DMA. We cannot guarantee that pages
+	 * in the KFENCE pool will have the requested properties (e.g. reside in DMAable memory).
+	 */
+	if ((flags & GFP_ZONEMASK) ||
+	    (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32)))
+		return NULL;
+
 	/*
 	 * allocation_gate only needs to become non-zero, so it doesn't make
 	 * sense to continue writing to it and pay the associated contention
On Wed, 30 Jun 2021 at 15:53, Alexander Potapenko <glider@google.com> wrote:
> Allocation requests outside ZONE_NORMAL (MOVABLE, HIGHMEM or DNA) cannot
s/DNA/DMA/ ... but probably no need to do v4 just for this (everyone knows we're not yet in the business of allocating DNA ;-)).
> be fulfilled by KFENCE, because KFENCE memory pool is located in a zone
> different from the requested one.
>
> Because callers of kmem_cache_alloc() may actually rely on the allocation
> to reside in the requested zone (e.g. memory allocations done with
> __GFP_DMA must be DMAable), skip all allocations done with GFP_ZONEMASK
> and/or respective SLAB flags (SLAB_CACHE_DMA and SLAB_CACHE_DMA32).
>
> Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: Marco Elver <elver@google.com>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: Souptick Joarder <jrdr.linux@gmail.com>
> Cc: stable@vger.kernel.org # 5.12+
> Signed-off-by: Alexander Potapenko <glider@google.com>
With the change below, you can add:
Reviewed-by: Marco Elver <elver@google.com>
> ---
> v2:
>  - added parentheses around the GFP clause, as requested by Marco
> v3:
>  - ignore GFP_ZONEMASK, which also covers __GFP_HIGHMEM and __GFP_MOVABLE
>  - move the flag check at the beginning of the function, as requested by
>    Souptick Joarder
> ---
>  mm/kfence/core.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 33bb20d91bf6a..d51f77329fd3c 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -740,6 +740,14 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
>  	if (size > PAGE_SIZE)
>  		return NULL;
>
> +	/*
> +	 * Skip allocations from non-default zones, including DMA. We cannot guarantee that pages
> +	 * in the KFENCE pool will have the requested properties (e.g. reside in DMAable memory).
Comments should still be 80 cols, like the rest of the file. :-/
> +	 */
> +	if ((flags & GFP_ZONEMASK) ||
> +	    (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32)))
> +		return NULL;
> +
>  	/*
>  	 * allocation_gate only needs to become non-zero, so it doesn't make
>  	 * sense to continue writing to it and pay the associated contention
> --
> 2.32.0.93.g670b81a890-goog
On Wed, 30 Jun 2021 at 15:53, Alexander Potapenko <glider@google.com> wrote:
> Check the allocation size before toggling kfence_allocation_gate. This
> way allocations that can't be served by KFENCE will not result in
> waiting for another CONFIG_KFENCE_SAMPLE_INTERVAL without allocating
> anything.
>
> Suggested-by: Marco Elver <elver@google.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: Marco Elver <elver@google.com>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: stable@vger.kernel.org # 5.12+
> Signed-off-by: Alexander Potapenko <glider@google.com>
Reviewed-by: Marco Elver elver@google.com
> ---
>  mm/kfence/core.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 4d21ac44d5d35..33bb20d91bf6a 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -733,6 +733,13 @@ void kfence_shutdown_cache(struct kmem_cache *s)
>
>  void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
>  {
> +	/*
> +	 * Perform size check before switching kfence_allocation_gate, so that
> +	 * we don't disable KFENCE without making an allocation.
> +	 */
> +	if (size > PAGE_SIZE)
> +		return NULL;
> +
>  	/*
>  	 * allocation_gate only needs to become non-zero, so it doesn't make
>  	 * sense to continue writing to it and pay the associated contention
> @@ -757,9 +764,6 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
>  	if (!READ_ONCE(kfence_enabled))
>  		return NULL;
>
> -	if (size > PAGE_SIZE)
> -		return NULL;
> -
>  	return kfence_guarded_alloc(s, size, flags);
>  }
> --
> 2.32.0.93.g670b81a890-goog