On Thu, Aug 28, 2025 at 12:01:38AM +0200, David Hildenbrand wrote:
> We want to get rid of nth_page(), and kfence init code is the last user.
>
> Unfortunately, we might actually walk a PFN range where the pages are
> not contiguous, because we might be allocating an area from memblock
> that could span memory sections in problematic kernel configs
> (SPARSEMEM without SPARSEMEM_VMEMMAP).
Sad.
> We could check whether the page range is contiguous using
> page_range_contiguous() and fail kfence init, or make kfence
> incompatible with these problematic kernel configs.
Sounds iffy though.
> Let's keep it simple and just use pfn_to_page(), iterating over PFNs.
Yes.
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Marco Elver <elver@google.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Stared at this and can't see anything wrong, so - LGTM and:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>  mm/kfence/core.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 0ed3be100963a..727c20c94ac59 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -594,15 +594,14 @@ static void rcu_guarded_free(struct rcu_head *h)
>   */
>  static unsigned long kfence_init_pool(void)
>  {
> -	unsigned long addr;
> -	struct page *pages;
> +	unsigned long addr, start_pfn;
>  	int i;
>
>  	if (!arch_kfence_init_pool())
>  		return (unsigned long)__kfence_pool;
>
>  	addr = (unsigned long)__kfence_pool;
> -	pages = virt_to_page(__kfence_pool);
> +	start_pfn = PHYS_PFN(virt_to_phys(__kfence_pool));
>
>  	/*
>  	 * Set up object pages: they must have PGTY_slab set to avoid freeing
> @@ -613,11 +612,12 @@ static unsigned long kfence_init_pool(void)
>  	 * enters __slab_free() slow-path.
>  	 */
>  	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> -		struct slab *slab = page_slab(nth_page(pages, i));
> +		struct slab *slab;
>
>  		if (!i || (i % 2))
>  			continue;
>
> +		slab = page_slab(pfn_to_page(start_pfn + i));
>  		__folio_set_slab(slab_folio(slab));
>  #ifdef CONFIG_MEMCG
>  		slab->obj_exts = (unsigned long)&kfence_metadata_init[i / 2 - 1].obj_exts |
> @@ -665,10 +665,12 @@ static unsigned long kfence_init_pool(void)
>
>  reset_slab:
>  	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> -		struct slab *slab = page_slab(nth_page(pages, i));
> +		struct slab *slab;
>
>  		if (!i || (i % 2))
>  			continue;
>
> +		slab = page_slab(pfn_to_page(start_pfn + i));
>  #ifdef CONFIG_MEMCG
>  		slab->obj_exts = 0;
>  #endif
> --
> 2.50.1