On Tue Apr 8, 2025 at 6:50 PM UTC, Johannes Weiner wrote:
	/*
	 * Find the largest available free page in the other list. This roughly
	 * approximates finding the pageblock with the most free pages, which
	 * would be too costly to do exactly.
	 */
	for (current_order = MAX_PAGE_ORDER; current_order >= min_order; --current_order) {
IIUC we could go one step further here and also avoid repeating this iteration? Maybe something for a separate patch though?
That might be worth a test, but agree this should be a separate patch.
AFAICS, in the most common configurations MAX_PAGE_ORDER is only one step above pageblock_order or even the same. It might not be worth the complication.
Oh yeah, makes sense.
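For illustration, here is a standalone toy (not kernel code; the order values are assumed typical for x86-64 with 4 KiB base pages, where MAX_PAGE_ORDER is 10 and pageblock_order is 9) showing how few orders that search loop actually spans:

/*
 * Toy sketch, userspace C. With the assumed values the loop only
 * visits two orders before hitting pageblock_order, so caching its
 * result would buy little.
 */
#include <stdio.h>

#define MAX_PAGE_ORDER	10	/* assumed typical x86-64 value */
#define PAGEBLOCK_ORDER	9	/* assumed typical x86-64 value */

int main(void)
{
	int min_order = PAGEBLOCK_ORDER; /* claiming whole pageblocks */
	int current_order;
	int steps = 0;

	for (current_order = MAX_PAGE_ORDER; current_order >= min_order; --current_order)
		steps++;

	printf("search loop visits %d order(s)\n", steps); /* prints 2 */
	return 0;
}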
 	/*
-	 * Try the different freelists, native then foreign.
+	 * First try the freelists of the requested migratetype, then try
+	 * fallbacks. Roughly, each fallback stage poses more of a fragmentation
+	 * risk.
How about "then try fallback modes with increasing levels of fragmentation risk."
Yep, nice thanks.
	 * The fallback logic is expensive and rmqueue_bulk() calls in
	 * a loop with the zone->lock held, meaning the freelists are
@@ -2332,7 +2329,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 	case RMQUEUE_CLAIM:
 		page = __rmqueue_claim(zone, order, migratetype, alloc_flags);
 		if (page) {
-			/* Replenished native freelist, back to normal mode */
+			/* Replenished requested migratetype's freelist, back to normal mode */
 			*mode = RMQUEUE_NORMAL;
This line is kind of long now. How about:
/* Replenished preferred freelist, back to normal mode */
Yep, sounds good - it's still 81 characters, and the rest of this file sticks to 80 for comments. I guess I'll leave it to Andrew to decide if that's an issue?
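For reference, here is a minimal stubbed sketch of the mode-caching pattern being discussed (userspace C, not the kernel implementation - it is simplified, e.g. the real code has more stages, and the stub functions try_normal/try_claim/try_steal and their success/failure outcomes are invented):

/*
 * rmqueue_bulk() takes pages in a loop under zone->lock, so the
 * freelists cannot change underneath it; remembering in *mode which
 * stage last succeeded skips the expensive failed stages on the next
 * call.
 */
#include <stddef.h>
#include <stdio.h>

enum rmqueue_mode { RMQUEUE_NORMAL, RMQUEUE_CLAIM, RMQUEUE_STEAL };

/* Stubs standing in for the real allocation stages. */
static void *try_normal(void) { return NULL; }		/* preferred lists empty */
static void *try_claim(void)  { static int x; return &x; } /* claim succeeds */
static void *try_steal(void)  { static int y; return &y; }

static void *rmqueue_sketch(enum rmqueue_mode *mode)
{
	void *page;

	switch (*mode) {
	case RMQUEUE_NORMAL:
		page = try_normal();
		if (page)
			return page;
		/* fall through: escalate fragmentation risk */
	case RMQUEUE_CLAIM:
		page = try_claim();
		if (page) {
			/* Replenished preferred freelist, back to normal mode */
			*mode = RMQUEUE_NORMAL;
			return page;
		}
		/* fall through */
	case RMQUEUE_STEAL:
		page = try_steal();
		if (page)
			*mode = RMQUEUE_STEAL; /* remember for the next call */
		return page;
	}
	return NULL;
}

int main(void)
{
	enum rmqueue_mode mode = RMQUEUE_NORMAL;

	rmqueue_sketch(&mode);
	printf("mode after call: %d\n", mode); /* 0: claim refilled the preferred list */
	return 0;
}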
But yeah, I like your proposed changes. Would you care to send a proper patch?
Sure, pasting below. Andrew, could you fold this in? Also, I haven't done this style of patch sending before, so please let me know if I'm doing something that makes your life difficult.
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Aside from the comment stuff fixed by the patch below:
Reviewed-by: Brendan Jackman <jackmanb@google.com>
---
From 8ff20dbb52770d082e182482d2b47e521de028d1 Mon Sep 17 00:00:00 2001
From: Brendan Jackman <jackmanb@google.com>
Date: Wed, 9 Apr 2025 17:22:14 +0000
Subject: [PATCH] page_alloc: speed up fallbacks in rmqueue_bulk() - comment
 updates
Tidy up some terminology and redistribute commentary.

Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
 mm/page_alloc.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dfb2b3f508af4..220bd0bcc38c3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2183,21 +2183,13 @@ try_to_claim_block(struct zone *zone, struct page *page,
 }
 
 /*
- * Try finding a free buddy page on the fallback list.
- *
- * This will attempt to claim a whole pageblock for the requested type
- * to ensure grouping of such requests in the future.
- *
- * If a whole block cannot be claimed, steal an individual page, regressing to
- * __rmqueue_smallest() logic to at least break up as little contiguity as
- * possible.
+ * Try to allocate from some fallback migratetype by claiming the entire block,
+ * i.e. converting it to the allocation's start migratetype.
  *
  * The use of signed ints for order and current_order is a deliberate
  * deviation from the rest of this file, to make the for loop
  * condition simpler.
  */
-
-/* Try to claim a whole foreign block, take a page, expand the remainder */
 static __always_inline struct page *
 __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
 						unsigned int alloc_flags)
@@ -2247,7 +2239,10 @@ __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
 	return NULL;
 }
 
-/* Try to steal a single page from a foreign block */
+/*
+ * Try to steal a single page from some fallback migratetype. Leave the rest of
+ * the block as its current migratetype, potentially causing fragmentation.
+ */
 static __always_inline struct page *
 __rmqueue_steal(struct zone *zone, int order, int start_migratetype)
 {
@@ -2307,7 +2302,8 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 	}
 
 	/*
-	 * Try the different freelists, native then foreign.
+	 * First try the freelists of the requested migratetype, then try
+	 * fallback modes with increasing levels of fragmentation risk.
 	 *
 	 * The fallback logic is expensive and rmqueue_bulk() calls in
 	 * a loop with the zone->lock held, meaning the freelists are
@@ -2332,7 +2328,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 	case RMQUEUE_CLAIM:
 		page = __rmqueue_claim(zone, order, migratetype, alloc_flags);
 		if (page) {
-			/* Replenished native freelist, back to normal mode */
+			/* Replenished preferred freelist, back to normal mode. */
 			*mode = RMQUEUE_NORMAL;
 			return page;
 		}
base-commit: aa42382db4e2a4ed1f4ba97ffc50e2ce45accb0c
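As an aside, the claim/steal distinction in the comments above can be summed up with a toy model (standalone userspace C; all names, types, and sizes are invented, not kernel code): claiming converts the whole pageblock so future allocations of that migratetype group together, while stealing takes one page and leaves the block's type alone, risking long-term fragmentation.

#include <stdio.h>

enum migratetype { MT_UNMOVABLE, MT_MOVABLE };

struct pageblock {
	enum migratetype mt;
	int free_pages;
};

/* claim: convert the whole block to the requested type, then allocate */
static int alloc_claim(struct pageblock *pb, enum migratetype want)
{
	pb->mt = want;		/* whole block changes ownership */
	return --pb->free_pages >= 0;
}

/* steal: allocate one page, block keeps its current type */
static int alloc_steal(struct pageblock *pb, enum migratetype want)
{
	(void)want;		/* block type is left untouched */
	return --pb->free_pages >= 0;
}

int main(void)
{
	struct pageblock a = { MT_MOVABLE, 512 }, b = { MT_MOVABLE, 512 };

	alloc_claim(&a, MT_UNMOVABLE);
	alloc_steal(&b, MT_UNMOVABLE);
	printf("claimed block type: %d, stolen-from block type: %d\n",
	       a.mt, b.mt);	/* 0 (converted) vs 1 (unchanged) */
	return 0;
}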