On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
On Thu 21-01-21 09:55:00, Minchan Kim wrote:
Contiguous memory allocation can stall while waiting on page writeback and/or the page lock, which causes unpredictable delay. That is an unavoidable cost for a requestor of *big* contiguous memory, but it is too expensive for *small* contiguous memory (e.g., order-4), because the caller could instead retry the request in a different range that might contain easily migratable pages, without stalling.
This patch introduces __GFP_NORETRY as the compaction gfp_mask in alloc_contig_range so it fails fast, without blocking, when it encounters pages that would require waiting.
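To make that caller pattern concrete, a minimal sketch (the helper name and range selection are hypothetical, not part of this series) could look like:

#include <linux/gfp.h>
#include <linux/mmzone.h>

/*
 * Hypothetical caller sketch (not part of this series): try an order-4
 * range in failfast mode and, on failure, move on to the next candidate
 * range instead of stalling on locked or writeback pages.
 */
static struct page *alloc_small_contig(const unsigned long *start_pfns,
				       int nr_ranges, unsigned int order)
{
	int i;

	for (i = 0; i < nr_ranges; i++) {
		unsigned long start = start_pfns[i];
		unsigned long end = start + (1UL << order);

		if (!alloc_contig_range(start, end, MIGRATE_MOVABLE,
					GFP_KERNEL | __GFP_NORETRY))
			return pfn_to_page(start);
		/* Failed fast; retry in a different range. */
	}
	return NULL;
}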
I am not against controlling how hard this allocator tries via the gfp mask, but this changelog is rather void of any data and any user.
It is also rather dubious to have retries when the caller says not to retry.
Since max_tries is 1 and the check uses ++tries, it shouldn't retry.
Also, why didn't you consider the GFP_NOWAIT semantic for the non-blocking mode?
GFP_NOWAIT seems to be a low-level (specific) flag rather than the one I want to express. Even though I mentioned only page writeback/lock in the description, the goal is to avoid whatever costly operations we might find later, so for such "failfast" semantics I thought __GFP_NORETRY would be a good fit.
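For reference, the distinction boils down to the single mapping the patch below introduces; restated as a helper-style sketch (the helper name is illustrative only):

#include <linux/gfp.h>
#include <linux/migrate_mode.h>

/*
 * Illustration only: GFP_NOWAIT forbids direct reclaim/compaction
 * entirely, while __GFP_NORETRY still allows one attempt but skips the
 * costly blocking/retry paths. The patch maps the latter onto the async
 * migration mode.
 */
static enum migrate_mode contig_migrate_mode(gfp_t gfp_mask)
{
	return gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC;
}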
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b031a5ae0bd5..1cdc3ee0b22e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8491,12 +8491,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	unsigned int nr_reclaimed;
 	unsigned long pfn = start;
 	unsigned int tries = 0;
+	unsigned int max_tries = 5;
 	int ret = 0;
 	struct migration_target_control mtc = {
 		.nid = zone_to_nid(cc->zone),
 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
 	};
 
+	if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
+		max_tries = 1;
+
 	migrate_prep();
 
 	while (pfn < end || !list_empty(&cc->migratepages)) {
@@ -8513,7 +8517,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 				break;
 			}
 			tries = 0;
-		} else if (++tries == 5) {
+		} else if (++tries == max_tries) {
 			ret = ret < 0 ? ret : -EBUSY;
 			break;
 		}
@@ -8564,7 +8568,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 		.nr_migratepages = 0,
 		.order = -1,
 		.zone = page_zone(pfn_to_page(start)),
-		.mode = MIGRATE_SYNC,
+		.mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
 		.ignore_skip_hint = true,
 		.no_set_skip_hint = true,
 		.gfp_mask = current_gfp_context(gfp_mask),
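For context, MIGRATE_ASYNC gives the failfast behaviour because the migration core refuses to block in that mode. A much simplified sketch of the relevant checks (illustrative function name, heavily trimmed from the logic in mm/migrate.c) looks roughly like:

#include <linux/pagemap.h>
#include <linux/page-flags.h>
#include <linux/migrate_mode.h>

/* Simplified illustration only; the real logic lives in mm/migrate.c. */
static int try_migrate_one(struct page *page, enum migrate_mode mode)
{
	if (!trylock_page(page)) {
		/* Async mode never sleeps on the page lock. */
		if (mode == MIGRATE_ASYNC)
			return -EAGAIN;
		lock_page(page);
	}

	if (PageWriteback(page)) {
		/* Only the sync modes wait for writeback to finish. */
		if (mode != MIGRATE_SYNC) {
			unlock_page(page);
			return -EBUSY;
		}
		wait_on_page_writeback(page);
	}

	/* ... unmap, copy and remap elided ... */
	unlock_page(page);
	return 0;
}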
--
2.30.0.296.g2bfb1c46d8-goog
--
Michal Hocko
SUSE Labs