6.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Mel Gorman <mgorman@techsingularity.net>
[ Upstream commit c3e58a70425ac6ddaae1529c8146e88b4f7252bb ]
Patch series "Leave IRQs enabled for per-cpu page allocations", v3.
This patch (of 2):
free_unref_page_list() has neglected to remove pages properly from the list of pages to free since forever. It works by coincidence because list_add happened to do the right thing adding the pages to just the PCP lists. However, a later patch added pages to either the PCP list or the zone list but only properly deleted the page from the list in one path leading to list corruption and a subsequent failure. As a preparation patch, always delete the pages from one list properly before adding to another. On its own, this fixes nothing although it adds a fractional amount of overhead but is critical to the next patch.
Link: https://lkml.kernel.org/r/20221118101714.19590-1-mgorman@techsingularity.net
Link: https://lkml.kernel.org/r/20221118101714.19590-2-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reported-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Stable-dep-of: 7b086755fb8c ("mm: page_alloc: fix CMA and HIGHATOMIC landing on the wrong buddy list")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/page_alloc.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 69668817fed37..d94ac6d87bc97 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3547,6 +3547,8 @@ void free_unref_page_list(struct list_head *list)
 	list_for_each_entry_safe(page, next, list, lru) {
 		struct zone *zone = page_zone(page);
 
+		list_del(&page->lru);
+
 		/* Different zone, different pcp lock. */
 		if (zone != locked_zone) {
 			if (pcp)
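
For readers reviewing the backport, here is a minimal userspace sketch (editor's illustration, not kernel code; the list_head helpers only approximate include/linux/list.h) of the failure mode the changelog describes: calling list_add() on an entry that is still linked into another list leaves that other list pointing at a node whose links now belong to the new list.

/*
 * Editor's illustration, not kernel code: a userspace approximation of
 * the list_head primitives from include/linux/list.h, showing why
 * list_add() without a prior list_del() corrupts the source list.
 */
#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *head)
{
	head->next = head->prev = head;
}

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
	entry->next = entry->prev = NULL;	/* the kernel poisons these */
}

int main(void)
{
	struct list_head to_free, pcp, page_lru;

	INIT_LIST_HEAD(&to_free);
	INIT_LIST_HEAD(&pcp);
	list_add(&page_lru, &to_free);	/* entry sits on the free list */

	/*
	 * Buggy pattern: move the entry with list_add() alone. The old
	 * list still has to_free.next == &page_lru, but page_lru's
	 * links now point into pcp, so walking to_free strays into
	 * the pcp list.
	 */
	list_add(&page_lru, &pcp);
	printf("to_free.next still points at page_lru: %d\n",
	       to_free.next == &page_lru);

	/* What the patch enforces: unlink first, then add. */
	/* list_del(&page_lru); list_add(&page_lru, &pcp); */

	return 0;
}

Unlinking with list_del() before list_add(), as the hunk above does for page->lru, is what keeps both lists consistent once a later patch starts adding pages to either the PCP list or the zone list.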