On 12.08.24 21:24, Suren Baghdasaryan wrote:
During CMA activation, pages in the CMA area are prepared and then freed without being allocated. This triggers warnings when the memory allocation debug config (CONFIG_MEM_ALLOC_PROFILING_DEBUG) is enabled. Fix this by marking these pages as not tagged before freeing them.
Fixes: d224eb0287fb ("codetag: debug: mark codetags for reserved pages as empty")
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: stable@vger.kernel.org # v6.10
---
Changes since v1 [1]:
- Added Fixes tag
- CC'ed stable
[1] https://lore.kernel.org/all/20240812184455.86580-1-surenb@google.com/
 mm/mm_init.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 75c3bd42799b..ec9324653ad9 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2245,6 +2245,16 @@ void __init init_cma_reserved_pageblock(struct page *page)
 	set_pageblock_migratetype(page, MIGRATE_CMA);
 	set_page_refcounted(page);
 
+	/* pages were reserved and not allocated */
+	if (mem_alloc_profiling_enabled()) {
+		union codetag_ref *ref = get_page_tag_ref(page);
+
+		if (ref) {
+			set_codetag_empty(ref);
+			put_page_tag_ref(ref);
+		}
+	}
Should we have a helper like clear_page_tag_ref() that wraps this?