During CMA activation, pages in the CMA area are prepared and then freed without being allocated. This triggers warnings when the memory allocation debug config (CONFIG_MEM_ALLOC_PROFILING_DEBUG) is enabled. Fix this by marking these pages as not tagged before freeing them.
Fixes: d224eb0287fb ("codetag: debug: mark codetags for reserved pages as empty")
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: stable@vger.kernel.org # v6.10
---
changes since v1 [1]
- Added Fixes tag
- CC'ed stable
[1] https://lore.kernel.org/all/20240812184455.86580-1-surenb@google.com/
 mm/mm_init.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 75c3bd42799b..ec9324653ad9 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2245,6 +2245,16 @@ void __init init_cma_reserved_pageblock(struct page *page)

 	set_pageblock_migratetype(page, MIGRATE_CMA);
 	set_page_refcounted(page);
+
+	/* pages were reserved and not allocated */
+	if (mem_alloc_profiling_enabled()) {
+		union codetag_ref *ref = get_page_tag_ref(page);
+
+		if (ref) {
+			set_codetag_empty(ref);
+			put_page_tag_ref(ref);
+		}
+	}
 	__free_pages(page, pageblock_order);
 	adjust_managed_page_count(page, pageblock_nr_pages);
base-commit: d74da846046aeec9333e802f5918bd3261fb5509
On 12.08.24 21:24, Suren Baghdasaryan wrote:
> +	/* pages were reserved and not allocated */
> +	if (mem_alloc_profiling_enabled()) {
> +		union codetag_ref *ref = get_page_tag_ref(page);
> +
> +		if (ref) {
> +			set_codetag_empty(ref);
> +			put_page_tag_ref(ref);
> +		}
> +	}
Should we have a helper like clear_page_tag_ref() that wraps this?
On Tue, Aug 13, 2024 at 2:25 AM David Hildenbrand david@redhat.com wrote:
On 12.08.24 21:24, Suren Baghdasaryan wrote:
> Should we have a helper like clear_page_tag_ref() that wraps this?
With this one we have 3 instances of this sequence, so it makes sense to have a helper. I'm going to send a v3 with 2 patches - one introducing clear_page_tag_ref() and the next one adding this instance. Thanks for the suggestion, David!
> --
> Cheers,
>
> David / dhildenb
On Tue, Aug 13, 2024 at 7:27 AM Suren Baghdasaryan surenb@google.com wrote:
On Tue, Aug 13, 2024 at 2:25 AM David Hildenbrand david@redhat.com wrote:
On 12.08.24 21:24, Suren Baghdasaryan wrote:
During CMA activation, pages in CMA area are prepared and then freed without being allocated. This triggers warnings when memory allocation debug config (CONFIG_MEM_ALLOC_PROFILING_DEBUG) is enabled. Fix this by marking these pages not tagged before freeing them.
Fixes: d224eb0287fb ("codetag: debug: mark codetags for reserved pages as empty") Signed-off-by: Suren Baghdasaryan surenb@google.com Cc: stable@vger.kernel.org # v6.10
changes since v1 [1]
- Added Fixes tag
- CC'ed stable
[1] https://lore.kernel.org/all/20240812184455.86580-1-surenb@google.com/
mm/mm_init.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/mm/mm_init.c b/mm/mm_init.c index 75c3bd42799b..ec9324653ad9 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -2245,6 +2245,16 @@ void __init init_cma_reserved_pageblock(struct page *page)
set_pageblock_migratetype(page, MIGRATE_CMA); set_page_refcounted(page);
/* pages were reserved and not allocated */
if (mem_alloc_profiling_enabled()) {
union codetag_ref *ref = get_page_tag_ref(page);
if (ref) {
set_codetag_empty(ref);
put_page_tag_ref(ref);
}
}
> > Should we have a helper like clear_page_tag_ref() that wraps this?
>
> With this one we have 3 instances of this sequence, so it makes sense
> to have a helper. I'm going to send a v3 with 2 patches - one
> introducing clear_page_tag_ref() and the next one adding this
> instance. Thanks for the suggestion, David!
v3 posted at https://lore.kernel.org/all/20240813150758.855881-1-surenb@google.com/