On 27 Feb 2025, at 10:14, Matthew Wilcox wrote:
On Thu, Feb 27, 2025 at 05:55:43AM +0000, Matthew Wilcox wrote:
On Wed, Feb 26, 2025 at 04:00:25PM -0500, Zi Yan wrote:
+static int __split_unmapped_folio(struct folio *folio, int new_order,
+		struct page *split_at, struct page *lock_at,
+		struct list_head *list, pgoff_t end,
+		struct xa_state *xas, struct address_space *mapping,
+		bool uniform_split)
+{
[...]
+		/* complete memcg works before add pages to LRU */
+		split_page_memcg(&folio->page, old_order, split_order);
+		split_page_owner(&folio->page, old_order, split_order);
+		pgalloc_tag_split(folio, old_order, split_order);
At least split_page_memcg() needs to become aware of 'uniform_split'.
	if (folio_memcg_kmem(folio))
		obj_cgroup_get_many(__folio_objcg(folio), old_nr / new_nr - 1);
If we're not doing uniform_split, that calculation should be old_order - new_order - 1
umm, old_order - new_order. Anyway, here's a patch I've done on top of your work, but it probably needs to be massaged slightly and placed before your work?
Wait. uniform_split is the existing approach of splitting one order-9 folio into 512 order-0 folios, so split_page_memcg() still works for it. For !uniform_split, split_page_memcg() is called multiple times, each time with old_order = new_order + 1, so what split_page_memcg() does is:

1. the two order-8 folios get their memcg, and the refcount is increased by 1;
2. one of the order-8s is split into two order-7s, each of which gets its memcg, and the refcount is increased by 1;
…
8. one of the order-1s is split into two order-0s, each of which gets its memcg, and the refcount is increased by 1.

At the end, the refcount has been increased by old_order - new_order, like you described above. Let me know if it makes sense to you.
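To make the arithmetic concrete, here is a quick standalone userspace sketch (just the counting, not kernel code) that walks the !uniform_split steps above, taking one extra reference per step, and compares the total against the closed forms for both split modes:

#include <assert.h>
#include <stdio.h>

int main(void)
{
	int old_order = 9, new_order = 0;
	int refs = 0;

	/* !uniform_split: each step halves one folio, creating one new
	 * folio and taking one extra memcg reference */
	for (int order = old_order; order > new_order; order--)
		refs++;

	assert(refs == old_order - new_order);	/* 9 extra refs for 9 -> 0 */

	/* uniform_split: one order-9 folio becomes 512 order-0 folios,
	 * so (1 << (old_order - new_order)) - 1 = 511 extra references */
	int uniform_refs = (1 << (old_order - new_order)) - 1;

	printf("!uniform_split takes %d refs, uniform_split takes %d refs\n",
	       refs, uniform_refs);
	return 0;
}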
From 190e13ed77e562eb59fa1fa4bfefdefe5d0416ed Mon Sep 17 00:00:00 2001
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Date: Mon, 28 Oct 2024 16:23:30 -0400
Subject: [PATCH] mm: Separate folio_split_memcg() from split_page_memcg()

Folios always use memcg_data to refer to the mem_cgroup while pages
allocated with GFP_ACCOUNT have a pointer to the obj_cgroup.  Since the
caller already knows what it has, split the function into two and then
we don't need to check.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memcontrol.h |  7 +++++++
 mm/huge_memory.c           |  6 ++++--
 mm/memcontrol.c            | 18 +++++++++++++++---
 3 files changed, 26 insertions(+), 5 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 57664e2a8fb7..155c3f81f4df 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1039,6 +1039,8 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
 }
 
 void split_page_memcg(struct page *head, int old_order, int new_order);
+void folio_split_memcg(struct folio *folio, unsigned old_order,
+		unsigned new_order, bool uniform_split);
 
 static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
 {
@@ -1463,6 +1465,11 @@ static inline void split_page_memcg(struct page *head, int old_order,
 		int new_order)
 {
 }
 
+static inline void folio_split_memcg(struct folio *folio, unsigned old_order,
+		unsigned new_order, bool uniform)
+{
+}
+
 static inline u64 cgroup_id_from_mm(struct mm_struct *mm)
 {
 	return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1e45064046a0..75fa9c9d9ec9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3401,6 +3401,9 @@ static void __split_folio_to_order(struct folio *folio, int new_order)
 			folio_set_young(new_folio);
 		if (folio_test_idle(folio))
 			folio_set_idle(new_folio);
+#ifdef CONFIG_MEMCG
+		new_folio->memcg_data = folio->memcg_data;
+#endif
 
 		folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
 	}
@@ -3529,8 +3532,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 			}
 		}
 
-		/* complete memcg works before add pages to LRU */
-		split_page_memcg(&folio->page, old_order, split_order);
+		folio_split_memcg(folio, old_order, split_order, uniform_split);
 		split_page_owner(&folio->page, old_order, split_order);
 		pgalloc_tag_split(folio, old_order, split_order);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 16f3bdbd37d8..c2d41e1337cb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3064,10 +3064,22 @@ void split_page_memcg(struct page *head, int old_order, int new_order)
 	for (i = new_nr; i < old_nr; i += new_nr)
 		folio_page(folio, i)->memcg_data = folio->memcg_data;
 
-	if (folio_memcg_kmem(folio))
-		obj_cgroup_get_many(__folio_objcg(folio), old_nr / new_nr - 1);
+	obj_cgroup_get_many(__folio_objcg(folio), old_nr / new_nr - 1);
+}
+
+void folio_split_memcg(struct folio *folio, unsigned old_order,
+		unsigned new_order, bool uniform_split)
+{
+	unsigned new_refs;
+
+	if (mem_cgroup_disabled() || !folio_memcg_charged(folio))
+		return;
+
+	if (uniform_split)
+		new_refs = (1 << (old_order - new_order)) - 1;
 	else
-		css_get_many(&folio_memcg(folio)->css, old_nr / new_nr - 1);
+		new_refs = old_order - new_order;
+	css_get_many(&__folio_memcg(folio)->css, new_refs);
 }
 
 unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
-- 
2.47.2
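As a sanity check on the new_refs computation in folio_split_memcg() above, this standalone snippet (again userspace, not kernel code) sums the old_nr / new_nr - 1 references that each per-step split_page_memcg() call would take in the !uniform_split loop and confirms it matches the old_order - new_order closed form for every order pair:

#include <assert.h>

int main(void)
{
	for (int old_order = 1; old_order <= 9; old_order++) {
		for (int new_order = 0; new_order < old_order; new_order++) {
			int stepwise = 0;

			/* one split_page_memcg() call per step, each with
			 * old = new + 1, so each takes 2/1 - 1 == 1 ref */
			for (int o = old_order; o > new_order; o--)
				stepwise += (1 << o) / (1 << (o - 1)) - 1;

			assert(stepwise == old_order - new_order);
		}
	}
	return 0;
}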
Best Regards,
Yan, Zi