On 2023/12/1 04:35, Johannes Weiner wrote:
On Thu, Nov 30, 2023 at 12:07:41PM -0800, Nhat Pham wrote:
On Thu, Nov 30, 2023 at 11:57 AM Matthew Wilcox <willy@infradead.org> wrote:
On Thu, Nov 30, 2023 at 11:40:18AM -0800, Nhat Pham wrote:
This patch changes the list_lru interface so that the caller must explicitly specify the NUMA node and memcg when adding and removing objects. The old list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and list_lru_del_obj(), respectively.
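For illustration, a rough before/after of a call site under this rename (a sketch based on the description above, not code taken from the patch; the entry, nid and memcg variables are placeholders):

	/* before: node and memcg are derived implicitly from the item */
	list_lru_add(lru, &entry->lru);

	/* after: the implicit behavior survives under a new name ... */
	list_lru_add_obj(lru, &entry->lru);

	/* ... while list_lru_add() now takes node and memcg explicitly */
	list_lru_add(lru, &entry->lru, nid, memcg);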
Wouldn't it be better to add list_lru_add_memcg() and list_lru_del_memcg() and have:
+bool list_lru_del(struct list_lru *lru, struct list_head *item)
+{
+	int nid = page_to_nid(virt_to_page(item));
+	struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+		mem_cgroup_from_slab_obj(item) : NULL;
+
+	return list_lru_del_memcg(lru, item, nid, memcg);
+}
Seems like _most_ callers will want the original versions and only a few will want the explicit memcg/nid versions. No?
I actually did something along those lines in earlier iterations of this patch series (albeit with poorer naming - __list_lru_add() instead of list_lru_add_memcg()). The consensus after some back and forth was that the original list_lru_add() was not a very good design (the better one was this new version that allows for explicit numa/memcg selection). So I agreed to fix it everywhere as a prep patch.
I don't have strong opinions here to be completely honest, but I do think this new API makes more sense (at the cost of quite a bit of elbow grease to fix every call site, plus some extra reviewing).
Maybe I can shed some light since I was pushing for doing it this way.
The quiet assumption that 'struct list_head *item' is (embedded in) a slab object that is also charged to a cgroup is a bit much, given that nothing in the name or documentation of the function points to that.
It bit us in the THP shrinker, where that list head is embedded in a tail page (virt_to_page() on it is fun to debug). And it caused some confusion in this case as well, where the zswap entry is a slab object but isn't charged to a cgroup (the entry descriptor is not attractive for cgroup accounting, only the backing memory it points to).
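To make the failure mode concrete, here is a simplified sketch of the lookup the old interface does behind the caller's back (illustrative only, not the actual kernel code):

	/* implicit derivation inside the old list_lru_add()/list_lru_del() */
	int nid = page_to_nid(virt_to_page(item));
	struct mem_cgroup *memcg = mem_cgroup_from_slab_obj(item);

	/*
	 * Both lines assume 'item' is embedded in a charged slab object.
	 * If it sits in a THP tail page, or in a slab object that was never
	 * charged (like the zswap entry descriptor), the derived node/memcg
	 * may not be what the caller expects.
	 */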
Hi,
I have a question, maybe I missed something since I haven't read all the earlier versions.
IIUC, the problem here is that the "zswap_entry" has a different memcg and node than the "page", so I wonder if we can just charge the "zswap_entry" to the same memcg as the "page".
Like we can do this when allocating the "zswap_entry":

	old_memcg = set_active_memcg(memcg);
	entry = kmem_cache_alloc_lru(zswap_entry_cache, lru, gfp);
	set_active_memcg(old_memcg);
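A slightly fuller sketch of what that could look like at the zswap allocation site (the memcg lookup, the lru variable and the error handling here are illustrative assumptions, not the actual zswap code):

	struct mem_cgroup *memcg, *old_memcg;

	/* charge the entry descriptor to the memcg owning the page (assumed lookup) */
	memcg = get_mem_cgroup_from_mm(current->mm);
	old_memcg = set_active_memcg(memcg);
	entry = kmem_cache_alloc_lru(zswap_entry_cache, lru, GFP_KERNEL);
	set_active_memcg(old_memcg);
	mem_cgroup_put(memcg);
	if (!entry)
		return NULL;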
The good points are:
1. "zswap_entry" is charged to the memcg of "page", which is more sensible?
2. We can reuse the kmem_cache_alloc_lru() interface, which makes code simpler since we don't need to manage list_lru_memcg by ourselves.
3. Maybe the new list_lru_add() and list_lru_del() are not needed anymore? Since the "zswap_entry" would be on the same memcg and node as the "page". But I don't know if the THP shrinker still needs them.
Thanks!
Yes, for most users - at least right now - the current assumption is accurate. The thinking was just that if we do have to differentiate callers now anyway, we might as well make the interface a bit more self-documenting and harder to misuse going forward, even if it's a bit more churn now.