Hi Andrew,
On Tue, 21 Feb 2023 17:03:13 +0800 Andrew Yang <andrew.yang@mediatek.com> wrote:
From: "andrew.yang" <andrew.yang@mediatek.com>
damon_get_page() always increases the page _refcount, and isolate_lru_page() increases it again when the page's LRU flag is set.

So when an unevictable page is isolated successfully, the _refcount ends up raised by two. The reference taken in isolate_lru_page() is dropped in putback_lru_page(), but the one taken in damon_get_page() is left behind, which pins the page.

Whatever the case, the reference taken in damon_get_page() should be dropped.
Thank you for finding this issue! I think the subject David suggested[1] is better, though.

I think we could add the Fixes: and Cc: tags below?
Fixes: 57223ac29584 ("mm/damon/paddr: support the pageout scheme")
Cc: stable@vger.kernel.org # 5.16.x
Signed-off-by: andrew.yang <andrew.yang@mediatek.com>
 mm/damon/paddr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index e1a4315c4be6..56d8abd08fb1 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -223,8 +223,8 @@ static unsigned long damon_pa_pageout(struct damon_region *r)
 			putback_lru_page(page);
 		} else {
 			list_add(&page->lru, &page_list);
-			put_page(page);
 		}
+		put_page(page);
It seems your patch is not based on the mm-unstable tree[2]. Could you please rebase it on that?

Also, let's remove the braces around the single statements[3].
[1] https://lore.kernel.org/damon/1b3e8e88-ed5c-7302-553f-4ddb3400d466@redhat.co...
[2] https://docs.kernel.org/next/mm/damon/maintainer-profile.html#scm-trees
[3] https://docs.kernel.org/process/coding-style.html?highlight=coding+style#pla...
Thanks,
SJ
 	}
 	applied = reclaim_pages(&page_list);
 	cond_resched();
--
2.18.0
On Tue, 2023-02-21 at 18:35 +0000, SeongJae Park wrote:
> [SeongJae Park's full review quoted above; snip]
Thanks for both of your suggestions; I will update the patch.