On 16 Feb 2024, at 5:06, Pankaj Raghav (Samsung) wrote:
Hi Zi Yan,
On Tue, Feb 13, 2024 at 04:55:13PM -0500, Zi Yan wrote:
From: Zi Yan <ziy@nvidia.com>
Hi all,
File folios support any order and multi-size THP is upstreamed[1], so both file and anonymous folios can be >0 order. Currently, split_huge_page() only splits a huge page to order-0 pages, but splitting to orders higher than 0 would make better use of large folios. In addition, Large Block Sizes (LBS) support in XFS would benefit from it[2]. This patchset adds support for splitting a large folio to any lower order and uses it during file folio truncate operations.
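To make the interface concrete, here is a minimal sketch of how a caller might use the split-to-order entry point this series adds. The split_folio_to_order() call is from the series; the wrapper function around it and its error handling are illustrative, not code from the patches:

#include <linux/huge_mm.h>
#include <linux/pagemap.h>

/*
 * Hedged sketch, not verbatim code from the series: split a large
 * folio down to a chosen lower order instead of all the way to
 * order 0. Assumes the usual split preconditions (the caller holds
 * a reference and the folio lock).
 */
static int shrink_folio_to_order(struct folio *folio, unsigned int new_order)
{
	if (!folio_test_large(folio))
		return 0;			/* nothing to split */
	if (new_order >= folio_order(folio))
		return -EINVAL;			/* can only split to a lower order */

	/*
	 * On success, @folio becomes the first of the resulting
	 * new_order folios and remains locked.
	 */
	return split_folio_to_order(folio, new_order);
}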
I applied your patches on top of mine, but removed patch 6 and added this instead:
diff --git a/mm/truncate.c b/mm/truncate.c
index 725b150e47ac..dd07e2e327a8 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -239,7 +239,8 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	folio_invalidate(folio, offset, length);
 	if (!folio_test_large(folio))
 		return true;
-	if (split_folio(folio) == 0)
+	if (split_folio_to_order(folio,
+				 mapping_min_folio_order(folio->mapping)) == 0)
 		return true;
 	if (folio_test_dirty(folio))
 		return false;
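For readers without the LBS series handy, the hunk above relies on mapping_min_folio_order() from that series. A sketch of what it does, as I understand it; the flag/mask names are assumptions, check the LBS patches for the real definition:

/*
 * Sketch, not verbatim LBS code: the minimum folio order a mapping
 * supports is encoded in mapping->flags, so truncate never splits
 * below the filesystem block size (e.g. order 4 for 64k blocks on
 * a 4k page size system).
 */
static inline unsigned int
mapping_min_folio_order(const struct address_space *mapping)
{
	return (mapping->flags & AS_FOLIO_ORDER_MIN_MASK) >> AS_FOLIO_ORDER_MIN;
}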
I ran the generic/476 fstest[1] with SOAK_DURATION set to 360 seconds. This test uses fsstress to do a lot of writes, truncate operations, etc. I ran this on XFS with a 64k block size on a 4k page size system.
I recorded the vm events for page splits, and this was the result I got:
Before your patches:

root@debian:~/xfstests# cat /proc/vmstat | grep split
thp_split_page 0
thp_split_page_failed 5819

After your patches:

root@debian:~/xfstests# cat /proc/vmstat | grep split
thp_split_page 5846
thp_split_page_failed 20
Your patch series definitely helps with splitting the folios while still maintaining the min_folio_order that LBS requires.
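For context on the counters: thp_split_page counts successful splits and thp_split_page_failed counts failed attempts, so the "before" numbers mean every split attempt failed, presumably because split_folio() targets order 0, below the minimum order an LBS mapping allows, while with the series nearly all attempts succeed. A condensed sketch of where the counters are bumped (based on mm/huge_memory.c; the helper name here is hypothetical):

/*
 * Condensed sketch of how the two vmstat counters relate to the
 * split path; not the exact upstream code.
 */
static int split_folio_counted(struct folio *folio)
{
	int ret = try_to_split_folio(folio);	/* hypothetical stand-in */

	if (!ret)
		count_vm_event(THP_SPLIT_PAGE);		/* thp_split_page */
	else
		count_vm_event(THP_SPLIT_PAGE_FAILED);	/* thp_split_page_failed */
	return ret;
}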
Sounds great! Thanks for testing.
We are still discussing how to quantify this benefit in terms of some metric with this support. If you have some ideas here, let me know.
From my understanding, the benefit comes from page cache folios staying larger with LBS (plus this patchset) after truncation. I assume any benchmark that measures read/write throughput after truncate operations would be helpful.
I will run the whole xfstests tonight to check for any regressions.
Can you use the updated patches from: https://github.com/x-y-z/linux-1gb-thp/tree/split_thp_to_any_order_v5-mm-eve... They contain changes and fixes based on the feedback on this version. I am planning to send this new version out soon.
--
Pankaj
[1] https://github.com/kdave/xfstests/blob/master/tests/generic/476
--
Best Regards,
Yan, Zi