On 16 Feb 2025, at 5:32, David Hildenbrand wrote:
On 11.02.25 16:50, Zi Yan wrote:
folio_split() splits a large folio in the same way as the buddy allocator splits a large free page for allocation. The purpose is to minimize the number of folios after the split. For example, if a user wants to free the 3rd subpage in an order-9 folio, folio_split() will split the order-9 folio as: O-0, O-0, O-0, O-0, O-2, O-3, O-4, O-5, O-6, O-7, O-8 if it is anon, since anon folios do not support order-1 yet.
|O-0|O-0|O-0|O-0| O-2 |...| O-7 | O-8 |
O-1, O-0, O-0, O-2, O-3, O-4, O-5, O-6, O-7, O-8 if it is pagecache.
| O-1 |O-0|O-0| O-2 |...| O-7 | O-8 |
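To make the layouts above concrete, here is a minimal userspace sketch (my illustration, not the kernel code) that computes the buddy-allocator-like sequence of after-split orders for a given folio order and target page index; the no_order_1 flag models the anon restriction:

/* split_orders.c -- illustration only, not kernel code */
#include <stdio.h>
#include <stdbool.h>

/*
 * Print the buddy-allocator-like sequence of after-split orders when
 * isolating the page at index 'target' out of a folio of order 'old_order'.
 * 'no_order_1' models the anon case, where order-1 folios are not supported.
 */
static void print_split_orders(unsigned int old_order, unsigned long target,
			       bool no_order_1)
{
	unsigned long start = 0, nr = 1UL << old_order;
	unsigned int order = old_order;

	while (order > 0) {
		unsigned int new_order = order - 1;
		unsigned long end = start + nr;
		unsigned long step, s;

		/* Anonymous folios skip order-1 and go straight to order-0. */
		if (no_order_1 && new_order == 1)
			new_order = 0;
		step = 1UL << new_order;

		/* Keep the chunk containing 'target'; emit all the others. */
		for (s = start; s < end; s += step) {
			if (target >= s && target < s + step)
				start = s;
			else
				printf("O-%u: pages [%lu, %lu)\n",
				       new_order, s, s + step);
		}
		nr = step;
		order = new_order;
	}
	printf("O-0: page [%lu, %lu)  <- target\n", start, start + 1);
}

int main(void)
{
	/* Free the 3rd subpage (index 2) of an order-9 folio. */
	printf("anon:\n");
	print_split_orders(9, 2, true);		/* 11 folios: O-0 x4, O-2..O-8 */
	printf("pagecache:\n");
	print_split_orders(9, 2, false);	/* 10 folios: O-1, O-0 x2, O-2..O-8 */
	return 0;
}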
It generates fewer folios (i.e., 11 or 10) than the existing page split approach, which splits the order-9 folio into 512 order-0 folios. It also reduces the number of new xa_nodes needed during a pagecache folio split from 8 to 1 (512 order-0 entries span 8 xarray nodes of 64 slots each, whereas the buddy-like split keeps most entries multi-index), potentially decreasing the folio split failure rate due to memory constraints.
folio_split() and the existing split_huge_page_to_list_to_order() share the folio unmapping and remapping code in __folio_split() and the common backend split code in __split_unmapped_folio(); the uniform_split variable distinguishes their operations.
uniform_split_supported() and non_uniform_split_supported() are added to factor out the check code; they will be used outside __folio_split() in the following commit.
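For contrast with the buddy-like (non-uniform) split sketched above, a uniform split produces 2^(old_order - new_order) folios, all at new_order, which is how the existing path turns an order-9 folio into 512 order-0 folios. A toy sketch of that case (again my illustration, not the kernel backend):

/* uniform_split.c -- illustration only, not kernel code */
#include <stdio.h>

/*
 * A uniform split of an order-'old_order' folio yields
 * 2^(old_order - new_order) folios of order 'new_order'.
 */
static void print_uniform_split(unsigned int old_order, unsigned int new_order)
{
	unsigned long i, nr = 1UL << (old_order - new_order);

	for (i = 0; i < nr; i++)
		printf("O-%u: pages [%lu, %lu)\n", new_order,
		       i << new_order, (i + 1) << new_order);
}

int main(void)
{
	print_uniform_split(9, 0);	/* 512 order-0 folios */
	return 0;
}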
Signed-off-by: Zi Yan <ziy@nvidia.com>
 mm/huge_memory.c | 137 ++++++++++++++++++++++++++++++++++-------------
 1 file changed, 100 insertions(+), 37 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 21ebe2dec5a4..400dfe8a6e60 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3853,12 +3853,68 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	return ret;
 }
 
+static bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
+		bool warns)
+{
+	/* order-1 is not supported for anonymous THP. */
+	if (folio_test_anon(folio) && new_order == 1) {
+		VM_WARN_ONCE(warns, "Cannot split to order-1 folio");
+		return false;
+	}
+
+	/*
+	 * No split if the file system does not support large folio.
+	 * Note that we might still have THPs in such mappings due to
+	 * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
+	 * does not actually support large folios properly.
+	 */
+	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+	    !mapping_large_folio_support(folio->mapping)) {
In this (and a similar case below), you need
	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !folio_test_anon(folio) &&
	    !mapping_large_folio_support(folio->mapping)) {
Otherwise mapping_large_folio_support() is unhappy: it is only meant for pagecache mappings and warns when handed an anonymous folio's mapping.
Thanks. The patch below should fix it.
I am going to send V8, since:

1. there have been 4 fixes so far for V7, so a new series would help people review;

2. based on the discussion with you in the THP cabal meeting, converting split_huge_page*() to use __folio_split() makes the current __folio_split() interface awkward. Two changes are needed:
   a) use an in-folio offset instead of a struct page, since even in truncate_inode_partial_folio() I needed to convert an in-folio offset to a struct page to use the current interface;
   b) split_huge_page*()'s caller might hold the page lock on a non-head page, so an additional keep_lock_at_in_folio_offset is needed to indicate which after-split folio should be kept locked after the split is done. A rough sketch of such an interface follows below.
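Purely to illustrate those two changes, here is a hypothetical prototype sketch; the parameter names (split_at, keep_lock_at) and ordering are my guesses, not the actual V8 code:

/* Hypothetical interface sketch only; struct folio and struct list_head
 * are the usual kernel types, left opaque here. */
struct folio;
struct list_head;

/*
 * a) 'split_at' is an in-folio offset instead of a struct page pointer;
 * b) 'keep_lock_at' is the in-folio offset whose after-split folio should
 *    stay locked for the caller (split_huge_page*() may hold the lock on a
 *    non-head page).
 */
int __folio_split(struct folio *folio, unsigned int new_order,
		  unsigned long split_at, unsigned long keep_lock_at,
		  struct list_head *list, bool uniform_split);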
From 8b2aa5432c8d726a1fb6ce74c971365650da9370 Mon Sep 17 00:00:00 2001
From: Zi Yan <ziy@nvidia.com>
Date: Sun, 16 Feb 2025 09:01:29 -0500
Subject: [PATCH] mm/huge_memory: check folio_test_anon() before
 mapping_large_folio_support()
Otherwise mapping_large_folio_support() complains.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/huge_memory.c | 48 ++++++++++++++++++++++++------------------------
 1 file changed, 24 insertions(+), 24 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 87cb62c81bf3..deb16fe662c4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3629,20 +3629,19 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
 		bool warns)
 {
-	/* order-1 is not supported for anonymous THP. */
-	if (folio_test_anon(folio) && new_order == 1) {
-		VM_WARN_ONCE(warns, "Cannot split to order-1 folio");
-		return false;
-	}
-
-	/*
-	 * No split if the file system does not support large folio.
-	 * Note that we might still have THPs in such mappings due to
-	 * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
-	 * does not actually support large folios properly.
-	 */
-	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+	if (folio_test_anon(folio)) {
+		/* order-1 is not supported for anonymous THP. */
+		VM_WARN_ONCE(warns && new_order == 1,
+			"Cannot split to order-1 folio");
+		return new_order != 1;
+	} else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
 	    !mapping_large_folio_support(folio->mapping)) {
+		/*
+		 * No split if the file system does not support large folio.
+		 * Note that we might still have THPs in such mappings due to
+		 * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
+		 * does not actually support large folios properly.
+		 */
 		VM_WARN_ONCE(warns,
 			"Cannot split file folio to non-0 order");
 		return false;
@@ -3662,24 +3661,25 @@ bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
 bool uniform_split_supported(struct folio *folio, unsigned int new_order,
 		bool warns)
 {
-	if (folio_test_anon(folio) && new_order == 1) {
-		VM_WARN_ONCE(warns, "Cannot split to order-1 folio");
-		return false;
-	}
-
-	if (new_order) {
+	if (folio_test_anon(folio)) {
+		VM_WARN_ONCE(warns && new_order == 1,
+			"Cannot split to order-1 folio");
+		return new_order != 1;
+	} else if (new_order) {
 		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
 		    !mapping_large_folio_support(folio->mapping)) {
 			VM_WARN_ONCE(warns,
 				"Cannot split file folio to non-0 order");
 			return false;
 		}
-		if (folio_test_swapcache(folio)) {
-			VM_WARN_ONCE(warns,
-				"Cannot split swapcache folio to non-0 order");
-			return false;
-		}
 	}
+
+	if (new_order && folio_test_swapcache(folio)) {
+		VM_WARN_ONCE(warns,
+			"Cannot split swapcache folio to non-0 order");
+		return false;
+	}
+
 	return true;
 }