A folio split clears PG_has_hwpoisoned, but the flag should be preserved in any after-split folio that contains a page with the PG_hwpoisoned flag when the folio is split to >0 order folios. Scan all pages in a to-be-split folio to determine which after-split folios need the flag.

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to avoid the scan and set it on all after-split folios, but the resulting false positives have an undesirable negative impact. To remove the false positives, callers of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() would need to do the scan themselves. That would be a hassle for current and future callers and more costly than doing the scan in the split code. More details are discussed in [1].

This issue can be exposed via:
1. splitting a has_hwpoisoned folio to >0 order from the debugfs interface;
2. truncating part of a has_hwpoisoned folio in truncate_inode_partial_folio().

Later accesses to a hwpoisoned page then become possible due to the missing has_hwpoisoned folio flag, which will lead to MCE errors.
Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985... [1]
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Cc: stable@vger.kernel.org
Signed-off-by: Zi Yan ziy@nvidia.com
---
From V3[1]:
1. Separated from the original series;
2. Added Fixes tag and cc'd stable;
3. Simplified page_range_has_hwpoisoned();
4. Renamed check_poisoned_pages to handle_hwpoison, made it const, and shortened the statement;
5. Removed the poisoned_new_folio variable and checked the condition directly.
[1] https://lore.kernel.org/all/20251022033531.389351-2-ziy@nvidia.com/
 mm/huge_memory.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fc65ec3393d2..5215bb6aecfc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3455,6 +3455,14 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 		caller_pins;
 }
 
+static bool page_range_has_hwpoisoned(struct page *page, long nr_pages)
+{
+	for (; nr_pages; page++, nr_pages--)
+		if (PageHWPoison(page))
+			return true;
+	return false;
+}
+
 /*
  * It splits @folio into @new_order folios and copies the @folio metadata to
  * all the resulting folios.
@@ -3462,17 +3470,24 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 static void __split_folio_to_order(struct folio *folio, int old_order,
 		int new_order)
 {
+	/* Scan poisoned pages when split a poisoned folio to large folios */
+	const bool handle_hwpoison = folio_test_has_hwpoisoned(folio) && new_order;
 	long new_nr_pages = 1 << new_order;
 	long nr_pages = 1 << old_order;
 	long i;
 
+	folio_clear_has_hwpoisoned(folio);
+
+	/* Check first new_nr_pages since the loop below skips them */
+	if (handle_hwpoison &&
+	    page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
+		folio_set_has_hwpoisoned(folio);
 	/*
 	 * Skip the first new_nr_pages, since the new folio from them have all
 	 * the flags from the original folio.
 	 */
 	for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
 		struct page *new_head = &folio->page + i;
-
 		/*
 		 * Careful: new_folio is not a "real" folio before we cleared PageTail.
 		 * Don't pass it around before clear_compound_head().
@@ -3514,6 +3529,10 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
 				 (1L << PG_dirty) |
 				 LRU_GEN_MASK | LRU_REFS_MASK));
 
+		if (handle_hwpoison &&
+		    page_range_has_hwpoisoned(new_head, new_nr_pages))
+			folio_set_has_hwpoisoned(new_folio);
+
 		new_folio->mapping = folio->mapping;
 		new_folio->index = folio->index + i;
 
@@ -3600,8 +3619,6 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	int start_order = uniform_split ? new_order : old_order - 1;
 	int split_order;
 
-	folio_clear_has_hwpoisoned(folio);
-
 	/*
 	 * split to new_order one order at a time. For uniform split,
 	 * folio is split to new_order directly.
On 23.10.25 05:05, Zi Yan wrote:
A folio split clears PG_has_hwpoisoned, but the flag should be preserved in any after-split folio that contains a page with the PG_hwpoisoned flag when the folio is split to >0 order folios. Scan all pages in a to-be-split folio to determine which after-split folios need the flag.

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to avoid the scan and set it on all after-split folios, but the resulting false positives have an undesirable negative impact. To remove the false positives, callers of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() would need to do the scan themselves. That would be a hassle for current and future callers and more costly than doing the scan in the split code. More details are discussed in [1].
This issue can be exposed via:
- splitting a has_hwpoisoned folio to >0 order from debugfs interface;
- truncating part of a has_hwpoisoned folio in truncate_inode_partial_folio().
Later accesses to a hwpoisoned page then become possible due to the missing has_hwpoisoned folio flag, which will lead to MCE errors.
Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985... [1]
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Cc: stable@vger.kernel.org
Signed-off-by: Zi Yan ziy@nvidia.com
Thanks!
Acked-by: David Hildenbrand david@redhat.com
On 10/23/25 05:05, Zi Yan wrote:
A folio split clears PG_has_hwpoisoned, but the flag should be preserved in any after-split folio that contains a page with the PG_hwpoisoned flag when the folio is split to >0 order folios. Scan all pages in a to-be-split folio to determine which after-split folios need the flag.

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to avoid the scan and set it on all after-split folios, but the resulting false positives have an undesirable negative impact. To remove the false positives, callers of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() would need to do the scan themselves. That would be a hassle for current and future callers and more costly than doing the scan in the split code. More details are discussed in [1].
This issue can be exposed via:
- splitting a has_hwpoisoned folio to >0 order from debugfs interface;
Is it easy to add a selftest in split_huge_page_test for this scenario?
- truncating part of a has_hwpoisoned folio in truncate_inode_partial_folio().
-- Pankaj
On 23 Oct 2025, at 7:10, Pankaj Raghav wrote:
On 10/23/25 05:05, Zi Yan wrote:
A folio split clears PG_has_hwpoisoned, but the flag should be preserved in any after-split folio that contains a page with the PG_hwpoisoned flag when the folio is split to >0 order folios. Scan all pages in a to-be-split folio to determine which after-split folios need the flag.

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to avoid the scan and set it on all after-split folios, but the resulting false positives have an undesirable negative impact. To remove the false positives, callers of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() would need to do the scan themselves. That would be a hassle for current and future callers and more costly than doing the scan in the split code. More details are discussed in [1].
This issue can be exposed via:
- splitting a has_hwpoisoned folio to >0 order from debugfs interface;
Is it easy to add a selftest in split_huge_page_test for this scenario?
Probably, but I prefer to do this in a separate memory failure test. I think the steps are:
0. set up a SIGBUS handler;
1. get an XFS image, like split_huge_page_test does;
2. set block size > page size;
3. fault in a large folio bigger than the block size;
4. madvise(MADV_HWPOISON);
5. catch SIGBUS, since the folio cannot be split to order-0, and check the corresponding folio's has_hwpoisoned flag.
I will put this on my TODO list.
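[Editor's note: a minimal user-space sketch of steps 0 and 3-5 above, for illustration only. The file path, mapping size, and poisoned offset are placeholders, and the large-block XFS setup from steps 1-2 is assumed to already exist; this is not the actual selftest.]

/*
 * Hypothetical sketch only, not the actual selftest. Assumes TEST_FILE lives
 * on an XFS image with block size > page size (steps 1-2, setup not shown),
 * so the poisoned large folio cannot be split down to order-0.
 * MADV_HWPOISON needs CAP_SYS_ADMIN and CONFIG_MEMORY_FAILURE.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define TEST_FILE "/mnt/xfs-lbs/hwpoison-test"	/* placeholder path */
#define MAP_LEN   (2UL << 20)			/* big enough for a large folio */

static sigjmp_buf sigbus_env;

/* Step 0: SIGBUS handler; BUS_MCEERR_* means a hwpoison-triggered fault */
static void sigbus_handler(int sig, siginfo_t *info, void *ctx)
{
	if (info->si_code == BUS_MCEERR_AR || info->si_code == BUS_MCEERR_AO)
		siglongjmp(sigbus_env, 1);
	_exit(EXIT_FAILURE);		/* unexpected SIGBUS */
}

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	struct sigaction act;
	volatile char *buf;
	int fd;

	memset(&act, 0, sizeof(act));
	act.sa_sigaction = sigbus_handler;
	act.sa_flags = SA_SIGINFO;
	sigaction(SIGBUS, &act, NULL);

	/* Steps 1-3: map the file on the large-block FS and fault it in */
	fd = open(TEST_FILE, O_RDWR);
	if (fd < 0) { perror("open"); return EXIT_FAILURE; }
	buf = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }
	memset((void *)buf, 0xaa, MAP_LEN);	/* fault in the large folio */

	/* Step 4: poison a single page inside the folio */
	if (madvise((void *)(buf + page_size), page_size, MADV_HWPOISON)) {
		perror("madvise(MADV_HWPOISON)");
		return EXIT_FAILURE;
	}

	/* Step 5: touching the poisoned page must raise SIGBUS */
	if (sigsetjmp(sigbus_env, 1) == 0) {
		(void)buf[page_size];
		fprintf(stderr, "no SIGBUS: has_hwpoisoned flag may be lost\n");
		return EXIT_FAILURE;
	}
	printf("got the expected SIGBUS on the poisoned page\n");
	return EXIT_SUCCESS;
}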
-- Best Regards, Yan, Zi
On Wed, Oct 22, 2025 at 8:05 PM Zi Yan ziy@nvidia.com wrote:
A folio split clears PG_has_hwpoisoned, but the flag should be preserved in any after-split folio that contains a page with the PG_hwpoisoned flag when the folio is split to >0 order folios. Scan all pages in a to-be-split folio to determine which after-split folios need the flag.

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to avoid the scan and set it on all after-split folios, but the resulting false positives have an undesirable negative impact. To remove the false positives, callers of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() would need to do the scan themselves. That would be a hassle for current and future callers and more costly than doing the scan in the split code. More details are discussed in [1].
This issue can be exposed via:
- splitting a has_hwpoisoned folio to >0 order from debugfs interface;
- truncating part of a has_hwpoisoned folio in truncate_inode_partial_folio().
Later accesses to a hwpoisoned page then become possible due to the missing has_hwpoisoned folio flag, which will lead to MCE errors.
Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985... [1]
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Cc: stable@vger.kernel.org
Signed-off-by: Zi Yan ziy@nvidia.com
Thanks for fixing this.

Reviewed-by: Yang Shi yang@os.amperecomputing.com
On 2025/10/23 11:05, Zi Yan wrote:
A folio split clears PG_has_hwpoisoned, but the flag should be preserved in any after-split folio that contains a page with the PG_hwpoisoned flag when the folio is split to >0 order folios. Scan all pages in a to-be-split folio to determine which after-split folios need the flag.

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to avoid the scan and set it on all after-split folios, but the resulting false positives have an undesirable negative impact. To remove the false positives, callers of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() would need to do the scan themselves. That would be a hassle for current and future callers and more costly than doing the scan in the split code. More details are discussed in [1].
This issue can be exposed via:
- splitting a has_hwpoisoned folio to >0 order from debugfs interface;
- truncating part of a has_hwpoisoned folio in truncate_inode_partial_folio().
Later accesses to a hwpoisoned page then become possible due to the missing has_hwpoisoned folio flag, which will lead to MCE errors.
Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985... [1]
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Cc: stable@vger.kernel.org
Signed-off-by: Zi Yan ziy@nvidia.com
LGTM.

Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com
On 2025/10/23 11:05, Zi Yan wrote:
A folio split clears PG_has_hwpoisoned, but the flag should be preserved in any after-split folio that contains a page with the PG_hwpoisoned flag when the folio is split to >0 order folios. Scan all pages in a to-be-split folio to determine which after-split folios need the flag.

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to avoid the scan and set it on all after-split folios, but the resulting false positives have an undesirable negative impact. To remove the false positives, callers of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() would need to do the scan themselves. That would be a hassle for current and future callers and more costly than doing the scan in the split code. More details are discussed in [1].
This issue can be exposed via:
- splitting a has_hwpoisoned folio to >0 order from debugfs interface;
- truncating part of a has_hwpoisoned folio in truncate_inode_partial_folio().
Later accesses to a hwpoisoned page then become possible due to the missing has_hwpoisoned folio flag, which will lead to MCE errors.
Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985... [1]
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Cc: stable@vger.kernel.org
Signed-off-by: Zi Yan ziy@nvidia.com
Thanks for your patch. LGTM.
Reviewed-by: Miaohe Lin linmiaohe@huawei.com
Thanks.
On 2025/10/23 11:05, Zi Yan wrote:
A folio split clears PG_has_hwpoisoned, but the flag should be preserved in any after-split folio that contains a page with the PG_hwpoisoned flag when the folio is split to >0 order folios. Scan all pages in a to-be-split folio to determine which after-split folios need the flag.

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to avoid the scan and set it on all after-split folios, but the resulting false positives have an undesirable negative impact. To remove the false positives, callers of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() would need to do the scan themselves. That would be a hassle for current and future callers and more costly than doing the scan in the split code. More details are discussed in [1].
This issue can be exposed via:
- splitting a has_hwpoisoned folio to >0 order from debugfs interface;
- truncating part of a has_hwpoisoned folio in truncate_inode_partial_folio().
Later accesses to a hwpoisoned page then become possible due to the missing has_hwpoisoned folio flag, which will lead to MCE errors.
Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985... [1]
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Cc: stable@vger.kernel.org
Signed-off-by: Zi Yan ziy@nvidia.com
Good spot! LGTM, feel free to add:
Reviewed-by: Lance Yang lance.yang@linux.dev
On Wed, Oct 22, 2025 at 11:05:21PM -0400, Zi Yan wrote:
A folio split clears PG_has_hwpoisoned, but the flag should be preserved in any after-split folio that contains a page with the PG_hwpoisoned flag when the folio is split to >0 order folios. Scan all pages in a to-be-split folio to determine which after-split folios need the flag.

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to avoid the scan and set it on all after-split folios, but the resulting false positives have an undesirable negative impact. To remove the false positives, callers of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() would need to do the scan themselves. That would be a hassle for current and future callers and more costly than doing the scan in the split code. More details are discussed in [1].
This issue can be exposed via:
- splitting a has_hwpoisoned folio to >0 order from debugfs interface;
- truncating part of a has_hwpoisoned folio in truncate_inode_partial_folio().
Later accesses to a hwpoisoned page then become possible due to the missing has_hwpoisoned folio flag, which will lead to MCE errors.
Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985... [1]
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Cc: stable@vger.kernel.org
Signed-off-by: Zi Yan ziy@nvidia.com
This seems reasonable to me and is a good spot (thanks!), so:
Reviewed-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com
From V3[1]:
- Separated from the original series;
- Added Fixes tag and cc'd stable;
- Simplified page_range_has_hwpoisoned();
- Renamed check_poisoned_pages to handle_hwpoison, made it const, and shortened the statement;
- Removed poisoned_new_folio variable and checked the condition directly.
[1] https://lore.kernel.org/all/20251022033531.389351-2-ziy@nvidia.com/
 mm/huge_memory.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fc65ec3393d2..5215bb6aecfc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3455,6 +3455,14 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 		caller_pins;
 }
 
+static bool page_range_has_hwpoisoned(struct page *page, long nr_pages)
+{
+	for (; nr_pages; page++, nr_pages--)
+		if (PageHWPoison(page))
+			return true;
+	return false;
+}
+
 /*
  * It splits @folio into @new_order folios and copies the @folio metadata to
  * all the resulting folios.
@@ -3462,17 +3470,24 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 static void __split_folio_to_order(struct folio *folio, int old_order,
 		int new_order)
 {
+	/* Scan poisoned pages when split a poisoned folio to large folios */
+	const bool handle_hwpoison = folio_test_has_hwpoisoned(folio) && new_order;
OK was going to mention has_hwpoisoned is FOLIO_SECOND_PAGE but looks like you already deal with that :)
 	long new_nr_pages = 1 << new_order;
 	long nr_pages = 1 << old_order;
 	long i;
 
+	folio_clear_has_hwpoisoned(folio);
+
OK so we start by clearing the HW poisoned flag for the folio as a whole, which amounts to &folio->page[1] (which must be a tail page of course as new_order tested above).
No other pages in the range should have this flag set, as it is a folio thing only.
But this, in practice, sets the has_hwpoisoned flag for the first split folio...
+	/* Check first new_nr_pages since the loop below skips them */
+	if (handle_hwpoison &&
+	    page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
+		folio_set_has_hwpoisoned(folio);
 	/*
 	 * Skip the first new_nr_pages, since the new folio from them have all
 	 * the flags from the original folio.
 	 */
 	for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
 		struct page *new_head = &folio->page + i;
-
NIT: Why are we removing this newline?
 		/*
 		 * Careful: new_folio is not a "real" folio before we cleared PageTail.
 		 * Don't pass it around before clear_compound_head().
@@ -3514,6 +3529,10 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
 				 (1L << PG_dirty) |
 				 LRU_GEN_MASK | LRU_REFS_MASK));
 
+		if (handle_hwpoison &&
+		    page_range_has_hwpoisoned(new_head, new_nr_pages))
+			folio_set_has_hwpoisoned(new_folio);
+
...Then, for each folio that will be split out, we check again and propagate the flag based on the pages in its range.
 		new_folio->mapping = folio->mapping;
 		new_folio->index = folio->index + i;
 
@@ -3600,8 +3619,6 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	int start_order = uniform_split ? new_order : old_order - 1;
 	int split_order;
 
-	folio_clear_has_hwpoisoned(folio);
-
 	/*
 	 * split to new_order one order at a time. For uniform split,
 	 * folio is split to new_order directly.
-- 2.51.0
On 24 Oct 2025, at 11:44, Lorenzo Stoakes wrote:
On Wed, Oct 22, 2025 at 11:05:21PM -0400, Zi Yan wrote:
@@ -3462,17 +3470,24 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
 static void __split_folio_to_order(struct folio *folio, int old_order,
 		int new_order)
 {
+	/* Scan poisoned pages when split a poisoned folio to large folios */
+	const bool handle_hwpoison = folio_test_has_hwpoisoned(folio) && new_order;
OK was going to mention has_hwpoisoned is FOLIO_SECOND_PAGE but looks like you already deal with that :)
Right. And has_hwpoisoned is only set for large folios.
 	long new_nr_pages = 1 << new_order;
 	long nr_pages = 1 << old_order;
 	long i;
 
+	folio_clear_has_hwpoisoned(folio);
+
OK so we start by clearing the HW poisoned flag for the folio as a whole, which amounts to &folio->page[1] (which must be a tail page of course as new_order tested above).
No other pages in the range should have this flag set, as it is a folio thing only.
But this, in practice, sets the has_hwpoisoned flag for the first split folio...
handle_hwpoison is only true when the after-split folios are large (new_order is not 0). All the code that sets has_hwpoisoned on a folio is guarded by handle_hwpoison.
+	/* Check first new_nr_pages since the loop below skips them */
+	if (handle_hwpoison &&
+	    page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
+		folio_set_has_hwpoisoned(folio);
 	/*
 	 * Skip the first new_nr_pages, since the new folio from them have all
 	 * the flags from the original folio.
 	 */
 	for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
 		struct page *new_head = &folio->page + i;
-
NIT: Why are we removing this newline?
It is a newline between two declarations.
 		/*
 		 * Careful: new_folio is not a "real" folio before we cleared PageTail.
 		 * Don't pass it around before clear_compound_head().
@@ -3514,6 +3529,10 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
 				 (1L << PG_dirty) |
 				 LRU_GEN_MASK | LRU_REFS_MASK));
 
+		if (handle_hwpoison &&
+		    page_range_has_hwpoisoned(new_head, new_nr_pages))
+			folio_set_has_hwpoisoned(new_folio);
+

...Then, for each folio that will be split out, we check again and propagate the flag based on the pages in its range.
Yes, but this loop only covers [new_nr_pages, nr_pages), so the code above is needed for [0, new_nr_pages). The loop is written this way to avoid redundant work (flag and compound-head setting) for the first [0, new_nr_pages) pages and the original folio, since those values are the same before and after the split.
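[Editor's note: a toy user-space illustration of the index math described above, not kernel code; the orders and the poisoned page index are made up, and range_has_poison() just stands in for scanning PageHWPoison() over a range.]

/* Toy model of the split index ranges discussed above; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the PageHWPoison() scan: page "poisoned_idx" is the bad one. */
static bool range_has_poison(long start, long n, long poisoned_idx)
{
	return poisoned_idx >= start && poisoned_idx < start + n;
}

int main(void)
{
	const int old_order = 4, new_order = 2;	/* example orders */
	const long nr_pages = 1L << old_order;		/* 16 pages */
	const long new_nr_pages = 1L << new_order;	/* 4 pages per new folio */
	const long poisoned_idx = 9;			/* pretend page 9 is hwpoisoned */
	long i;

	/* The original folio keeps pages [0, new_nr_pages); checked up front. */
	printf("folio @0 (pages 0-%ld): has_hwpoisoned=%d\n", new_nr_pages - 1,
	       range_has_poison(0, new_nr_pages, poisoned_idx));

	/* The loop only walks the remaining new folios in [new_nr_pages, nr_pages). */
	for (i = new_nr_pages; i < nr_pages; i += new_nr_pages)
		printf("folio @%ld (pages %ld-%ld): has_hwpoisoned=%d\n", i, i,
		       i + new_nr_pages - 1,
		       range_has_poison(i, new_nr_pages, poisoned_idx));
	return 0;
}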
 		new_folio->mapping = folio->mapping;
 		new_folio->index = folio->index + i;
 
@@ -3600,8 +3619,6 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	int start_order = uniform_split ? new_order : old_order - 1;
 	int split_order;
 
-	folio_clear_has_hwpoisoned(folio);
-
 	/*
 	 * split to new_order one order at a time. For uniform split,
 	 * folio is split to new_order directly.
-- 2.51.0
-- Best Regards, Yan, Zi