count_bitmap_extents() was removed in version 5.11, but
versions 5.6-5.10 contain a possible mistake: each region's
size is computed as (rs - re), start minus end, which wraps
around. The region size should be calculated by subtracting
the region start from the region end.
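A minimal userspace sketch of the arithmetic (rs, re and unit
stand in for the loop bounds and ctl->unit in
count_bitmap_extents(); the values are made up for illustration):

	#include <stdint.h>

	unsigned int rs = 3, re = 8;	/* region spans bits [3, 8) */
	uint64_t unit = 4096;		/* bytes represented per bit */
	uint64_t good = (uint64_t)(re - rs) * unit; /* 5 * 4096 = 20480 */
	uint64_t bad  = (uint64_t)(rs - re) * unit; /* rs - re wraps, ~1.7e13 */

Subtracting 'bad' from the remaining byte count throws off the
extent count that this function feeds to discard accounting.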
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Fixes: dfb79ddb130e ("btrfs: track discardable extents for async discard")
Signed-off-by: Anastasia Belova <abelova@astralinux.ru>
---
fs/btrfs/free-space-cache.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index 4989c60b1df9..a34e266a0969 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -1930,7 +1930,7 @@ static int count_bitmap_extents(struct btrfs_free_space_ctl *ctl,
bitmap_for_each_set_region(bitmap_info->bitmap, rs, re, 0,
BITS_PER_BITMAP) {
- bytes -= (rs - re) * ctl->unit;
+ bytes -= (re - rs) * ctl->unit;
count++;
if (!bytes)
--
2.30.2
From: Zi Yan <ziy@nvidia.com>
__flush_dcache_pages() is called during hugetlb migration via
migrate_pages() -> migrate_hugetlbs() -> unmap_and_move_huge_page()
-> move_to_new_folio() -> flush_dcache_folio(). With a hugetlb page
and SPARSEMEM without VMEMMAP, struct page is not guaranteed to be
contiguous beyond a section, so 'page + i' can land on the wrong
struct page. Use nth_page() instead.
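For reference, with SPARSEMEM and without SPARSEMEM_VMEMMAP,
nth_page() translates through the pfn instead of doing pointer
arithmetic, roughly:

	/*
	 * 'page + n' assumes one contiguous struct page array;
	 * nth_page() stays correct across section boundaries.
	 */
	#define nth_page(page, n) pfn_to_page(page_to_pfn((page)) + (n))

On configurations with a virtually contiguous memmap it reduces
to plain 'page + n', so other configurations pay nothing.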
Fixes: 15fa3e8e3269 ("mips: implement the new page table range API")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
arch/mips/mm/cache.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 02042100e267..7f830634dbe7 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -117,7 +117,7 @@ void __flush_dcache_pages(struct page *page, unsigned int nr)
* get faulted into the tlb (and thus flushed) anyways.
*/
for (i = 0; i < nr; i++) {
- addr = (unsigned long)kmap_local_page(page + i);
+ addr = (unsigned long)kmap_local_page(nth_page(page, i));
flush_data_cache_page(addr);
kunmap_local((void *)addr);
}
--
2.40.1
From: Zi Yan <ziy@nvidia.com>
When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Compute
the offset from pfns, which are always linear, to handle it
properly.
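The fixed line computes that offset in pfn space rather than by
pointer difference (a sketch; head, page and pfn are the local
variables in scan_movable_pages()):

	/*
	 * page - head: pointer difference across possibly separate
	 * per-section struct page arrays -- can be garbage.
	 * pfn - page_to_pfn(head): the true offset of pfn within
	 * the compound page, however the memmap is laid out.
	 */
	skip = compound_nr(head) - (pfn - page_to_pfn(head));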
Fixes: eeb0efd071d8 ("mm,memory_hotplug: fix scan_movable_pages() for gigantic hugepages")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
mm/memory_hotplug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 1b03f4ec6fd2..3b301c4023ff 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1689,7 +1689,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
*/
if (HPageMigratable(head))
goto found;
- skip = compound_nr(head) - (page - head);
+ skip = compound_nr(head) - (pfn - page_to_pfn(head));
pfn += skip - 1;
}
return -ENOENT;
--
2.40.1
From: Zi Yan <ziy@nvidia.com>
When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use
nth_page() to handle it properly.
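The sub-page index is the page-sized offset of address within the
huge page, and the walk to that sub-page has to go through
nth_page() (a sketch; h, address and page are as in
hugetlb_follow_page_mask()):

	/*
	 * E.g. for a 2MB huge page the low 21 bits of address are
	 * the byte offset; >> PAGE_SHIFT turns that into a page
	 * index, so offset 0xe000 selects sub-page 14.
	 */
	unsigned long idx = (address & ~huge_page_mask(h)) >> PAGE_SHIFT;
	page = nth_page(page, idx);	/* pfn-based, section-safe */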
Fixes: 57a196a58421 ("hugetlb: simplify hugetlb handling in follow_page_mask")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
mm/hugetlb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index af74e83d92aa..8e68e6c53e66 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6469,7 +6469,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
}
}
- page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+ page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
/*
* Note that page may be a sub-page, and with vmemmap
--
2.40.1