On 13 Sep 2023, at 16:12, Zi Yan wrote:
From: Zi Yan <ziy@nvidia.com>
__flush_dcache_pages() is called during hugetlb migration via migrate_pages() -> migrate_hugetlbs() -> unmap_and_move_huge_page() -> move_to_new_folio() -> flush_dcache_folio(). With hugetlb and without sparsemem vmemmap, struct pages are not guaranteed to be contiguous beyond a section boundary, so bare pointer arithmetic on the head struct page can walk off the end of a section. Use nth_page() instead.
Fixes: 15fa3e8e3269 ("mips: implement the new page table range API")
Cc: stable@vger.kernel.org
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 arch/mips/mm/cache.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 02042100e267..7f830634dbe7 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -117,7 +117,7 @@ void __flush_dcache_pages(struct page *page, unsigned int nr)
 	 * get faulted into the tlb (and thus flushed) anyways.
 	 */
 	for (i = 0; i < nr; i++) {
-		addr = (unsigned long)kmap_local_page(page + i);
+		addr = (unsigned long)kmap_local_page(nth_page(page, i));
 		flush_data_cache_page(addr);
 		kunmap_local((void *)addr);
 	}
--
2.40.1
Without the fix, a wrong address might be used for the data cache page flush. No bug has been reported; the fix comes from code inspection.
--
Best Regards,
Yan, Zi