On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb@kernel.org> wrote:
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system. However, as it rounds the end of each region downwards, we may end up freeing a page that is memblock_reserve()d, resulting in memory corruption. So align the end of the range to the next page instead.
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a391804c7ce3..d41781cb5496 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,7 +354,7 @@ static void __init free_highpages(void)
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE, &range_start,
 				&range_end, NULL) {
 		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
Apologies, this should be
-		unsigned long start = PHYS_PFN(range_start);
+		unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
 		unsigned long end = PHYS_PFN(range_end);
Strangely enough, the wrong version above also fixed the issue I was seeing, but it is start that needs rounding up, not end.
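To spell out the arithmetic: PHYS_PFN() truncates, i.e. it rounds a physical address down to its page frame number. So when a free range begins in the middle of a page, because a non-page-aligned memblock reservation ends there, rounding start down hands the page that still holds the tail of the reservation to the page allocator. Rounding end down, by contrast, merely leaves a partial trailing page unfreed, which is harmless. Below is a minimal user-space sketch of this; the kernel macros are re-created locally, and the 4 KiB page size and the addresses are made-up assumptions for illustration:

#include <stdio.h>

/* Local stand-ins for the kernel macros, assuming 4 KiB pages */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define PHYS_PFN(x)	((unsigned long)((x) >> PAGE_SHIFT))

int main(void)
{
	/* Hypothetical free range: a reservation ends at 0x1234, so the
	 * range starts mid-page; the range also ends mid-page, at 0x4800. */
	unsigned long range_start = 0x1234, range_end = 0x4800;

	/* Truncation yields start PFN 1, but page 1 still contains the
	 * reserved bytes 0x1000..0x1233 - freeing it corrupts memory. */
	printf("start truncated: pfn %lu\n", PHYS_PFN(range_start));

	/* Rounding up first yields start PFN 2, skipping the partial page. */
	printf("start aligned:   pfn %lu\n", PHYS_PFN(PAGE_ALIGN(range_start)));

	/* Truncating the end to PFN 4 just leaves the partial last page
	 * unfreed, which is the safe direction. */
	printf("end truncated:   pfn %lu\n", PHYS_PFN(range_end));
	return 0;
}

Built with any C compiler, this prints PFNs 1, 2 and 4 for the three cases, which is why only start needs the PAGE_ALIGN().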
 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
--
2.17.1