free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system. However, as it rounds the end of each region downwards, we may end up freeing a page that is memblock_reserve()d, resulting in memory corruption. So align the end of the range to the next page instead.
Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a391804c7ce3..d41781cb5496 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,7 +354,7 @@ static void __init free_highpages(void)
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
 		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));

 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
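[Editor's note, for context only: the hunk above sits in a loop that walks every free memblock range and hands each resulting highmem PFN to the page allocator. The sketch below is a paraphrase of free_highpages() as of the v5.10-rc1 era arch/arm/mm/init.c and may differ from the exact source in minor details; it is only meant to show where the rounded start/end values end up being used.]

static void __init free_highpages(void)
{
	unsigned long max_low = max_low_pfn;
	phys_addr_t range_start, range_end;
	u64 i;

	/* set highmem page free */
	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&range_start, &range_end, NULL) {
		unsigned long start = PHYS_PFN(range_start);	/* rounds down */
		unsigned long end = PHYS_PFN(range_end);	/* rounds down */

		/* Ignore complete lowmem entries */
		if (end <= max_low)
			continue;

		/* Truncate partial highmem entries */
		if (start < max_low)
			start = max_low;

		/* every PFN in [start, end) is released to the allocator */
		for (; start < end; start++)
			free_highmem_page(pfn_to_page(start));
	}
}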
On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel ardb@kernel.org wrote:
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system. However, as it rounds the end of each region downwards, we may end up freeing a page that is memblock_reserve()d, resulting in memory corruption. So align the end of the range to the next page instead.
Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a391804c7ce3..d41781cb5496 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,7 +354,7 @@ static void __init free_highpages(void)
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
 		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
Apologies, this should be
-		unsigned long start = PHYS_PFN(range_start);
+		unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
 		unsigned long end = PHYS_PFN(range_end);
Strangely enough, the wrong version above also fixed the issue I was seeing, but it is start that needs rounding up, not end.
 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
-- 2.17.1
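[Editor's note: to make the rounding question concrete, here is a small standalone illustration -- plain userspace C, with 4 KiB pages and made-up addresses assumed, and the PFN_* macros re-derived locally rather than taken from kernel headers. When a memblock reservation ends in the middle of a page, the free range reported by for_each_free_mem_range() starts at that mid-page address; rounding the start down pulls the partially reserved page into the freed set, while rounding it up skips it, which is why it is the start that needs rounding up.]

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)		/* truncate, like PHYS_PFN() */
#define PFN_UP(x)	(((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)

int main(void)
{
	/* hypothetical layout: a reservation occupies 0x30000000-0x300007ff,
	 * so the free range starts at 0x30000800 */
	unsigned long range_start = 0x30000800UL;
	unsigned long range_end   = 0x34000000UL;

	printf("start rounded down: first freed PFN 0x%lx (page still partly reserved)\n",
	       PFN_DOWN(range_start));			/* 0x30000 */
	printf("start rounded up:   first freed PFN 0x%lx (fully inside the free range)\n",
	       PFN_UP(range_start));			/* 0x30001 */
	printf("end rounded down:   last freed PFN  0x%lx\n",
	       PFN_DOWN(range_end) - 1);		/* 0x33fff */
	return 0;
}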
On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel ardb@kernel.org wrote:
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system. However, as it rounds the end of each region downwards, we may end up freeing a page that is memblock_reserve()d, resulting in memory corruption. So align the end of the range to the next page instead.
Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a391804c7ce3..d41781cb5496 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,7 +354,7 @@ static void __init free_highpages(void)
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
 		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
Apologies, this should be
-		unsigned long start = PHYS_PFN(range_start);
+		unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
 		unsigned long end = PHYS_PFN(range_end);
Strangely enough, the wrong version above also fixed the issue I was seeing, but it is start that needs rounding up, not end.
Is there a particular commit that you identified which could be used as Fixes: tag to ease the back porting of such a change?
(+ Mike)
On Fri, 30 Oct 2020 at 03:25, Florian Fainelli f.fainelli@gmail.com wrote:
On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel ardb@kernel.org wrote:
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system. However, as it rounds the end of each region downwards, we may end up freeing a page that is memblock_reserve()d, resulting in memory corruption. So align the end of the range to the next page instead.
Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a391804c7ce3..d41781cb5496 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,7 +354,7 @@ static void __init free_highpages(void)
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
 		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
Apologies, this should be
-		unsigned long start = PHYS_PFN(range_start);
+		unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
 		unsigned long end = PHYS_PFN(range_end);
Strangely enough, the wrong version above also fixed the issue I was seeing, but it is start that needs rounding up, not end.
Is there a particular commit that you identified which could be used as Fixes: tag to ease the back porting of such a change?
Ah hold on. This appears to be a very recent regression, in cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.
The old code was
unsigned long start = memblock_region_memory_base_pfn(mem);
which uses PFN_UP() to round up, whereas the new code rounds down.
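[Editor's note: for reference, the helpers being compared here are paraphrased below from include/linux/pfn.h and include/linux/memblock.h; check the tree for the authoritative definitions.]

/* include/linux/pfn.h (paraphrased) */
#define PFN_UP(x)	(((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)	/* round up */
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)			/* round down */
#define PHYS_PFN(x)	((unsigned long)((x) >> PAGE_SHIFT))	/* also rounds down */

/* include/linux/memblock.h (paraphrased) -- the helpers the old code used */
static inline unsigned long
memblock_region_memory_base_pfn(const struct memblock_region *reg)
{
	return PFN_UP(reg->base);		/* start rounded up to a full page */
}

static inline unsigned long
memblock_region_memory_end_pfn(const struct memblock_region *reg)
{
	return PFN_DOWN(reg->base + reg->size);	/* end rounded down */
}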
Looks like this is broken on a lot of platforms.
Mike?
Hi Ard,
On Fri, Oct 30, 2020 at 10:29:16AM +0100, Ard Biesheuvel wrote:
(+ Mike)
On Fri, 30 Oct 2020 at 03:25, Florian Fainelli f.fainelli@gmail.com wrote:
On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel ardb@kernel.org wrote:
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system. However, as it rounds the end of each region downwards, we may end up freeing a page that is memblock_reserve()d, resulting in memory corruption. So align the end of the range to the next page instead.
Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a391804c7ce3..d41781cb5496 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,7 +354,7 @@ static void __init free_highpages(void)
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
 		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
Apologies, this should be
-		unsigned long start = PHYS_PFN(range_start);
+		unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
 		unsigned long end = PHYS_PFN(range_end);
Strangely enough, the wrong version above also fixed the issue I was seeing, but it is start that needs rounding up, not end.
Is there a particular commit that you identified which could be used as Fixes: tag to ease the back porting of such a change?
Ah hold on. This appears to be a very recent regression, in cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.
The old code was
unsigned long start = memblock_region_memory_base_pfn(mem);
which uses PFN_UP() to round up, whereas the new code rounds down.
Looks like this is broken on a lot of platforms.
Mike?
I've reviewed again the whole series and it seems that only highmem initialization on arm and xtensa (that copied this code from arm) have this problem. I might have missed something again, though.
So, to restore the original behaviour I think the fix should be
	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&range_start, &range_end, NULL) {
		unsigned long start = PHYS_UP(range_start);
		unsigned long end = PHYS_DOWN(range_end);
On Fri, 30 Oct 2020 at 16:18, Mike Rapoport rppt@linux.ibm.com wrote:
Hi Ard,
On Fri, Oct 30, 2020 at 10:29:16AM +0100, Ard Biesheuvel wrote:
(+ Mike)
On Fri, 30 Oct 2020 at 03:25, Florian Fainelli f.fainelli@gmail.com wrote:
On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel ardb@kernel.org wrote:
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system. However, as it rounds the end of each region downwards, we may end up freeing a page that is memblock_reserve()d, resulting in memory corruption. So align the end of the range to the next page instead.
Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a391804c7ce3..d41781cb5496 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,7 +354,7 @@ static void __init free_highpages(void)
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
 		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
Apologies, this should be
-		unsigned long start = PHYS_PFN(range_start);
+		unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
 		unsigned long end = PHYS_PFN(range_end);
Strangely enough, the wrong version above also fixed the issue I was seeing, but it is start that needs rounding up, not end.
Is there a particular commit that you identified which could be used as Fixes: tag to ease the back porting of such a change?
Ah hold on. This appears to be a very recent regression, in cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.
The old code was
unsigned long start = memblock_region_memory_base_pfn(mem);
which uses PFN_UP() to round up, whereas the new code rounds down.
Looks like this is broken on a lot of platforms.
Mike?
I've reviewed again the whole series and it seems that only highmem initialization on arm and xtensa (that copied this code from arm) have this problem. I might have missed something again, though.
So, to restore the original behaviour I think the fix should be
	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&range_start, &range_end, NULL) {
		unsigned long start = PHYS_UP(range_start);
		unsigned long end = PHYS_DOWN(range_end);
PHYS_UP and PHYS_DOWN don't exist.
Could you please send a patch that fixes this everywhere where it's broken?
On Fri, Oct 30, 2020 at 04:22:37PM +0100, Ard Biesheuvel wrote:
On Fri, 30 Oct 2020 at 16:18, Mike Rapoport rppt@linux.ibm.com wrote:
Hi Ard,
On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel ardb@kernel.org wrote:
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system. However, as it rounds the end of each region downwards, we may end up freeing a page that is memblock_reserve()d, resulting in memory corruption. So align the end of the range to the next page instead.
Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a391804c7ce3..d41781cb5496 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,7 +354,7 @@ static void __init free_highpages(void)
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
 		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
Apologies, this should be
-		unsigned long start = PHYS_PFN(range_start);
+		unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
 		unsigned long end = PHYS_PFN(range_end);
Strangely enough, the wrong version above also fixed the issue I was seeing, but it is start that needs rounding up, not end.
Is there a particular commit that you identified which could be used as Fixes: tag to ease the back porting of such a change?
Ah hold on. This appears to be a very recent regression, in cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.
The old code was
unsigned long start = memblock_region_memory_base_pfn(mem);
which uses PFN_UP() to round up, whereas the new code rounds down.
Looks like this is broken on a lot of platforms.
Mike?
I've reviewed again the whole series and it seems that only highmem initialization on arm and xtensa (that copied this code from arm) have this problem. I might have missed something again, though.
So, to restore the original behaviour I think the fix should be
	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&range_start, &range_end, NULL) {
		unsigned long start = PHYS_UP(range_start);
		unsigned long end = PHYS_DOWN(range_end);
PHYS_UP and PHYS_DOWN don't exist.
Could you please send a patch that fixes this everywhere where it's broken?
Argh, this should have been PFN_{UP,DOWN}. With the patch below qemu-system-arm boots for me. Does it fix your setup as well?
I kept your authorship as you did the heavy lifting here :)
With acks from ARM and xtensa maintainers I can take it via memblock tree.
From 5399699b9f8de405819c59c3feddecaac0ed1399 Mon Sep 17 00:00:00 2001
From: Ard Biesheuvel <ardb@kernel.org>
Date: Fri, 30 Oct 2020 22:53:02 +0200
Subject: [PATCH] ARM, xtensa: highmem: avoid clobbering non-page aligned
 memory reservations
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system.
Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages") it rounded beginning of each region upwards and end of each region downwards.
However, after that commit free_highmem() rounds the beginning and end of each region downwards, we and may end up freeing a page that is memblock_reserve()d, resulting in memory corruption.
Restore the original rounding of the region boundaries to avoid freeing reserved pages.
Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Co-developed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 arch/arm/mm/init.c    | 4 ++--
 arch/xtensa/mm/init.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index d57112a276f5..c23dbf8bebee 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,8 +354,8 @@ static void __init free_highpages(void)
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);

 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index c6fc83efee0c..8731b7ad9308 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -89,8 +89,8 @@ static void __init free_highpages(void)
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);

 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
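[Editor's note, an observation rather than something stated in the patch: on the end side PFN_DOWN() and PHYS_PFN() both simply shift the address down, so the functional change is the PFN_UP() on the start; switching the end to PFN_DOWN() mainly keeps the pair symmetric. A standalone sanity check, with the macros re-derived locally for illustration:]

#include <assert.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PFN_UP(x)	(((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)
#define PHYS_PFN(x)	((unsigned long)((x) >> PAGE_SHIFT))

int main(void)
{
	unsigned long aligned   = 0x34000000UL;	/* page aligned */
	unsigned long unaligned = 0x30000800UL;	/* not page aligned */

	/* end side: both helpers round down, so the value never changes */
	assert(PFN_DOWN(aligned)   == PHYS_PFN(aligned));
	assert(PFN_DOWN(unaligned) == PHYS_PFN(unaligned));

	/* start side: PFN_UP differs only for unaligned addresses */
	assert(PFN_UP(aligned)   == PHYS_PFN(aligned));
	assert(PFN_UP(unaligned) == PHYS_PFN(unaligned) + 1);
	return 0;
}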
On Fri, 30 Oct 2020 at 22:32, Mike Rapoport rppt@linux.ibm.com wrote:
On Fri, Oct 30, 2020 at 04:22:37PM +0100, Ard Biesheuvel wrote:
On Fri, 30 Oct 2020 at 16:18, Mike Rapoport rppt@linux.ibm.com wrote:
Hi Ard,
On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> free_highpages() iterates over the free memblock regions in high
> memory, and marks each page as available for the memory management
> system. However, as it rounds the end of each region downwards, we
> may end up freeing a page that is memblock_reserve()d, resulting
> in memory corruption. So align the end of the range to the next
> page instead.
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm/mm/init.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index a391804c7ce3..d41781cb5496 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
>  	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
>  				&range_start, &range_end, NULL) {
>  		unsigned long start = PHYS_PFN(range_start);
> -		unsigned long end = PHYS_PFN(range_end);
> +		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
>
Apologies, this should be
-		unsigned long start = PHYS_PFN(range_start);
+		unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
 		unsigned long end = PHYS_PFN(range_end);
Strangely enough, the wrong version above also fixed the issue I was seeing, but it is start that needs rounding up, not end.
Is there a particular commit that you identified which could be used as Fixes: tag to ease the back porting of such a change?
Ah hold on. This appears to be a very recent regression, in cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.
The old code was
unsigned long start = memblock_region_memory_base_pfn(mem);
which uses PFN_UP() to round up, whereas the new code rounds down.
Looks like this is broken on a lot of platforms.
Mike?
I've reviewed again the whole series and it seems that only highmem initialization on arm and xtensa (that copied this code from arm) have this problem. I might have missed something again, though.
So, to restore the original behaviour I think the fix should be
	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&range_start, &range_end, NULL) {
		unsigned long start = PHYS_UP(range_start);
		unsigned long end = PHYS_DOWN(range_end);
PHYS_UP and PHYS_DOWN don't exist.
Could you please send a patch that fixes this everywhere where it's broken?
Argh, this should have been PFN_{UP,DOWN}. With the patch below qemu-system-arm boots for me. Does it fix your setup as well?
Yeah that looks fine.
I kept your authorship as you did the heavy lifting here :)
With acks from ARM and xtensa maintainers I can take it via memblock tree.
That would be up to Russell (on cc) and whoever the xtensa maintainers are, who are not on CC.
Would you mind chasing them?
From 5399699b9f8de405819c59c3feddecaac0ed1399 Mon Sep 17 00:00:00 2001
From: Ard Biesheuvel <ardb@kernel.org>
Date: Fri, 30 Oct 2020 22:53:02 +0200
Subject: [PATCH] ARM, xtensa: highmem: avoid clobbering non-page aligned
 memory reservations
free_highpages() iterates over the free memblock regions in high memory, and marks each page as available for the memory management system.
Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages") it rounded beginning of each region upwards and end of each region downwards.
However, after that commit free_highmem() rounds the beginning and end of each region downwards, we and may end up freeing a page that is
we may end up
memblock_reserve()d, resulting in memory corruption.
Restore the original rounding of the region boundaries to avoid freeing reserved pages.
Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Co-developed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 arch/arm/mm/init.c    | 4 ++--
 arch/xtensa/mm/init.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index d57112a276f5..c23dbf8bebee 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,8 +354,8 @@ static void __init free_highpages(void)
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);

 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index c6fc83efee0c..8731b7ad9308 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -89,8 +89,8 @@ static void __init free_highpages(void)
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);

 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
-- 2.28.0
-- Sincerely yours, Mike.