Dear stable maintainers,
I encountered a similar issue on a 4.19.33 kernel (Chromium OS). On my board, the system would not even boot if KASLR decides to map the linear region to the top of the virtual address space. This happens every 253 boots on average (there are 0xfd possible random offsets, and only the top one fails).
I tried to debug the issue, and it appears that the physical memory allocated for the vmemmap and the mem_section array ends up at the same location, so the two corrupt each other early during boot. I could not figure out exactly why this happens, but in any case this patch fixes my issue (no failure in 744 reboots with 240 unique offsets, and counting...), and IMHO the ERR_PTR justification in the commit message is enough to warrant inclusion in the -stable branches.
The patch below was committed to mainline as:

commit c8a43c18a97845e7f94ed7d181c11f41964976a2
    arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region

and should be included in stable branches after this commit:

Fixes: c031a4213c11a5db ("arm64: kaslr: randomize the linear region")

i.e. anything after kernel 4.5 (git describe says v4.5-rc4-62-gc031a4213c11a5d).
Thanks,
Nicolas
On Wed, Jan 16, 2019 at 4:38 PM Yueyi Li <liyueyi@live.com> wrote:
On 2019/1/16 15:51, Ard Biesheuvel wrote:
On Wed, 16 Jan 2019 at 04:37, Yueyi Li <liyueyi@live.com> wrote:
OK, thanks. But it seems this mail was ignored; do I need to resend the patch?
On 2018/12/26 21:49, Ard Biesheuvel wrote:
On Tue, 25 Dec 2018 at 03:30, Yueyi Li <liyueyi@live.com> wrote:
Hi Ard,
On 2018/12/24 17:45, Ard Biesheuvel wrote:
Does the following change fix your issue as well?
index 9b432d9fcada..9dcf0ff75a11 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -447,7 +447,7 @@ void __init arm64_memblock_init(void)
 	 * memory spans, randomize the linear region as well.
 	 */
 	if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
-		range = range / ARM64_MEMSTART_ALIGN + 1;
+		range /= ARM64_MEMSTART_ALIGN;
 		memstart_addr -= ARM64_MEMSTART_ALIGN *
 				((range * memstart_offset_seed) >> 16);
 	}
Yes, that fixes it as well. I just think that modifying the first *range* calculation would be easier to grasp; what do you think?
I don't think there is a difference, to be honest, but I will leave it up to the maintainers to decide which approach they prefer.
No, it has been merged already. It is in v5.0-rc2, I think.
OK, thanks. :-)
On Sat, Apr 13, 2019 at 08:41:33PM +0800, Nicolas Boichat wrote:
> Dear stable maintainers,
>
> I encountered a similar issue on a 4.19.33 kernel (Chromium OS). On my board, the system would not even boot if KASLR decides to map the linear region to the top of the virtual address space. This happens every 253 boots on average (there are 0xfd possible random offsets, and only the top one fails).
>
> I tried to debug the issue, and it appears that the physical memory allocated for the vmemmap and the mem_section array ends up at the same location, so the two corrupt each other early during boot. I could not figure out exactly why this happens, but in any case this patch fixes my issue (no failure in 744 reboots with 240 unique offsets, and counting...), and IMHO the ERR_PTR justification in the commit message is enough to warrant inclusion in the -stable branches.
>
> The patch below was committed to mainline as:
>
> commit c8a43c18a97845e7f94ed7d181c11f41964976a2
>     arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region
>
> and should be included in stable branches after this commit:
>
> Fixes: c031a4213c11a5db ("arm64: kaslr: randomize the linear region")
>
> i.e. anything after kernel 4.5 (git describe says v4.5-rc4-62-gc031a4213c11a5d).
I've queued it for 4.9-4.19, thanks for the report.
--
Thanks,
Sasha