From: Joerg Roedel jroedel@suse.de
When vmalloc_sync_all() iterates over the address space up to FIXADDR_TOP, it syncs the whole kernel address space starting from VMALLOC_START.
This is not a problem when the kernel address range is identical in all page-tables, but this is no longer the case when PTI is enabled on x86-32. In that case the per-process LDT is mapped in the kernel address range and vmalloc_sync_all() clears the LDT mapping for all processes.
To make the LDT work again, vmalloc_sync_all() must only iterate over the volatile parts of the kernel address range that are identical between all processes. This includes the VMALLOC and the PKMAP areas on x86-32.
The order of the ranges in the address space is:
VMALLOC -> PKMAP -> LDT -> CPU_ENTRY_AREA -> FIX_ADDR
So the right check in vmalloc_sync_all() is "address < LDT_BASE_ADDR", which makes sure the VMALLOC and PKMAP areas are synchronized and the LDT mapping is not falsely overwritten. The CPU_ENTRY_AREA and the FIXMAP area are no longer synced either, but these ranges are set up at page-table creation time and do not change during runtime.
This change fixes the ldt_gdt selftest in my setup.
Fixes: 7757d607c6b3 ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32")
Cc: stable@vger.kernel.org
Signed-off-by: Joerg Roedel jroedel@suse.de
---
 arch/x86/mm/fault.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 9ceacd1156db..144329c44436 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -197,7 +197,7 @@ void vmalloc_sync_all(void)
 		return;
 
 	for (address = VMALLOC_START & PMD_MASK;
-	     address >= TASK_SIZE_MAX && address < FIXADDR_TOP;
+	     address >= TASK_SIZE_MAX && address < LDT_BASE_ADDR;
 	     address += PMD_SIZE) {
 		struct page *page;
On Tue, Nov 26, 2019 at 11:09:42AM +0100, Joerg Roedel wrote:
From: Joerg Roedel jroedel@suse.de
When vmalloc_sync_all() iterates over the address space up to FIXADDR_TOP, it syncs the whole kernel address space starting from VMALLOC_START.
This is not a problem when the kernel address range is identical in all page-tables, but this is no longer the case when PTI is enabled on x86-32. In that case the per-process LDT is mapped in the kernel address range and vmalloc_sync_all() clears the LDT mapping for all processes.
To make the LDT work again, vmalloc_sync_all() must only iterate over the volatile parts of the kernel address range that are identical between all processes. This includes the VMALLOC and the PKMAP areas on x86-32.
The order of the ranges in the address space is:
VMALLOC -> PKMAP -> LDT -> CPU_ENTRY_AREA -> FIX_ADDR
So the right check in vmalloc_sync_all() is "address < LDT_BASE_ADDR", which makes sure the VMALLOC and PKMAP areas are synchronized and the LDT mapping is not falsely overwritten. The CPU_ENTRY_AREA and the FIXMAP area are no longer synced either, but these ranges are set up at page-table creation time and do not change during runtime.
This change fixes the ldt_gdt selftest in my setup.
Fixes: 7757d607c6b3 ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32")
Cc: stable@vger.kernel.org
Signed-off-by: Joerg Roedel jroedel@suse.de
 arch/x86/mm/fault.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Reported-by: Borislav Petkov bp@suse.de
Tested-by: Borislav Petkov bp@suse.de
Thx Jörg!
* Joerg Roedel joro@8bytes.org wrote:
From: Joerg Roedel jroedel@suse.de
When vmalloc_sync_all() iterates over the address space up to FIXADDR_TOP, it syncs the whole kernel address space starting from VMALLOC_START.
This is not a problem when the kernel address range is identical in all page-tables, but this is no longer the case when PTI is enabled on x86-32. In that case the per-process LDT is mapped in the kernel address range and vmalloc_sync_all() clears the LDT mapping for all processes.
To make the LDT work again, vmalloc_sync_all() must only iterate over the volatile parts of the kernel address range that are identical between all processes. This includes the VMALLOC and the PKMAP areas on x86-32.
The order of the ranges in the address space is:
VMALLOC -> PKMAP -> LDT -> CPU_ENTRY_AREA -> FIX_ADDR
So the right check in vmalloc_sync_all() is "address < LDT_BASE_ADDR", which makes sure the VMALLOC and PKMAP areas are synchronized and the LDT mapping is not falsely overwritten. The CPU_ENTRY_AREA and the FIXMAP area are no longer synced either, but these ranges are set up at page-table creation time and do not change during runtime.
Note that the last sentence is not really true, because various fixmap PTE entries and the CEA areas may change: ACPI uses a dynamic fixmap entry in ghes_map() and PTI uses dynamic PTEs as well, such as when mapping the debug store in alloc_bts_buffer(), etc.
What you wanted to say is probably that on 32-bit kernels with !SHARED_KERNEL_PMD page table layouts the init_mm.pgd is the 'reference kernel page table', which, whenever vmalloc pmds get removed, must be copied over into all page tables listed in pgd_list.
(The addition of vmalloc PMD and PTE entries is lazy processed, at fault time.)
The vmalloc_sync_all() also iterating over the LDT range is buggy, because for the LDT the mappings are *intentionally* and fundamentally different between processes, i.e. not synchronized.
Furthermore I'm not sure we need to iterate over the PKMAP range either: those are effectively permanent PMDs as well, and they are not part of the vmalloc.c lazy deallocation scheme in any case - they are handled entirely separately in mm/highmem.c et al.
The reason vmalloc_sync_all() doesn't wreck the pkmap range is really just accidental, because kmap() is a globally synchronized mapping concept as well - but it doesn't actually remove pmds.
Anyway, below is the patch modified to only iterate over the vmalloc ranges.
Note that VMALLOC_END is two guard pages short of the true end of the vmalloc area - this should not matter because vmalloc_sync_all() only looks down to the pmd depth, which is at least 2MB granular.
Note that this is *completely* untested - I might have wrecked PKMAP in my ignorance. Mind giving it a careful review and a test?
Thanks,
Ingo
===========================>
Subject: x86/mm/32: Sync only to VMALLOC_END in vmalloc_sync_all()
From: Joerg Roedel jroedel@suse.de
Date: Tue, 26 Nov 2019 11:09:42 +0100
From: Joerg Roedel jroedel@suse.de
The job of vmalloc_sync_all() is to help the lazy freeing of vmalloc() ranges: before such vmap ranges are reused we make sure that they are unmapped from every task's page tables.
This is really easy on pagetable setups where the kernel page tables are shared between all tasks - this is the case on 32-bit kernels with SHARED_KERNEL_PMD = 1.
But on !SHARED_KERNEL_PMD 32-bit kernels this involves iterating over the pgd_list and clearing all pmd entries in the pgds that are cleared in the init_mm.pgd, which is the reference pagetable that the vmalloc() code uses.
In that context the current practice of vmalloc_sync_all() iterating until FIXADDR_TOP is buggy:
	for (address = VMALLOC_START & PMD_MASK;
	     address >= TASK_SIZE_MAX && address < FIXADDR_TOP;
	     address += PMD_SIZE) {
		struct page *page;
Because iterating up to FIXADDR_TOP will involve a lot of non-vmalloc address ranges:
VMALLOC -> PKMAP -> LDT -> CPU_ENTRY_AREA -> FIX_ADDR
This is mostly harmless for the FIX_ADDR and CPU_ENTRY_AREA ranges that don't clear their pmds, but it's lethal for the LDT range, which relies on having different mappings in different processes, and 'synchronizing' them in the vmalloc sense corrupts those pagetable entries (clearing them).
This got particularly prominent with PTI, which turns SHARED_KERNEL_PMD off and makes this the dominant mapping mode on 32-bit.
To make the LDT work again, vmalloc_sync_all() must only iterate over the volatile parts of the kernel address range that are identical between all processes.
So the correct check in vmalloc_sync_all() is "address < VMALLOC_END" to make sure the VMALLOC areas are synchronized and the LDT mapping is not falsely overwritten.
The CPU_ENTRY_AREA and the FIXMAP area are no longer synced either, but this is not really a problem since their PMDs get established during bootup and never change.
This change fixes the ldt_gdt selftest in my setup.
Reported-by: Borislav Petkov bp@suse.de
Tested-by: Borislav Petkov bp@suse.de
Signed-off-by: Joerg Roedel jroedel@suse.de
Cc: stable@vger.kernel.org
Cc: Andy Lutomirski luto@kernel.org
Cc: Borislav Petkov bp@alien8.de
Cc: Brian Gerst brgerst@gmail.com
Cc: Dave Hansen dave.hansen@linux.intel.com
Cc: H. Peter Anvin hpa@zytor.com
Cc: Linus Torvalds torvalds@linux-foundation.org
Cc: Peter Zijlstra peterz@infradead.org
Cc: Thomas Gleixner tglx@linutronix.de
Fixes: 7757d607c6b3 ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32")
Link: https://lkml.kernel.org/r/20191126100942.13059-1-joro@8bytes.org
Signed-off-by: Ingo Molnar mingo@kernel.org
---
 arch/x86/mm/fault.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Index: tip/arch/x86/mm/fault.c
===================================================================
--- tip.orig/arch/x86/mm/fault.c
+++ tip/arch/x86/mm/fault.c
@@ -197,7 +197,7 @@ void vmalloc_sync_all(void)
 		return;
 
 	for (address = VMALLOC_START & PMD_MASK;
-	     address >= TASK_SIZE_MAX && address < FIXADDR_TOP;
+	     address >= TASK_SIZE_MAX && address < VMALLOC_END;
 	     address += PMD_SIZE) {
 		struct page *page;
Hi Ingo,
On Tue, Nov 26, 2019 at 12:11:19PM +0100, Ingo Molnar wrote:
The vmalloc_sync_all() also iterating over the LDT range is buggy, because for the LDT the mappings are *intentionally* and fundamentally different between processes, i.e. not synchronized.
Yes, you are right, your patch description is much better, thanks for making it more clear and correct.
Furthermore I'm not sure we need to iterate over the PKMAP range either: those are effectively permanent PMDs as well, and they are not part of the vmalloc.c lazy deallocation scheme in any case - they are handled entirely separately in mm/highmem.c et al.
I looked a bit at that, and I didn't find an explicit place where the PKMAP PMD gets established. It probably happens implicitly on the first kmap() call, so we are safe as long as the first call to kmap() happens before the kernel starts the first userspace process.
But that is not an issue that should be handled by vmalloc_sync_all(), as the name already implies that it only cares about the vmalloc range. So your change to only iterate to VMALLOC_END makes sense and we should establish the PKMAP PMD at a defined place to make sure it exists when we start the first process.
Note that this is *completely* untested - I might have wrecked PKMAP in my ignorance. Mind giving it a careful review and a test?
My testing environment for 32 bit is quite limited these days, but I tested it in my PTI-x32 environment and the patch below works perfectly fine there and still fixes the ldt_gdt selftest.
Regards,
Joerg
===========================>
Subject: x86/mm/32: Sync only to VMALLOC_END in vmalloc_sync_all()
From: Joerg Roedel jroedel@suse.de
Date: Tue, 26 Nov 2019 11:09:42 +0100
From: Joerg Roedel jroedel@suse.de
The job of vmalloc_sync_all() is to help the lazy freeing of vmalloc() ranges: before such vmap ranges are reused we make sure that they are unmapped from every task's page tables.
This is really easy on pagetable setups where the kernel page tables are shared between all tasks - this is the case on 32-bit kernels with SHARED_KERNEL_PMD = 1.
But on !SHARED_KERNEL_PMD 32-bit kernels this involves iterating over the pgd_list and clearing all pmd entries in the pgds that are cleared in the init_mm.pgd, which is the reference pagetable that the vmalloc() code uses.
In that context the current practice of vmalloc_sync_all() iterating until FIXADDR_TOP is buggy:
	for (address = VMALLOC_START & PMD_MASK;
	     address >= TASK_SIZE_MAX && address < FIXADDR_TOP;
	     address += PMD_SIZE) {
		struct page *page;
Because iterating up to FIXADDR_TOP will involve a lot of non-vmalloc address ranges:
VMALLOC -> PKMAP -> LDT -> CPU_ENTRY_AREA -> FIX_ADDR
This is mostly harmless for the FIX_ADDR and CPU_ENTRY_AREA ranges that don't clear their pmds, but it's lethal for the LDT range, which relies on having different mappings in different processes, and 'synchronizing' them in the vmalloc sense corrupts those pagetable entries (clearing them).
This got particularly prominent with PTI, which turns SHARED_KERNEL_PMD off and makes this the dominant mapping mode on 32-bit.
To make the LDT work again, vmalloc_sync_all() must only iterate over the volatile parts of the kernel address range that are identical between all processes.
So the correct check in vmalloc_sync_all() is "address < VMALLOC_END" to make sure the VMALLOC areas are synchronized and the LDT mapping is not falsely overwritten.
The CPU_ENTRY_AREA and the FIXMAP area are no longer synced either, but this is not really a problem since their PMDs get established during bootup and never change.
This change fixes the ldt_gdt selftest in my setup.
Reported-by: Borislav Petkov bp@suse.de
Tested-by: Borislav Petkov bp@suse.de
Signed-off-by: Joerg Roedel jroedel@suse.de
Cc: stable@vger.kernel.org
Cc: Andy Lutomirski luto@kernel.org
Cc: Borislav Petkov bp@alien8.de
Cc: Brian Gerst brgerst@gmail.com
Cc: Dave Hansen dave.hansen@linux.intel.com
Cc: H. Peter Anvin hpa@zytor.com
Cc: Linus Torvalds torvalds@linux-foundation.org
Cc: Peter Zijlstra peterz@infradead.org
Cc: Thomas Gleixner tglx@linutronix.de
Fixes: 7757d607c6b3 ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32")
Link: https://lkml.kernel.org/r/20191126100942.13059-1-joro@8bytes.org
Signed-off-by: Ingo Molnar mingo@kernel.org
 arch/x86/mm/fault.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Index: tip/arch/x86/mm/fault.c
===================================================================
--- tip.orig/arch/x86/mm/fault.c
+++ tip/arch/x86/mm/fault.c
@@ -197,7 +197,7 @@ void vmalloc_sync_all(void)
 		return;
 
 	for (address = VMALLOC_START & PMD_MASK;
-	     address >= TASK_SIZE_MAX && address < FIXADDR_TOP;
+	     address >= TASK_SIZE_MAX && address < VMALLOC_END;
 	     address += PMD_SIZE) {
 		struct page *page;
* Joerg Roedel jroedel@suse.de wrote:
Hi Ingo,
On Tue, Nov 26, 2019 at 12:11:19PM +0100, Ingo Molnar wrote:
The vmalloc_sync_all() also iterating over the LDT range is buggy, because for the LDT the mappings are *intentionally* and fundamentally different between processes, i.e. not synchronized.
Yes, you are right, your patch description is much better, thanks for making it more clear and correct.
Furthermore I'm not sure we need to iterate over the PKMAP range either: those are effectively permanent PMDs as well, and they are not part of the vmalloc.c lazy deallocation scheme in any case - they are handled entirely separately in mm/highmem.c et al.
I looked a bit at that, and I didn't find an explicit place where the PKMAP PMD gets established. It probably happens implicitly on the first kmap() call, so we are safe as long as the first call to kmap() happens before the kernel starts the first userspace process.
No, it happens during early boot, in permanent_kmaps_init():
	vaddr = PKMAP_BASE;
	page_table_range_init(vaddr, vaddr + PAGE_SIZE*LAST_PKMAP, pgd_base);
That page_table_range_init() will go from PKMAP_BASE to the last PKMAP, which on PAE kernels is typically 0xff600000...0xff800000, 2MB in size, taking up exactly one PMD entry.
This single pagetable page, covering 2MB of virtual memory via 4K entries, gets passed on to the mm/highmem.c code via:
pkmap_page_table = pte;
The pkmap_page_table is mapped early on into init_mm, every task started after that with a new pgd inherits it, and the pmd entry never changes, so there's nothing to synchronize.
The pte entries within this single pagetable page do change frequently according to the kmap() code, but since the pagetable page is shared between all tasks and the TLB flushes are SMP safe, it's all synchronized by only modifying pkmap_page_table, as it should.
But that is not an issue that should be handled by vmalloc_sync_all(), as the name already implies that it only cares about the vmalloc range.
Well, hypothetically it could *accidentally* have some essential effect on bootstrapping the PKMAP pagetables - I don't think that's so, based on my reading of the code, but only testing will tell for sure.
So your change to only iterate to VMALLOC_END makes sense and we should establish the PKMAP PMD at a defined place to make sure it exists when we start the first process.
I believe that's done in permanent_kmaps_init().
Note that this is *completely* untested - I might have wrecked PKMAP in my ignorance. Mind giving it a careful review and a test?
My testing environment for 32 bit is quite limited these days, but I tested it in my PTI-x32 environment and the patch below works perfectly fine there and still fixes the ldt_gdt selftest.
Cool, thanks! I'll apply it with your Tested-by.
Ingo
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID:     9a62d20027da3164a22244d9f022c0c987261687
Gitweb:        https://git.kernel.org/tip/9a62d20027da3164a22244d9f022c0c987261687
Author:        Joerg Roedel jroedel@suse.de
AuthorDate:    Tue, 26 Nov 2019 11:09:42 +01:00
Committer:     Ingo Molnar mingo@kernel.org
CommitterDate: Tue, 26 Nov 2019 21:53:34 +01:00
x86/mm/32: Sync only to VMALLOC_END in vmalloc_sync_all()
The job of vmalloc_sync_all() is to help the lazy freeing of vmalloc() ranges: before such vmap ranges are reused we make sure that they are unmapped from every task's page tables.
This is really easy on pagetable setups where the kernel page tables are shared between all tasks - this is the case on 32-bit kernels with SHARED_KERNEL_PMD = 1.
But on !SHARED_KERNEL_PMD 32-bit kernels this involves iterating over the pgd_list and clearing all pmd entries in the pgds that are cleared in the init_mm.pgd, which is the reference pagetable that the vmalloc() code uses.
In that context the current practice of vmalloc_sync_all() iterating until FIXADDR_TOP is buggy:
	for (address = VMALLOC_START & PMD_MASK;
	     address >= TASK_SIZE_MAX && address < FIXADDR_TOP;
	     address += PMD_SIZE) {
		struct page *page;
Because iterating up to FIXADDR_TOP will involve a lot of non-vmalloc address ranges:
VMALLOC -> PKMAP -> LDT -> CPU_ENTRY_AREA -> FIX_ADDR
This is mostly harmless for the FIX_ADDR and CPU_ENTRY_AREA ranges that don't clear their pmds, but it's lethal for the LDT range, which relies on having different mappings in different processes, and 'synchronizing' them in the vmalloc sense corrupts those pagetable entries (clearing them).
This got particularly prominent with PTI, which turns SHARED_KERNEL_PMD off and makes this the dominant mapping mode on 32-bit.
To make the LDT work again, vmalloc_sync_all() must only iterate over the volatile parts of the kernel address range that are identical between all processes.
So the correct check in vmalloc_sync_all() is "address < VMALLOC_END" to make sure the VMALLOC areas are synchronized and the LDT mapping is not falsely overwritten.
The CPU_ENTRY_AREA and the FIXMAP area are no longer synced either, but this is not really a problem since their PMDs get established during bootup and never change.
This change fixes the ldt_gdt selftest in my setup.
[ mingo: Fixed up the changelog to explain the logic and modified the copying to only happen up until VMALLOC_END. ]
Reported-by: Borislav Petkov bp@suse.de
Tested-by: Borislav Petkov bp@suse.de
Signed-off-by: Joerg Roedel jroedel@suse.de
Cc: stable@vger.kernel.org
Cc: Andy Lutomirski luto@kernel.org
Cc: Borislav Petkov bp@alien8.de
Cc: Dave Hansen dave.hansen@linux.intel.com
Cc: Joerg Roedel joro@8bytes.org
Cc: Linus Torvalds torvalds@linux-foundation.org
Cc: Peter Zijlstra peterz@infradead.org
Cc: Thomas Gleixner tglx@linutronix.de
Cc: hpa@zytor.com
Fixes: 7757d607c6b3 ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32")
Link: https://lkml.kernel.org/r/20191126111119.GA110513@gmail.com
Signed-off-by: Ingo Molnar mingo@kernel.org
---
 arch/x86/mm/fault.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 9ceacd1..304d31d 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -197,7 +197,7 @@ void vmalloc_sync_all(void)
 		return;
 
 	for (address = VMALLOC_START & PMD_MASK;
-	     address >= TASK_SIZE_MAX && address < FIXADDR_TOP;
+	     address >= TASK_SIZE_MAX && address < VMALLOC_END;
 	     address += PMD_SIZE) {
 		struct page *page;