Changes since v2 [1]:
* Fix links in the changelogs to reference lkml.kernel.org instead of
  lore.kernel.org (Peter)
* Collect Acked-by's from Kirill and Peter.
* Add more Cc's for patches 1 and 2.
* Strengthen the lead-in comment for the set_p*_safe() helpers (Dave)
[1]: https://lkml.org/lkml/2018/12/1/358
---
From patch 5:
Commit f77084d96355 ("x86/mm/pat: Disable preemption around __flush_tlb_all()") addressed a case where __flush_tlb_all() is called without preemption being disabled. It also left a warning to catch other cases where preemption is not disabled. That warning triggers for the memory hotplug path, which is also used for persistent memory enabling:
WARNING: CPU: 35 PID: 911 at ./arch/x86/include/asm/tlbflush.h:460
RIP: 0010:__flush_tlb_all+0x1b/0x3a
[..]
Call Trace:
 phys_pud_init+0x29c/0x2bb
 kernel_physical_mapping_init+0xfc/0x219
 init_memory_mapping+0x1a5/0x3b0
 arch_add_memory+0x2c/0x50
 devm_memremap_pages+0x3aa/0x610
 pmem_attach_disk+0x585/0x700 [nd_pmem]
Andy wondered why a path that can sleep was using __flush_tlb_all() [1] and Dave confirmed the expectation for TLB flush is for modifying / invalidating existing pte entries, but not initial population [2]. Drop the usage of __flush_tlb_all() in phys_{p4d,pud,pmd}_init() on the expectation that this path is only ever populating empty entries for the linear map. Note, at linear map teardown time there is a call to the all-cpu flush_tlb_all() to invalidate the removed mappings.
Additionally, Dave wanted some runtime assurances that kernel_physical_mapping_init() is only populating, and not changing, existing page table entries. Patches 1-4 implement a new set_pte_safe() family of helpers for that purpose.
Patch 5 is tagged for -stable because the false positive warning is now showing up on 4.19-stable kernels. Patches 1-4 are not tagged for -stable; if the sanity checking is wanted there, please mark them as such.
A hang observed while developing the sanity checking implementation was resolved by Peter's suggestion to not trigger the warning when the same pte value is being rewritten.
---
Dan Williams (5):
      generic/pgtable: Make {pmd,pud}_same() unconditionally available
      generic/pgtable: Introduce {p4d,pgd}_same()
      generic/pgtable: Introduce set_pte_safe()
      x86/mm: Validate kernel_physical_mapping_init() pte population
      x86/mm: Drop usage of __flush_tlb_all() in kernel_physical_mapping_init()
 arch/x86/include/asm/pgalloc.h           | 27 ++++++++++++++
 arch/x86/mm/init_64.c                    | 30 ++++++----------
 include/asm-generic/5level-fixup.h       |  1 +
 include/asm-generic/pgtable-nop4d-hack.h |  1 +
 include/asm-generic/pgtable-nop4d.h      |  1 +
 include/asm-generic/pgtable-nopud.h      |  1 +
 include/asm-generic/pgtable.h            | 56 +++++++++++++++++++++++++-----
 7 files changed, 90 insertions(+), 27 deletions(-)