On Tue, 20 Aug 2019, Song Liu wrote:
> pti_clone_pgtable() increases addr by PUD_SIZE for pud_none(*pud) case.
> This is not accurate because addr may not be PUD_SIZE aligned.
You fail to explain how this happened. The code before the 32bit support did always increase by PMD_SIZE. The 32bit support broke that.
> In our x86_64 kernel, pti_clone_pgtable() fails to clone 7 PMDs because
> of this issue, including the PMD for the irq entry table. For a
> memcache-like workload, this introduces about 4.5x more iTLB-load and
> about 2.5x more iTLB-load-misses on a Skylake CPU.
This information is largely irrelevant. What matters is the fact that this got broken and now incorrectly forwards the address by PUD_SIZE, which is wrong if the address is not PUD_SIZE aligned.
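To illustrate the overshoot with made-up numbers (a minimal userspace sketch, not from the report, assuming x86_64's 1 GiB PUD_SIZE and 2 MiB PMD_SIZE):

	#include <stdio.h>

	#define PMD_SIZE	(2UL << 20)	/* 2 MiB on x86_64 */
	#define PUD_SIZE	(1UL << 30)	/* 1 GiB on x86_64 */

	int main(void)
	{
		/* Hypothetical clone address: one PMD past a PUD boundary,
		 * i.e. PMD aligned but not PUD aligned. */
		unsigned long addr = 0xffffffff80000000UL + PMD_SIZE;
		/* The buggy step lands one PMD past the next PUD boundary ... */
		unsigned long buggy = addr + PUD_SIZE;
		/* ... so the PMDs between the boundary and the new address are
		 * silently skipped instead of being cloned. */
		unsigned long boundary = (addr + PUD_SIZE) & ~(PUD_SIZE - 1);

		printf("next PUD boundary: %#lx\n", boundary);
		printf("buggy next addr:   %#lx\n", buggy);
		printf("PMDs skipped:      %lu\n", (buggy - boundary) / PMD_SIZE);
		return 0;
	}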
> This patch fixes this issue by adding PMD_SIZE to addr for pud_none()
> case.
git grep 'This patch' Documentation/process/submitting-patches.rst
> Cc: stable@vger.kernel.org # v4.19+
> Fixes: 16a3fe634f6a ("x86/mm/pti: Clone kernel-image on PTE level for 32 bit")
> Signed-off-by: Song Liu <songliubraving@fb.com>
> Cc: Joerg Roedel <jroedel@suse.de>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
>  arch/x86/mm/pti.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
> index b196524759ec..5a67c3015f59 100644
> --- a/arch/x86/mm/pti.c
> +++ b/arch/x86/mm/pti.c
> @@ -330,7 +330,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
>  		pud = pud_offset(p4d, addr);
>  		if (pud_none(*pud)) {
> -			addr += PUD_SIZE;
> +			addr += PMD_SIZE;
The right fix is to skip forward to the next PUD boundary instead of doing this in a loop with PMD_SIZE increments.
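Roughly something like the below untested sketch, using the kernel's round_up() helper, instead of the PMD_SIZE bump:

		pud = pud_offset(p4d, addr);
		if (pud_none(*pud)) {
			/* Skip ahead to the next PUD boundary rather than
			 * stepping through an empty PUD one PMD at a time. */
			addr = round_up(addr + 1, PUD_SIZE);
			continue;
		}

That skips a none PUD in a single step and still works when addr is not PUD_SIZE aligned.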
Thanks,
tglx