Hi, Greg,
On Tue, Jan 30, 2024 at 12:53 AM gregkh@linuxfoundation.org wrote:
The patch below does not apply to the 6.1-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.1.y
git checkout FETCH_HEAD
git cherry-pick -x 5056c596c3d1848021a4eaa76ee42f4c05c50346
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to 'stable@vger.kernel.org' --in-reply-to '2024012911-outright-violin-e677@gregkh' --subject-prefix 'PATCH 6.1.y' HEAD^..
Possible dependencies:
5056c596c3d1 ("LoongArch/smp: Call rcutree_report_cpu_starting() at tlb_init()")
As with the commit it fixes, please change rcutree_report_cpu_starting() to rcu_cpu_starting() in the backported code.
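For illustration, the added block in setup_tlb_handler() would then look roughly like this on 6.1.y (a sketch only: the surrounding lines are copied from the upstream diff below, and the sole intended change is the function name, since the rename to rcutree_report_cpu_starting() does not exist in 6.1):

	} else {
		int vec_sz __maybe_unused;
		void *addr __maybe_unused;
		struct page *page __maybe_unused;

		/* Avoid lockdep warning */
		rcu_cpu_starting(cpu);	/* 6.1.y name; renamed upstream to rcutree_report_cpu_starting() */

#ifdef CONFIG_NUMA
		vec_sz = sizeof(exception_handlers);
		/* ... rest of the NUMA branch unchanged ... */

The matching removal in start_secondary() would likewise drop the rcu_cpu_starting(cpu) call there.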
Huacai
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 5056c596c3d1848021a4eaa76ee42f4c05c50346 Mon Sep 17 00:00:00 2001
From: Huacai Chen <chenhuacai@kernel.org>
Date: Fri, 26 Jan 2024 16:22:07 +0800
Subject: [PATCH] LoongArch/smp: Call rcutree_report_cpu_starting() at tlb_init()
Machines with more than 8 nodes fail to boot SMP after commit a2ccf46333d7b2cf96 ("LoongArch/smp: Call rcutree_report_cpu_starting() earlier"). Such machines use a tlb-based per-cpu base address rather than a dmw-based one, so their per-cpu variables can only be accessed after tlb_init(). But rcutree_report_cpu_starting() is now called before tlb_init(), and it does access per-cpu variables.
Since the original patch wants to avoid the lockdep warning caused by page allocation in tlb_init(), we can move rcutree_report_cpu_starting() into tlb_init(), after the tlb exception configuration but before the page allocation.
Fixes: a2ccf46333d7b2cf96 ("LoongArch/smp: Call rcutree_report_cpu_starting() earlier")
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
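Schematically, the ordering this establishes for a secondary CPU is the following (a paraphrase of the reasoning above, not the actual kernel code; see the diff below for the real change):

	/* tlb_init(cpu) on a secondary CPU, per the commit message: */
	/* (1) tlb exception configuration -- only after this can the  */
	/*     tlb-based per-cpu variables of this CPU be accessed     */
	rcutree_report_cpu_starting(cpu);	/* (2) accesses per-cpu data */
	/* (3) page allocation (CONFIG_NUMA case) -- must come after   */
	/*     (2) to avoid the lockdep warning                        */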
diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
index a16e3dbe9f09..2b49d30eb7c0 100644
--- a/arch/loongarch/kernel/smp.c
+++ b/arch/loongarch/kernel/smp.c
@@ -509,7 +509,6 @@ asmlinkage void start_secondary(void)
 	sync_counter();
 	cpu = raw_smp_processor_id();
 	set_my_cpu_offset(per_cpu_offset(cpu));
-	rcutree_report_cpu_starting(cpu);
 
 	cpu_probe();
 	constant_clockevent_init();
diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c
index 2c0a411f23aa..0b95d32b30c9 100644
--- a/arch/loongarch/mm/tlb.c
+++ b/arch/loongarch/mm/tlb.c
@@ -284,12 +284,16 @@ static void setup_tlb_handler(int cpu)
 		set_handler(EXCCODE_TLBNR * VECSIZE, handle_tlb_protect, VECSIZE);
 		set_handler(EXCCODE_TLBNX * VECSIZE, handle_tlb_protect, VECSIZE);
 		set_handler(EXCCODE_TLBPE * VECSIZE, handle_tlb_protect, VECSIZE);
-	}
-#ifdef CONFIG_NUMA
-	else {
-		void *addr;
-		struct page *page;
-		const int vec_sz = sizeof(exception_handlers);
+	} else {
+		int vec_sz __maybe_unused;
+		void *addr __maybe_unused;
+		struct page *page __maybe_unused;
+
+		/* Avoid lockdep warning */
+		rcutree_report_cpu_starting(cpu);
+
+#ifdef CONFIG_NUMA
+		vec_sz = sizeof(exception_handlers);
 
 		if (pcpu_handlers[cpu])
 			return;
@@ -305,8 +309,8 @@ static void setup_tlb_handler(int cpu)
 		csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_EENTRY);
 		csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_MERRENTRY);
 		csr_write64(pcpu_handlers[cpu] + 80*VECSIZE, LOONGARCH_CSR_TLBRENTRY);
-	}
 #endif
+	}
 }
 
 void tlb_init(int cpu)