From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
[ Upstream commit 1c0829788a6e6e165846b9bedd0b908ef16260b6 ]
The statistics are incremented with raw_cpu_inc() assuming it always happens with bottom half disabled. Without per-CPU locking in local_bh_disable() on PREEMPT_RT this is no longer true.
Use this_cpu_inc() on PREEMPT_RT for the increment to not worry about preemption.
Cc: David Ahern <dsahern@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://patch.msgid.link/20250512092736.229935-4-bigeasy@linutronix.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
**Answer: YES**

This commit should be backported to stable kernel trees based on the following analysis:

## Technical Analysis

**The Issue:** The commit fixes a correctness bug in IPv4 routing statistics collection on PREEMPT_RT kernels. The `RT_CACHE_STAT_INC` macro uses `raw_cpu_inc()`, which assumes preemption is disabled, but this assumption is violated on PREEMPT_RT systems, where many previously non-preemptible contexts become preemptible.

**Code Changes:** The fix is minimal and surgical:

```c
#ifndef CONFIG_PREEMPT_RT
#define RT_CACHE_STAT_INC(field) raw_cpu_inc(rt_cache_stat.field)
#else
#define RT_CACHE_STAT_INC(field) this_cpu_inc(rt_cache_stat.field)
#endif
```

This conditional compilation ensures that:

- Non-PREEMPT_RT systems keep the existing performance of `raw_cpu_inc()`
- PREEMPT_RT systems get correctness with `this_cpu_inc()`, which includes implicit preemption protection

## Backport Justification

**1. Follows Stable Tree Criteria:**

- **Bug Fix:** Corrects statistics corruption on PREEMPT_RT systems
- **Small and Contained:** Only four lines added in a single macro definition
- **Low Risk:** No behavioral change for non-PREEMPT_RT systems
- **No Architectural Changes:** Simple conditional compilation approach

**2. Consistent with Similar Commits:** The change follows the same pattern as Similar Commits #1 and #3 (both marked "Backport Status: YES"), which fix preemption-related issues in per-CPU statistics. Like those commits, this one addresses scenarios where `__this_cpu_*` or `raw_cpu_*` operations are called from preemptible context on PREEMPT_RT.

**3. Real User Impact:**

- **Affected Systems:** PREEMPT_RT kernels with IPv4 routing (real-time systems, industrial applications)
- **Symptoms:** Inaccurate routing statistics due to race conditions during CPU migration
- **Debugging Impact:** Could mislead network troubleshooting efforts

**4. Technical Correctness:** A race can occur when a task is preempted and migrated to another CPU between reading the per-CPU pointer and incrementing the counter, leading to lost statistics updates. The `this_cpu_inc()` variant prevents this by making the per-CPU increment preemption-safe.

**5. Minimal Risk Profile:**

- No functional changes to packet forwarding
- Zero impact on non-PREEMPT_RT systems
- Follows established kernel patterns for PREEMPT_RT safety
- No performance regression expected

This is exactly the type of targeted correctness fix that stable trees are designed to include: it addresses a real bug affecting a specific subset of users with minimal risk to the broader user base.
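To make the failure mode concrete, here is a minimal sketch of the race being fixed. It reuses the per-CPU variable and a field name from net/ipv4/route.c (`rt_cache_stat`, `in_slow_tot`), but the helper functions are hypothetical and only model the behaviour; they are not the real, architecture-specific expansions of `raw_cpu_inc()` and `this_cpu_inc()`:

```c
/*
 * Illustrative sketch only: this is NOT the real raw_cpu_inc()
 * implementation. It models the unprotected two-step read-modify-write
 * that makes the macro unsafe from preemptible context on PREEMPT_RT.
 */
static void rt_stat_inc_unsafe_sketch(void)
{
	struct rt_cache_stat *stats;

	stats = raw_cpu_ptr(&rt_cache_stat);	/* (1) resolve this CPU's copy  */
	stats->in_slow_tot++;			/* (2) plain read-modify-write  */

	/*
	 * On PREEMPT_RT the task may be preempted and migrated between
	 * (1) and (2); the increment then targets a stale CPU's counter
	 * and can race with that CPU's own update, losing one count.
	 */
}

static void rt_stat_inc_safe_sketch(void)
{
	/*
	 * this_cpu_inc() keeps the pointer resolution and the increment
	 * together in a preemption-safe way (a single safe instruction or
	 * a preempt-disabled section), so no update is lost.
	 */
	this_cpu_inc(rt_cache_stat.in_slow_tot);
}
```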
 net/ipv4/route.c | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 41b320f0c20eb..88d7c96bfac06 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -189,7 +189,11 @@ const __u8 ip_tos2prio[16] = {
 EXPORT_SYMBOL(ip_tos2prio);
 
 static DEFINE_PER_CPU(struct rt_cache_stat, rt_cache_stat);
+#ifndef CONFIG_PREEMPT_RT
 #define RT_CACHE_STAT_INC(field) raw_cpu_inc(rt_cache_stat.field)
+#else
+#define RT_CACHE_STAT_INC(field) this_cpu_inc(rt_cache_stat.field)
+#endif
 
 #ifdef CONFIG_PROC_FS
 static void *rt_cache_seq_start(struct seq_file *seq, loff_t *pos)
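For additional context (not part of the patch), the macro updated above is invoked from the IPv4 route lookup slow path. The fragment below is a hypothetical, heavily abridged call site, sketched only to show how the per-CPU counter is bumped; real callers such as ip_route_input_slow() in net/ipv4/route.c do far more work around the counter update:

```c
/*
 * Hypothetical, abridged sketch of a RT_CACHE_STAT_INC() call site;
 * only the counter update reflects actual usage in net/ipv4/route.c.
 */
static int ip_route_input_slow_sketch(struct sk_buff *skb)
{
	/*
	 * Every packet taking the slow path bumps this CPU's counter.
	 * With the patch applied, the increment stays correct on
	 * PREEMPT_RT even when the caller is preemptible.
	 */
	RT_CACHE_STAT_INC(in_slow_tot);

	return 0;
}
```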