From: Hou Tao <houtao1@huawei.com>
[ Upstream commit d4965578267e2e81f67c86e2608481e77e9c8569 ]
The bpf_map_lookup_percpu_elem() helper is also available to sleepable BPF programs. When the BPF JIT is disabled, or on a 32-bit host, bpf_map_lookup_percpu_elem() is not inlined. Calling it from a sleepable BPF program then triggers the warning in bpf_map_lookup_percpu_elem(), because such a program holds only the rcu_read_lock_trace lock. Therefore, add the missing check.
Reported-by: syzbot+dce5aae19ae4d6399986@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/bpf/000000000000176a130617420310@google.com/
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20250526062534.1105938-1-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
**YES** This commit should be backported to stable kernel trees based on my analysis:

## Analysis

**1. Clear Bug Fix:** This commit fixes a legitimate bug where `bpf_map_lookup_percpu_elem()` triggers warnings in sleepable BPF programs when the BPF JIT is disabled or on 32-bit hosts. The warning fires because sleepable BPF programs hold `rcu_read_lock_trace` instead of `rcu_read_lock`, but the function only checked for the latter.

**2. Minimal, Contained Change:** The fix is small and surgical - it only adds `!rcu_read_lock_trace_held()` to the existing warning condition in `bpf_map_lookup_percpu_elem()` at kernel/bpf/helpers.c:132-133. This follows the exact pattern established by the previous similar fix.

**3. Strong Historical Precedent:** Multiple similar commits have been successfully backported:
- **Commit 169410eba271** (Similar Commit #1 - Backport Status: YES) - Added the same `rcu_read_lock_trace_held()` check to the `bpf_map_{lookup,update,delete}_elem()` helpers with identical reasoning
- **Commit 29a7e00ffadd** (Similar Commit #4 - Backport Status: YES) - Fixed a missed RCU read lock in `bpf_task_under_cgroup()` for sleepable programs

**4. Clear User Impact:** The bug was reported by syzbot and affects users running sleepable BPF programs. Without this fix, users see spurious warnings that suggest an RCU usage bug where none exists.

**5. Low Regression Risk:** The change only widens the set of contexts in which the warning is suppressed - it does not change any functional behavior, it just makes the assertion accurate for sleepable BPF programs.

**6. Part of an Ongoing Pattern:** This is the missing piece in a series of fixes that have systematically corrected RCU assertions for sleepable BPF programs. The earlier commit 169410eba271 fixed the basic map helpers but missed this percpu variant.

**7. Stable Tree Criteria Alignment:**
- Fixes important functionality (eliminates false warnings)
- No architectural changes
- Minimal risk of regression
- Confined to the BPF subsystem
- No side effects beyond suppressing the spurious warning

The commit matches the stable tree backporting criteria and follows the established pattern of similar successful backports.
 kernel/bpf/helpers.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index f27ce162427ab..5e5e945a86b9b 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -129,7 +129,8 @@ const struct bpf_func_proto bpf_map_peek_elem_proto = {
 BPF_CALL_3(bpf_map_lookup_percpu_elem, struct bpf_map *, map, void *, key, u32, cpu)
 {
-	WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_bh_held());
+	WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held() &&
+		     !rcu_read_lock_bh_held());
 	return (unsigned long) map->ops->map_lookup_percpu_elem(map, key, cpu);
 }