From: Saket Kumar Bhaskar <skb99@linux.ibm.com>
[ Upstream commit a9d4e9f0e871352a48a82da11a50df7196fe567a ]
For systems with CONFIG_NR_CPUS set to a value greater than 1024 in the kernel config, the selftest fails because arena_spin_lock_irqsave() returns EOPNOTSUPP (e.g., on powerpc the default value of CONFIG_NR_CPUS is 8192).

The selftest is now skipped when the BPF program returns EOPNOTSUPP, with a descriptive message logged.
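For context, the unsupported-configuration path boils down to a CPU-count guard in the arena spinlock implementation. The following is only an illustrative, self-contained sketch (the constant and function names are made up; the real check lives in tools/testing/selftests/bpf/progs/bpf_arena_spin_lock.h); the 1024 limit and the EOPNOTSUPP return value are the pieces taken from this patch:

  #include <errno.h>
  #include <stdio.h>

  /* Illustrative limit only, mirroring the 1024-CPU cap described above. */
  #define ARENA_SPINLOCK_MAX_CPUS	1024

  static int arena_spin_lock_sketch(unsigned int nr_cpus)
  {
  	/* Configs such as powerpc's default NR_CPUS=8192 land here. */
  	if (nr_cpus > ARENA_SPINLOCK_MAX_CPUS)
  		return -EOPNOTSUPP;

  	/* The real helper would go on to take the lock at this point. */
  	return 0;
  }

  int main(void)
  {
  	/* Prints -95 (-EOPNOTSUPP) on Linux for an 8192-CPU config. */
  	printf("NR_CPUS=8192 -> %d\n", arena_spin_lock_sketch(8192));
  	return 0;
  }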
Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
Link: https://lore.kernel.org/r/20250913091337.1841916-1-skb99@linux.ibm.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM Generated explanations, may be completely bogus:
YES

- `arena_spin_lock` returns `-EOPNOTSUPP` once `CONFIG_NR_CPUS` exceeds 1024
  (`tools/testing/selftests/bpf/progs/bpf_arena_spin_lock.h:497`), so on
  platforms like powerpc (default 8192 CPUs) every test run exits early, and
  the user space harness currently asserts that the retval must be zero
  (`tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c:41-47` before
  this change), causing the subtests to fail outright.
- The patch teaches the harness to recognize that specific failure mode:
  `spin_lock_thread()` now short-circuits when it sees `-EOPNOTSUPP` instead
  of tripping the ASSERT
  (`tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c:44-50`), and the
  BPF program annotates the condition by setting `test_skip = 3` before
  returning (`tools/testing/selftests/bpf/progs/arena_spin_lock.c:40-44`).
- After all worker threads complete, the host test checks that flag, prints
  an explicit skip message, and marks the subtest as skipped instead of
  comparing the counter and failing
  (`tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c:94-101`). This
  lets kselftest succeed on high-NR_CPUS systems while still reporting the
  unsupported configuration.
- The change is entirely confined to selftests, has no runtime or ABI impact,
  and aligns the tests with the documented hardware limitation, making it a
  low-risk fix for a real, reproducible failure on existing platforms.
 .../selftests/bpf/prog_tests/arena_spin_lock.c       | 13 +++++++++++++
 tools/testing/selftests/bpf/progs/arena_spin_lock.c  |  5 ++++-
 2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c b/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
index 0223fce4db2bc..693fd86fbde62 100644
--- a/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
+++ b/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
@@ -40,8 +40,13 @@ static void *spin_lock_thread(void *arg)
 
 	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	ASSERT_OK(err, "test_run err");
+
+	if (topts.retval == -EOPNOTSUPP)
+		goto end;
+
 	ASSERT_EQ((int)topts.retval, 0, "test_run retval");
 
+end:
 	pthread_exit(arg);
 }
 
@@ -63,6 +68,7 @@ static void test_arena_spin_lock_size(int size)
 	skel = arena_spin_lock__open_and_load();
 	if (!ASSERT_OK_PTR(skel, "arena_spin_lock__open_and_load"))
 		return;
+
 	if (skel->data->test_skip == 2) {
 		test__skip();
 		goto end;
@@ -86,6 +92,13 @@ static void test_arena_spin_lock_size(int size)
 		goto end_barrier;
 	}
 
+	if (skel->data->test_skip == 3) {
+		printf("%s:SKIP: CONFIG_NR_CPUS exceed the maximum supported by arena spinlock\n",
+		       __func__);
+		test__skip();
+		goto end_barrier;
+	}
+
 	ASSERT_EQ(skel->bss->counter, repeat * nthreads, "check counter value");
 
 end_barrier:
diff --git a/tools/testing/selftests/bpf/progs/arena_spin_lock.c b/tools/testing/selftests/bpf/progs/arena_spin_lock.c
index c4500c37f85e0..086b57a426cf5 100644
--- a/tools/testing/selftests/bpf/progs/arena_spin_lock.c
+++ b/tools/testing/selftests/bpf/progs/arena_spin_lock.c
@@ -37,8 +37,11 @@ int prog(void *ctx)
 #if defined(ENABLE_ATOMICS_TESTS) && defined(__BPF_FEATURE_ADDR_SPACE_CAST)
 	unsigned long flags;
 
-	if ((ret = arena_spin_lock_irqsave(&lock, flags)))
+	if ((ret = arena_spin_lock_irqsave(&lock, flags))) {
+		if (ret == -EOPNOTSUPP)
+			test_skip = 3;
 		return ret;
+	}
 	if (counter != limit)
 		counter++;
 	bpf_repeat(cs_count);