We see the following failure a few times a week:
# RUN global.data_steal ...
# tls.c:3280:data_steal:Expected recv(cfd, buf2, sizeof(buf2), MSG_DONTWAIT) (10000) == -1 (-1)
# data_steal: Test failed
# FAIL global.data_steal
not ok 8 global.data_steal
The 10000 bytes read suggests that the child process did a recv()
of half of the data using the TLS ULP and we're now getting the
remaining half. The intent of the test is to get the child to
enter the _TCP_ recvmsg handler, so it needs to enter the syscall
before the parent installs the TLS recvmsg with setsockopt(SOL_TLS).
Instead of the 10 msec sleep, send 1 byte of data and wait for the
child to consume it.
Signed-off-by: Jakub Kicinski <kuba(a)kernel.org>
---
CC: sd(a)queasysnail.net
CC: shuah(a)kernel.org
CC: linux-kselftest(a)vger.kernel.org
---
tools/testing/selftests/net/tls.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
index a4d16a460fbe..9e2ccea13d70 100644
--- a/tools/testing/selftests/net/tls.c
+++ b/tools/testing/selftests/net/tls.c
@@ -3260,17 +3260,25 @@ TEST(data_steal) {
ASSERT_EQ(setsockopt(cfd, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")), 0);
/* Spawn a child and get it into the read wait path of the underlying
- * TCP socket.
+ * TCP socket (before kernel .recvmsg is replaced with the TLS one).
*/
pid = fork();
ASSERT_GE(pid, 0);
if (!pid) {
- EXPECT_EQ(recv(cfd, buf, sizeof(buf) / 2, MSG_WAITALL),
- sizeof(buf) / 2);
+ EXPECT_EQ(recv(cfd, buf, sizeof(buf) / 2 + 1, MSG_WAITALL),
+ sizeof(buf) / 2 + 1);
exit(!__test_passed(_metadata));
}
- usleep(10000);
+ /* Send a sync byte and poll until it's consumed to ensure
+ * the child is in recv() before we proceed to install TLS.
+ */
+ ASSERT_EQ(send(fd, buf, 1, 0), 1);
+ do {
+ usleep(500);
+ } while (recv(cfd, buf, 1, MSG_PEEK | MSG_DONTWAIT) == 1);
+ EXPECT_EQ(errno, EAGAIN);
+
ASSERT_EQ(setsockopt(fd, SOL_TLS, TLS_TX, &tls, tls.len), 0);
ASSERT_EQ(setsockopt(cfd, SOL_TLS, TLS_RX, &tls, tls.len), 0);
--
2.52.0
run_vmtests.sh relies on being invoked from its own directory and uses
relative paths to run tests.
Change to the script directory at startup so it can be run from any
working directory without failing.
Signed-off-by: Sun Jian <sun.jian.kdev(a)gmail.com>
---
tools/testing/selftests/mm/run_vmtests.sh | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index d9173f2312b7..74c33fd07764 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -5,6 +5,10 @@
# Kselftest framework requirement - SKIP code is 4.
ksft_skip=4
+# Ensure relative paths work regardless of caller's cwd.
+SCRIPT_DIR=$(CDPATH= cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)
+cd "$SCRIPT_DIR" || exit 1
+
count_total=0
count_pass=0
count_fail=0
--
2.43.0
This patch series introduces BPF iterators for wakeup_source, enabling
BPF programs to efficiently traverse a device's wakeup sources.
Currently, inspecting wakeup sources typically involves reading interfaces
like /sys/class/wakeup/* or debugfs. The repeated syscalls needed to query
the sysfs nodes are inefficient, as there can be hundreds of wakeup_sources,
each with multiple stats and one sysfs node per stat. debugfs is unstable
and insecure.
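For illustration, a rough sketch of what a userspace reader must do today
against the sysfs interface (the stat file names follow the existing
/sys/class/wakeup layout; error handling is trimmed, and this is not part of
the series):

#include <dirent.h>
#include <stdio.h>

int main(void)
{
	const char *stats[] = { "name", "active_count", "event_count",
				"total_time_ms" };
	DIR *dir = opendir("/sys/class/wakeup");
	struct dirent *de;
	char path[512], val[64];

	if (!dir)
		return 1;
	while ((de = readdir(dir)) != NULL) {
		if (de->d_name[0] == '.')
			continue;
		/* One open()/read()/close() round trip per stat, per source. */
		for (int i = 0; i < 4; i++) {
			FILE *f;

			snprintf(path, sizeof(path), "/sys/class/wakeup/%s/%s",
				 de->d_name, stats[i]);
			f = fopen(path, "r");
			if (!f)
				continue;
			if (fgets(val, sizeof(val), f))
				printf("%s %s: %s", de->d_name, stats[i], val);
			fclose(f);
		}
	}
	closedir(dir);
	return 0;
}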
This series implements two types of iterators:
1. Standard BPF Iterator: Allows creating a BPF link to iterate over
wakeup sources
2. Open-coded Iterator: Enables the use of wakeup_source iterators directly
within BPF programs
Both iterators use the pre-existing wakeup_sources_walk_*() APIs to traverse
the SRCU-protected list of wakeup_sources.
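For the link-based flavor, consumption from userspace follows the usual BPF
iterator pattern with existing libbpf APIs. In the sketch below, 'prog' is
assumed to be the wakeup_source iterator program added by this series, and
whatever text it emits comes back through read():

#include <stdio.h>
#include <unistd.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static void dump_wakeup_sources(struct bpf_program *prog)
{
	struct bpf_link *link;
	char buf[4096];
	ssize_t len;
	int iter_fd;

	link = bpf_program__attach_iter(prog, NULL);
	if (!link)
		return;

	iter_fd = bpf_iter_create(bpf_link__fd(link));
	if (iter_fd >= 0) {
		/* The iterator program runs once per wakeup source while the
		 * kernel fills the seq_file backing this fd.
		 */
		while ((len = read(iter_fd, buf, sizeof(buf))) > 0)
			fwrite(buf, 1, len, stdout);
		close(iter_fd);
	}
	bpf_link__destroy(link);
}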
Changes in v2:
- Guard BPF Makefile with CONFIG_PM_SLEEP to fix build errors
- Update copyright from 2025 to 2026
- v1 link: https://lore.kernel.org/all/20251204025003.3162056-1-wusamuel@google.com/
Samuel Wu (4):
bpf: Add wakeup_source iterator
bpf: Open coded BPF for wakeup_sources
selftests/bpf: Add tests for wakeup_sources
selftests/bpf: Open coded BPF wakeup_sources test
kernel/bpf/Makefile | 3 +
kernel/bpf/helpers.c | 3 +
kernel/bpf/wakeup_source_iter.c | 137 ++++++++
.../testing/selftests/bpf/bpf_experimental.h | 5 +
tools/testing/selftests/bpf/config | 1 +
.../bpf/prog_tests/wakeup_source_iter.c | 323 ++++++++++++++++++
.../selftests/bpf/progs/wakeup_source_iter.c | 117 +++++++
7 files changed, 589 insertions(+)
create mode 100644 kernel/bpf/wakeup_source_iter.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/wakeup_source_iter.c
create mode 100644 tools/testing/selftests/bpf/progs/wakeup_source_iter.c
--
2.52.0.457.g6b5491de43-goog
From: Fushuai Wang <wangfushuai(a)baidu.com>
When /sys/kernel/tracing/buffer_size_kb is less than 12KB, the
test_multiple_writes test will stall waiting for more input due
to insufficient buffer space.
Check the current buffer_size_kb value before the test. If it is
less than 12KB, temporarily increase the buffer to 12KB and
restore the original value after the tests are completed.
Fixes: 37f46601383a ("selftests/tracing: Add basic test for trace_marker_raw file")
Suggested-by: Steven Rostedt <rostedt(a)goodmis.org>
Signed-off-by: Fushuai Wang <wangfushuai(a)baidu.com>
Acked-by: Steven Rostedt (Google) <rostedt(a)goodmis.org>
---
V2 -> V3: Make the From and SoB match.
V1 -> V2: Restore buffer_size_kb outside of awk script.
.../ftrace/test.d/00basic/trace_marker_raw.tc | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/ftrace/test.d/00basic/trace_marker_raw.tc b/tools/testing/selftests/ftrace/test.d/00basic/trace_marker_raw.tc
index 7daf7292209e..a2c42e13f614 100644
--- a/tools/testing/selftests/ftrace/test.d/00basic/trace_marker_raw.tc
+++ b/tools/testing/selftests/ftrace/test.d/00basic/trace_marker_raw.tc
@@ -89,6 +89,7 @@ test_buffer() {
# The id must be four bytes, test that 3 bytes fails a write
if echo -n abc > ./trace_marker_raw ; then
echo "Too small of write expected to fail but did not"
+ echo ${ORIG} > buffer_size_kb
exit_fail
fi
@@ -99,9 +100,24 @@ test_buffer() {
if write_buffer 0xdeadbeef $size ; then
echo "Too big of write expected to fail but did not"
+ echo ${ORIG} > buffer_size_kb
exit_fail
fi
}
+ORIG=`cat buffer_size_kb`
+
+# test_multiple_writes test needs at least 12KB buffer
+NEW_SIZE=12
+
+if [ ${ORIG} -lt ${NEW_SIZE} ]; then
+ echo ${NEW_SIZE} > buffer_size_kb
+fi
+
test_buffer
-test_multiple_writes
+if ! test_multiple_writes; then
+ echo ${ORIG} > buffer_size_kb
+ exit_fail
+fi
+
+echo ${ORIG} > buffer_size_kb
--
2.36.1
Resending the patch series due to a previous "4.7.1 Error: too many recipients"
failure.
===
This patch series builds upon the discussion in
"[PATCH bpf-next v4 0/4] bpf: Improve error reporting for freplace attachment failure" [1].
This patch series introduces support for *common attributes* in the BPF
syscall, providing a unified mechanism for passing shared metadata across
all BPF commands.
The initial set of common attributes includes:
1. 'log_buf': User-provided buffer for storing log output.
2. 'log_size': Size of the provided log buffer.
3. 'log_level': Verbosity level for logging.
4. 'log_true_size': The actual size of the log, as reported by the kernel.
With this extension, the BPF syscall will be able to return meaningful
error messages (e.g., for map creation failures), improving debuggability
and user experience.
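For context, this per-command log pattern already exists for BPF_PROG_LOAD
via libbpf; a minimal sketch of that existing usage is below (the common
attributes in this series extend the same log_buf/log_size/log_level/
log_true_size idea to commands such as BPF_MAP_CREATE):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <bpf/bpf.h>
#include <linux/bpf.h>

static int load_with_log(const struct bpf_insn *insns, size_t insn_cnt,
			 char *log, size_t log_sz)
{
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		.log_buf = log,
		.log_size = log_sz,
		.log_level = 1,		/* verifier verbosity */
	);
	int fd;

	fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "prog", "GPL",
			   insns, insn_cnt, &opts);
	if (fd < 0)
		fprintf(stderr, "load failed: %s\n%s\n", strerror(errno), log);
	/* opts.log_true_size reports how large the log actually needed to be. */
	return fd;
}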
Changes:
RFC v3 -> v4:
* Drop RFC.
* Address comments from Andrii:
* Add parentheses in 'sys_bpf_ext()'.
* Avoid creating new fd in 'probe_sys_bpf_ext()'.
* Add a new struct to wrap log fields in libbpf.
* Address comments from Alexei:
* Do not skip writing to user space when log_true_size is zero.
* Do not use 'bool' arguments.
* Drop the added WARN_ON_ONCE()s.
RFC v2 -> RFC v3:
* Rename probe_sys_bpf_extended to probe_sys_bpf_ext.
* Refactor reporting 'log_true_size' for prog_load.
* Refactor reporting 'btf_log_true_size' for btf_load.
* Add warnings for internal bugs in map_create.
* Check log_true_size in test cases.
* Address comment from Alexei:
* Change kvzalloc/kvfree to kzalloc/kfree.
* Address comments from Andrii:
* Move BPF_COMMON_ATTRS to 'enum bpf_cmd' alongside brief comment.
* Add bpf_check_uarg_tail_zero() for extra checks.
* Rename sys_bpf_extended to sys_bpf_ext.
* Rename sys_bpf_fd_extended to sys_bpf_ext_fd.
* Probe the new feature using NULL and -EFAULT.
* Move probe_sys_bpf_ext to libbpf_internal.h and drop LIBBPF_API.
* Return -EUSERS when log attrs conflict between bpf_attr and
bpf_common_attr.
* Avoid touching bpf_vlog_init().
* Update the reason messages in map_create.
* Finalize the log using __cleanup().
* Report log size to users.
* Change type of log_buf from '__u64' to 'const char *' and cast type
using ptr_to_u64() in bpf_map_create().
* Do not return -EOPNOTSUPP when kernel doesn't support this feature
in bpf_map_create().
* Add log_level support for map creation for consistency.
* Address comment from Eduard:
* Use common_attrs->log_level instead of BPF_LOG_FIXED.
RFC v1 -> RFC v2:
* Fix build error reported by test bot.
* Address comments from Alexei:
* Drop new uapi for freplace.
* Add common attributes support for prog_load and btf_load.
* Add common attributes support for map_create.
Links:
[1] https://lore.kernel.org/bpf/20250224153352.64689-1-leon.hwang@linux.dev/
Leon Hwang (9):
bpf: Extend bpf syscall with common attributes support
libbpf: Add support for extended bpf syscall
bpf: Refactor reporting log_true_size for prog_load
bpf: Add common attr support for prog_load
bpf: Refactor reporting btf_log_true_size for btf_load
bpf: Add common attr support for btf_load
bpf: Add common attr support for map_create
libbpf: Add common attr support for map_create
selftests/bpf: Add tests to verify map create failure log
include/linux/bpf.h | 2 +-
include/linux/btf.h | 2 +-
include/linux/syscalls.h | 3 +-
include/uapi/linux/bpf.h | 8 +
kernel/bpf/btf.c | 25 +-
kernel/bpf/syscall.c | 223 ++++++++++++++++--
kernel/bpf/verifier.c | 12 +-
tools/include/uapi/linux/bpf.h | 8 +
tools/lib/bpf/bpf.c | 49 +++-
tools/lib/bpf/bpf.h | 17 +-
tools/lib/bpf/features.c | 8 +
tools/lib/bpf/libbpf_internal.h | 3 +
.../selftests/bpf/prog_tests/map_init.c | 143 +++++++++++
13 files changed, 448 insertions(+), 55 deletions(-)
--
2.52.0
When the PMU LBR is running in branch-sensitive mode,
'perf_snapshot_branch_stack()' may capture branch entries from the
trampoline entry up to the call site inside a BPF program. These branch
entries are not useful for analyzing the control flow of the tracee.
To eliminate such noise for tracing programs, the branch snapshot should
be taken as early as possible:
* Call 'perf_snapshot_branch_stack()' at the very beginning of the
trampoline for fentry programs.
* Call 'perf_snapshot_branch_stack()' immediately after invoking the
tracee for fexit programs.
With this change, LBR snapshots remain meaningful even when multiple BPF
programs execute before the one requesting LBR data.
In addition, more relevant branch entries can be captured on AMD CPUs,
which provide a 16-entry-deep LBR stack.
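On the BPF program side, the snapshot is still consumed through the existing
bpf_get_branch_snapshot() helper. A minimal fexit sketch is shown below; the
new BPF_BRANCH_SNAPSHOT_F_COPY flag from patch 2 would go into the flags
argument, so 0 is used here to avoid guessing its semantics:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct perf_branch_entry entries[32] = {};
__u64 nr_entries;

SEC("fexit/icmp_rcv")
int BPF_PROG(snap_lbr)
{
	long sz;

	/* Returns the number of bytes written to entries[]. */
	sz = bpf_get_branch_snapshot(entries, sizeof(entries), 0);
	if (sz > 0)
		nr_entries = sz / sizeof(struct perf_branch_entry);
	return 0;
}

char _license[] SEC("license") = "GPL";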
Testing
The series was tested in a VM configured with LBR enabled:
vmtest --kvm-cpu-args 'host,pmu=on,lbr-fmt=0x5' -k $(make -s image_name) -
Branch records were verified using bpfsnoop [1]:
/path/to/bpfsnoop -k '(l)icmp_rcv' -E 1 -v \
--kernel-vmlinux /path/to/kernel/vmlinux
For comparison, the following command was used without
BPF_BRANCH_SNAPSHOT_F_COPY:
/path/to/bpfsnoop -k '(l)icmp_rcv' -E 1 -v \
--force-get-branch-snapshot --kernel-vmlinux /path/to/kernel/vmlinux
Without BPF_BRANCH_SNAPSHOT_F_COPY, no branch records related to the
tracee are captured. With it enabled, 17 branch records from the tracee
are observed.
Detailed verification results are available in the gist [2].
With this series applied, retsnoop [3] can benefit from improved LBR
support when using the '--lbr --fentries' options.
Links:
[1] https://github.com/bpfsnoop/bpfsnoop
[2] https://gist.github.com/Asphaltt/cffdeb4b2f2db4c3c42f91a59109f9e7
[3] https://github.com/anakryiko/retsnoop
Leon Hwang (3):
bpf, x64: Call perf_snapshot_branch_stack in trampoline
bpf: Introduce BPF_BRANCH_SNAPSHOT_F_COPY flag for
bpf_get_branch_snapshot helper
selftests/bpf: Add BPF_BRANCH_SNAPSHOT_F_COPY test
arch/x86/net/bpf_jit_comp.c | 66 +++++++++++++++++++
include/linux/bpf.h | 18 ++++-
include/linux/bpf_verifier.h | 1 +
kernel/bpf/verifier.c | 30 +++++++++
kernel/trace/bpf_trace.c | 17 ++++-
.../bpf/prog_tests/get_branch_snapshot.c | 26 +++++++-
.../selftests/bpf/progs/get_branch_snapshot.c | 3 +-
7 files changed, 153 insertions(+), 8 deletions(-)
--
2.52.0