Pahole fails to encode BTF for some Go projects (e.g. Kubernetes and
Podman) due to recursive type definitions that create reference loops
not representable in C. These recursive typedefs trigger a failure in
the BTF deduplication algorithm.
This series extends btf_dedup_struct_types() to properly handle potential
recursion for BTF_KIND_TYPEDEF, similar to how recursion is already
handled for BTF_KIND_STRUCT. This allows pahole to successfully generate
BTF for Go binaries that use recursive types without impacting existing
C-based workflows.
Changes in v4: fix typo found by Claude-based CI
Changes in v3:
1. Patch 1: Adjusted the comment of btf_dedup_ref_type() to refer to
typedefs as well.
2. Patch 2: Updated the "dedup: recursive typedef" test to include a
duplicated version of the types to make sure deduplication still happens
in this case.
Changes in v2:
1. Patch 1: Refactored the code to avoid duplicating existing logic. Instead
of adding a new function, the existing btf_dedup_struct_type() function is
modified to handle the BTF_KIND_TYPEDEF case. Calls to btf_hash_struct()
and btf_shallow_equal_struct() are replaced with calls to functions that
select btf_hash_struct() / btf_hash_typedef() based on the type (sketched
below).
2. Patch 2: Added tests
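For illustration, the kind-aware hash selection described in item 1 could
look roughly like this btf.c-internal helper (a sketch reusing the names
mentioned above, not the exact upstream code):

static long btf_hash_struct_or_typedef(struct btf_type *t)
{
	/* Sketch: btf_hash_typedef() is the typedef-hashing helper referred
	 * to above; both kinds now share the recursion-aware dedup path.
	 */
	if (btf_kind(t) == BTF_KIND_TYPEDEF)
		return btf_hash_typedef(t);
	return btf_hash_struct(t);
}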
v3: https://lore.kernel.org/lkml/cover.1763024337.git.paul.houssel@orange.com/
v2: https://lore.kernel.org/lkml/cover.1762956564.git.paul.houssel@orange.com/
v1: https://lore.kernel.org/lkml/20251107153408.159342-1-paulhoussel2@gmail.com/
Paul Houssel (2):
libbpf: fix BTF dedup to support recursive typedef definitions
selftests/bpf: add BTF dedup tests for recursive typedef definitions
tools/lib/bpf/btf.c | 71 +++++++++++++++-----
tools/testing/selftests/bpf/prog_tests/btf.c | 65 ++++++++++++++++++
2 files changed, 120 insertions(+), 16 deletions(-)
--
2.51.0
Hello,
IIUC this is the first time guest_memfd's in-place conversion support goes
out as an independent patch series! Happy to finally bring this out on its
own.
Previous versions of this feature, part of other series, are available at
[1][2][3].
Many prior discussions have led up to the main features of this series, and
these are the points I'd most like feedback on.
1. Having private/shared status stored in a maple tree (Thanks Michael for your
support of using maple trees over xarrays for performance! [4]).
2. Having a new guest_memfd ioctl (not a vm ioctl) that performs conversions.
3. Using ioctls/structs/input attributes similar to the existing vm ioctl
KVM_SET_MEMORY_ATTRIBUTES to perform conversions (sketched below).
4. Storing requested attributes directly in the maple tree.
5. Using a KVM module-wide param to toggle between setting memory attributes via
vm and guest_memfd ioctls (making them mutually exclusive - a single loaded
KVM module can only do one of the two).
6. Skipping LRU for guest_memfd folios - make guest_memfd folios not participate
in the LRU so that LRU refcounts do not interfere with conversions.
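To make point 3 concrete, a conversion request could look roughly like the
sketch below. The struct is the existing KVM_SET_MEMORY_ATTRIBUTES uapi;
issuing it against the guest_memfd fd and the final ioctl name/layout are
assumptions about this series, not settled uapi:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical sketch: convert a range of a guest_memfd to shared by
 * clearing KVM_MEMORY_ATTRIBUTE_PRIVATE. Calling this on the gmem fd
 * (rather than the VM fd) mirrors points 2-4 above and is an assumption,
 * not the uapi defined by this series.
 */
static int gmem_convert_to_shared(int gmem_fd, __u64 offset, __u64 size)
{
	struct kvm_memory_attributes attrs = {
		.address    = offset,	/* offset into the guest_memfd */
		.size       = size,
		.attributes = 0,	/* no PRIVATE bit => shared */
	};

	return ioctl(gmem_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
}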
This series is based on kvm/next, followed by
+ v12 of NUMA mempolicy support patches [5]
+ 3 cleanup patches from Sean [6][7][8]
Everything is stitched together here for your convenience
https://github.com/googleprodkernel/linux-cc/commits/guest_memfd-inplace-co…
Thank you all for helping with this series!
If I missed your comment from a previous series, it's not intentional!
Please do raise it again.
TODOs:
+ There might be an issue with memory failure handling when guest_memfd
folios stop participating in the LRU. From a preliminary analysis,
HWPoisonHandlable() is only true if PageLRU() is true. This needs further
investigation.
[1] https://lore.kernel.org/all/bd163de3118b626d1005aa88e71ef2fb72f0be0f.172600…
[2] https://lore.kernel.org/all/20250117163001.2326672-6-tabba@google.com/
[3] https://lore.kernel.org/all/b784326e9ccae6a08388f1bf39db70a2204bdc51.174726…
[4] https://lore.kernel.org/all/20250529054227.hh2f4jmyqf6igd3i@amd.com/
[5] https://lore.kernel.org/all/20251007221420.344669-1-seanjc@google.com/T/
[6] https://lore.kernel.org/all/20250924174255.2141847-1-seanjc@google.com/
[7] https://lore.kernel.org/all/20251007224515.374516-1-seanjc@google.com/
[8] https://lore.kernel.org/all/20251007223625.369939-1-seanjc@google.com/
Ackerley Tng (19):
KVM: guest_memfd: Update kvm_gmem_populate() to use gmem attributes
KVM: Introduce KVM_SET_MEMORY_ATTRIBUTES2
KVM: guest_memfd: Don't set FGP_ACCESSED when getting folios
KVM: guest_memfd: Skip LRU for guest_memfd folios
KVM: guest_memfd: Add support for KVM_SET_MEMORY_ATTRIBUTES
KVM: selftests: Update framework to use KVM_SET_MEMORY_ATTRIBUTES2
KVM: selftests: guest_memfd: Test basic single-page conversion flow
KVM: selftests: guest_memfd: Test conversion flow when INIT_SHARED
KVM: selftests: guest_memfd: Test indexing in guest_memfd
KVM: selftests: guest_memfd: Test conversion before allocation
KVM: selftests: guest_memfd: Convert with allocated folios in
different layouts
KVM: selftests: guest_memfd: Test precision of conversion
KVM: selftests: guest_memfd: Test that truncation does not change
shared/private status
KVM: selftests: guest_memfd: Test conversion with elevated page
refcount
KVM: selftests: Reset shared memory after hole-punching
KVM: selftests: Provide function to look up guest_memfd details from
gpa
KVM: selftests: Make TEST_EXPECT_SIGBUS thread-safe
KVM: selftests: Update private_mem_conversions_test to mmap()
guest_memfd
KVM: selftests: Add script to exercise private_mem_conversions_test
Sean Christopherson (18):
KVM: guest_memfd: Introduce per-gmem attributes, use to guard user
mappings
KVM: Rename KVM_GENERIC_MEMORY_ATTRIBUTES to KVM_VM_MEMORY_ATTRIBUTES
KVM: Enumerate support for PRIVATE memory iff kvm_arch_has_private_mem
is defined
KVM: Stub in ability to disable per-VM memory attribute tracking
KVM: guest_memfd: Wire up kvm_get_memory_attributes() to per-gmem
attributes
KVM: guest_memfd: Enable INIT_SHARED on guest_memfd for x86 Coco VMs
KVM: Move KVM_VM_MEMORY_ATTRIBUTES config definition to x86
KVM: Let userspace disable per-VM mem attributes, enable per-gmem
attributes
KVM: selftests: Create gmem fd before "regular" fd when adding memslot
KVM: selftests: Rename guest_memfd{,_offset} to gmem_{fd,offset}
KVM: selftests: Add support for mmap() on guest_memfd in core library
KVM: selftests: Add helpers for calling ioctls on guest_memfd
KVM: selftests: guest_memfd: Test that shared/private status is
consistent across processes
KVM: selftests: Add selftests global for guest memory attributes
capability
KVM: selftests: Provide common function to set memory attributes
KVM: selftests: Check fd/flags provided to mmap() when setting up
memslot
KVM: selftests: Update pre-fault test to work with per-guest_memfd
attributes
KVM: selftests: Update private memory exits test to work with per-gmem
attributes
Documentation/virt/kvm/api.rst | 72 ++-
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/Kconfig | 15 +-
arch/x86/kvm/mmu/mmu.c | 4 +-
arch/x86/kvm/x86.c | 13 +-
include/linux/kvm_host.h | 44 +-
include/trace/events/kvm.h | 4 +-
include/uapi/linux/kvm.h | 17 +
mm/filemap.c | 1 +
mm/memcontrol.c | 2 +
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../kvm/guest_memfd_conversions_test.c | 498 ++++++++++++++++++
.../testing/selftests/kvm/include/kvm_util.h | 127 ++++-
.../testing/selftests/kvm/include/test_util.h | 29 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 128 +++--
tools/testing/selftests/kvm/lib/test_util.c | 7 -
.../selftests/kvm/pre_fault_memory_test.c | 2 +-
.../kvm/x86/private_mem_conversions_test.c | 55 +-
.../kvm/x86/private_mem_conversions_test.py | 159 ++++++
.../kvm/x86/private_mem_kvm_exits_test.c | 36 +-
virt/kvm/Kconfig | 4 +-
virt/kvm/guest_memfd.c | 414 +++++++++++++--
virt/kvm/kvm_main.c | 104 +++-
24 files changed, 1554 insertions(+), 185 deletions(-)
create mode 100644 tools/testing/selftests/kvm/guest_memfd_conversions_test.c
create mode 100755 tools/testing/selftests/kvm/x86/private_mem_conversions_test.py
--
2.51.0.858.gf9c4a03a3a-goog
Since commit 31158ad02ddb ("rqspinlock: Add deadlock detection
and recovery"), re-entrant map updates now report deadlock via -EDEADLK
instead of the previous -EBUSY.
Also, the way re-entrancy was exercised (via fentry on lookup_elem_raw())
has been fragile because lookup_elem_raw may be inlined, in which case
find_kernel_btf_id() returns -ESRCH.
To fix this, the fentry program is attached to bpf_obj_free_fields()
instead of lookup_elem_raw(), and:
- The htab map is made to use a BTF-described struct val with a
struct bpf_timer so that check_and_free_fields() reliably calls
bpf_obj_free_fields() on element replacement.
- The selftest is updated to do two updates to the same key (insert +
replace) in prog_test.
- The selftest is updated to align the expected errno with the
kernel's current behavior.
Signed-off-by: Saket Kumar Bhaskar <skb99(a)linux.ibm.com>
---
Changes since v1:
Addressed comments from Alexei:
* Fixed the scenario where test may fail when lookup_elem_raw()
is inlined.
v1: https://lore.kernel.org/all/20251106052628.349117-1-skb99@linux.ibm.com/
.../selftests/bpf/prog_tests/htab_update.c | 38 ++++++++++++++-----
.../testing/selftests/bpf/progs/htab_update.c | 19 +++++++---
2 files changed, 41 insertions(+), 16 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/htab_update.c b/tools/testing/selftests/bpf/prog_tests/htab_update.c
index 2bc85f4814f4..96b65c1a321a 100644
--- a/tools/testing/selftests/bpf/prog_tests/htab_update.c
+++ b/tools/testing/selftests/bpf/prog_tests/htab_update.c
@@ -15,17 +15,17 @@ struct htab_update_ctx {
static void test_reenter_update(void)
{
struct htab_update *skel;
- unsigned int key, value;
+ void *value = NULL;
+ unsigned int key = 0, value_size;
int err;
skel = htab_update__open();
if (!ASSERT_OK_PTR(skel, "htab_update__open"))
return;
- /* lookup_elem_raw() may be inlined and find_kernel_btf_id() will return -ESRCH */
- bpf_program__set_autoload(skel->progs.lookup_elem_raw, true);
+ bpf_program__set_autoload(skel->progs.bpf_obj_free_fields, true);
err = htab_update__load(skel);
- if (!ASSERT_TRUE(!err || err == -ESRCH, "htab_update__load") || err)
+ if (!ASSERT_OK(err, "htab_update__load"))
goto out;
skel->bss->pid = getpid();
@@ -33,14 +33,32 @@ static void test_reenter_update(void)
if (!ASSERT_OK(err, "htab_update__attach"))
goto out;
- /* Will trigger the reentrancy of bpf_map_update_elem() */
- key = 0;
- value = 0;
- err = bpf_map_update_elem(bpf_map__fd(skel->maps.htab), &key, &value, 0);
- if (!ASSERT_OK(err, "add element"))
+ value_size = bpf_map__value_size(skel->maps.htab);
+
+ value = calloc(1, value_size);
+ if (!ASSERT_OK_PTR(value, "calloc value"))
+ goto out;
+ /*
+ * First update: plain insert. This should NOT trigger the re-entrancy
+ * path, because there is no old element to free yet.
+ */
+ err = bpf_map_update_elem(bpf_map__fd(skel->maps.htab), &key, value, BPF_ANY);
+ if (!ASSERT_OK(err, "first update (insert)"))
+ goto out;
+
+ /*
+ * Second update: replace existing element with same key and trigger
+ * the reentrancy of bpf_map_update_elem().
+ * check_and_free_fields() calls bpf_obj_free_fields() on the old
+ * value, which is where the fentry program runs and performs a nested
+ * bpf_map_update_elem(), triggering -EDEADLK.
+ */
+ memset(value, 0, value_size);
+ err = bpf_map_update_elem(bpf_map__fd(skel->maps.htab), &key, value, BPF_ANY);
+ if (!ASSERT_OK(err, "second update (replace)"))
goto out;
- ASSERT_EQ(skel->bss->update_err, -EBUSY, "no reentrancy");
+ ASSERT_EQ(skel->bss->update_err, -EDEADLK, "no reentrancy");
out:
htab_update__destroy(skel);
}
diff --git a/tools/testing/selftests/bpf/progs/htab_update.c b/tools/testing/selftests/bpf/progs/htab_update.c
index 7481bb30b29b..195d3b2fba00 100644
--- a/tools/testing/selftests/bpf/progs/htab_update.c
+++ b/tools/testing/selftests/bpf/progs/htab_update.c
@@ -6,24 +6,31 @@
char _license[] SEC("license") = "GPL";
+/* Map value type: has BTF-managed field (bpf_timer) */
+struct val {
+ struct bpf_timer t;
+ __u64 payload;
+};
+
struct {
__uint(type, BPF_MAP_TYPE_HASH);
__uint(max_entries, 1);
- __uint(key_size, sizeof(__u32));
- __uint(value_size, sizeof(__u32));
+ __type(key, __u32);
+ __type(value, struct val);
} htab SEC(".maps");
int pid = 0;
int update_err = 0;
-SEC("?fentry/lookup_elem_raw")
-int lookup_elem_raw(void *ctx)
+SEC("?fentry/bpf_obj_free_fields")
+int bpf_obj_free_fields(void *ctx)
{
- __u32 key = 0, value = 1;
+ __u32 key = 0;
+ struct val value = { .payload = 1 };
if ((bpf_get_current_pid_tgid() >> 32) != pid)
return 0;
- update_err = bpf_map_update_elem(&htab, &key, &value, 0);
+ update_err = bpf_map_update_elem(&htab, &key, &value, BPF_ANY);
return 0;
}
--
2.51.0
While debugging issues related to aarch64 only systems I ran into
speedbumps due to the lack of detail in the results reported when the
guest register read and reset value preservation tests were run: they
generated an immediately fatal assert without indicating which register
was being tested. Update these tests to report a result per register,
making it much easier to see what problem is being reported.
A similar, though less severe, issue exists with the validation of the
individual bitfields in registers due to the use of immediately fatal
asserts. Update those asserts to be standard kselftest reports.
Finally we have a fix for spurious errors on some NV systems.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
Changes in v2:
- Add a fix for spurious failures with 64 bit only guests.
- Link to v1: https://patch.msgid.link/20251030-kvm-arm64-set-id-regs-aarch64-v1-0-96fe0d…
---
Mark Brown (4):
KVM: selftests: arm64: Report set_id_reg reads of test registers as tests
KVM: selftests: arm64: Report register reset tests individually
KVM: selftests: arm64: Make set_id_regs bitfield validity checks non-fatal
KVM: selftests: arm64: Skip all 32 bit IDs when set_id_regs is aarch64 only
tools/testing/selftests/kvm/arm64/set_id_regs.c | 150 ++++++++++++++++++------
1 file changed, 111 insertions(+), 39 deletions(-)
---
base-commit: 211ddde0823f1442e4ad052a2f30f050145ccada
change-id: 20251028-kvm-arm64-set-id-regs-aarch64-ebb77969401c
Best regards,
--
Mark Brown <broonie(a)kernel.org>
On systems that support shared guest memory, write() is useful, for
example, for populating the initial image. Even though the same can
also be achieved via a userspace mapping and memcpying from userspace,
write() provides a more performant option because it does not need to
set up user page tables and does not cause a page fault for every page
like memcpy would. Note that memcpy cannot be accelerated via
MADV_POPULATE_WRITE, which is not supported by guest_memfd as it relies
on GUP.
Populating 512MiB of guest_memfd on an x86 machine:
- via memcpy: 436 ms
- via write: 202 ms (-54%)
Only PAGE_ALIGNED offset and len are allowed. Even though non-aligned
writes are technically possible, the restriction makes handling of mixed
shared/private huge pages simpler once in-place conversion support is
implemented [1]. write() will only be allowed to populate shared pages.
When direct map removal is implemented [2]
- write() will not be allowed to access pages that have already
been removed from direct map
- on completion, write() will remove the populated pages from
direct map
While it is technically possible to implement the read() syscall on
systems with shared guest memory, it is not supported as there is
currently no use case for it.
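For reference, the intended usage is simply a page-aligned write on the
guest_memfd fd; a minimal sketch (assuming the write() support added by
this series, error handling trimmed):

#include <sys/types.h>
#include <unistd.h>

/* Populate the start of a guest_memfd (obtained via KVM_CREATE_GUEST_MEMFD)
 * with an initial image. Both the offset (0 here) and len must be
 * PAGE_ALIGNED, as described above.
 */
static int gmem_populate_image(int gmem_fd, const void *image, size_t len)
{
	return pwrite(gmem_fd, image, len, 0) == (ssize_t)len ? 0 : -1;
}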
[1]
https://lore.kernel.org/kvm/cover.1760731772.git.ackerleytng@google.com
[2]
https://lore.kernel.org/kvm/20250924151101.2225820-1-patrick.roy@campus.lmu…
Nikita Kalyazin (2):
KVM: guest_memfd: add generic population via write
KVM: selftests: update guest_memfd write tests
Documentation/virt/kvm/api.rst | 2 +
include/linux/kvm_host.h | 2 +-
include/uapi/linux/kvm.h | 1 +
.../testing/selftests/kvm/guest_memfd_test.c | 58 +++++++++++++++++--
virt/kvm/guest_memfd.c | 52 +++++++++++++++++
5 files changed, 108 insertions(+), 7 deletions(-)
base-commit: 8a4821412cf2c1429fffa07c012dd150f2edf78c
--
2.50.1
Enable presetting the filter parameters from Kconfig options, similar to
how other KUnit configuration parameters are handled already.
This is useful to run a subset of tests even if the cmdline is not
readily modifiable.
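For example, to run only a subset of suites by default, one could set
(illustrative glob value):

	CONFIG_KUNIT_DEFAULT_FILTER_GLOB="example*"

which is equivalent to booting with kunit.filter_glob=example*.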
Signed-off-by: Thomas Weißschuh <thomas.weissschuh(a)linutronix.de>
---
lib/kunit/Kconfig | 24 ++++++++++++++++++++++++
lib/kunit/executor.c | 8 +++++---
2 files changed, 29 insertions(+), 3 deletions(-)
diff --git a/lib/kunit/Kconfig b/lib/kunit/Kconfig
index 7a6af361d2fc6276b9667be8c694b0c80e33c1e8..50ecf55d2b9c8a82f2aff7a0b4156bd6179b0a2f 100644
--- a/lib/kunit/Kconfig
+++ b/lib/kunit/Kconfig
@@ -93,6 +93,30 @@ config KUNIT_AUTORUN_ENABLED
In most cases this should be left as Y. Only if additional opt-in
behavior is needed should this be set to N.
+config KUNIT_DEFAULT_FILTER_GLOB
+ string "Default value of the filter_glob module parameter"
+ help
+ Sets the default value of kunit.filter_glob. If set to a non-empty
+ string only matching tests are executed.
+
+ If unsure, leave empty so all tests are executed.
+
+config KUNIT_DEFAULT_FILTER
+ string "Default value of the filter module parameter"
+ help
+ Sets the default value of kunit.filter. If set to a non-empty
+ string only matching tests are executed.
+
+ If unsure, leave empty so all tests are executed.
+
+config KUNIT_DEFAULT_FILTER_ACTION
+ string "Default value of the filter_action module parameter"
+ help
+ Sets the default value of kunit.filter_action. If set to "skip",
+ filtered-out tests are reported as skipped instead of not being run.
+
+ If unsure, leave empty.
+
config KUNIT_DEFAULT_TIMEOUT
int "Default value of the timeout module parameter"
default 300
diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c
index 0061d4c7e35170634a3c1d1cff7179037fb8ba07..02ff380ab7938cfac2be3f8c0e7630a78961cc3d 100644
--- a/lib/kunit/executor.c
+++ b/lib/kunit/executor.c
@@ -45,9 +45,11 @@ bool kunit_autorun(void)
return autorun_param;
}
-static char *filter_glob_param;
-static char *filter_param;
-static char *filter_action_param;
+#define PARAM_FROM_CONFIG(config) (config[0] ? config : NULL)
+
+static char *filter_glob_param = PARAM_FROM_CONFIG(CONFIG_KUNIT_DEFAULT_FILTER_GLOB);
+static char *filter_param = PARAM_FROM_CONFIG(CONFIG_KUNIT_DEFAULT_FILTER);
+static char *filter_action_param = PARAM_FROM_CONFIG(CONFIG_KUNIT_DEFAULT_FILTER_ACTION);
module_param_named(filter_glob, filter_glob_param, charp, 0600);
MODULE_PARM_DESC(filter_glob,
---
base-commit: 3a8660878839faadb4f1a6dd72c3179c1df56787
change-id: 20251106-kunit-filter-kconfig-f08998936fc6
Best regards,
--
Thomas Weißschuh <thomas.weissschuh(a)linutronix.de>
On systems where the shmget() syscall is not supported, tests like
anon_page and shared_waitv will fail. Skip these tests in such cases to
allow the rest of the test suite to run.
Signed-off-by: Carlos Llamas <cmllamas(a)google.com>
---
tools/testing/selftests/futex/functional/futex_wait.c | 2 ++
tools/testing/selftests/futex/functional/futex_waitv.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/tools/testing/selftests/futex/functional/futex_wait.c b/tools/testing/selftests/futex/functional/futex_wait.c
index 152ca4612886..1269642bb662 100644
--- a/tools/testing/selftests/futex/functional/futex_wait.c
+++ b/tools/testing/selftests/futex/functional/futex_wait.c
@@ -71,6 +71,8 @@ TEST(anon_page)
/* Testing an anon page shared memory */
shm_id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0666);
if (shm_id < 0) {
+ if (errno == ENOSYS)
+ ksft_exit_skip("shmget syscall not supported\n");
perror("shmget");
exit(1);
}
diff --git a/tools/testing/selftests/futex/functional/futex_waitv.c b/tools/testing/selftests/futex/functional/futex_waitv.c
index c684b10eb76e..3bc4e5dc70e7 100644
--- a/tools/testing/selftests/futex/functional/futex_waitv.c
+++ b/tools/testing/selftests/futex/functional/futex_waitv.c
@@ -86,6 +86,8 @@ TEST(shared_waitv)
int shm_id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0666);
if (shm_id < 0) {
+ if (errno == ENOSYS)
+ ksft_exit_skip("shmget syscall not supported\n");
perror("shmget");
exit(1);
}
--
2.52.0.rc1.455.g30608eb744-goog
Overall, we encountered a warning [1] that can be triggered by running the
selftest I provided.
sockmap works by replacing sk_data_ready, recvmsg, sendmsg operations and
implementing fast socket-level forwarding logic:
1. Users can obtain file descriptors through userspace socket()/accept()
interfaces, then call BPF syscall to perform these replacements.
2. Users can also use the bpf_sock_hash_update helper (in sockops programs)
to replace handlers when TCP connections enter ESTABLISHED state
(BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB/BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB),
as sketched below.
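For reference, a minimal sockops program of the kind described in point 2
could look like this (illustrative sketch, not part of this series):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_SOCKHASH);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u64);
} sock_hash SEC(".maps");

SEC("sockops")
int add_established(struct bpf_sock_ops *skops)
{
	__u32 key = skops->local_port;

	switch (skops->op) {
	case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
	case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
		/* With MPTCP, skops here may be a subflow (plain TCP) sk;
		 * patch 1 makes this update fail for such sockets.
		 */
		bpf_sock_hash_update(skops, &sock_hash, &key, BPF_NOEXIST);
		break;
	}
	return 0;
}

char _license[] SEC("license") = "GPL";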
However, when combined with MPTCP, an issue arises: MPTCP creates subflow
sk's and performs TCP handshakes, so the BPF program obtains subflow sk's
and may incorrectly replace their sk_prot. We need to reject such
operations. In patch 1, we set psock_update_sk_prot to NULL in the
subflow's custom sk_prot.
Additionally, if the server's listening socket has MPTCP enabled and the
client's TCP also uses MPTCP, we should allow the combination of subflow
and sockmap. This is because the latest Golang programs have enabled MPTCP
for listening sockets by default [2]. For programs already using sockmap,
upgrading Golang should not cause sockmap functionality to fail.
Patch 2 prevents the WARNING from occurring.
Although these patches fix the stream corruption, users of sockmap must
still set GODEBUG=multipathtcp=0 to disable MPTCP until sockmap fully
supports it.
[1] truncated warning:
------------[ cut here ]------------
WARNING: CPU: 1 PID: 388 at net/mptcp/protocol.c:68 mptcp_stream_accept+0x34c/0x380
Modules linked in:
RIP: 0010:mptcp_stream_accept+0x34c/0x380
RSP: 0018:ffffc90000cf3cf8 EFLAGS: 00010202
PKRU: 55555554
Call Trace:
<TASK>
do_accept+0xeb/0x190
? __x64_sys_pselect6+0x61/0x80
? _raw_spin_unlock+0x12/0x30
? alloc_fd+0x11e/0x190
__sys_accept4+0x8c/0x100
__x64_sys_accept+0x1f/0x30
x64_sys_call+0x202f/0x20f0
do_syscall_64+0x72/0x9a0
? switch_fpu_return+0x60/0xf0
? irqentry_exit_to_user_mode+0xdb/0x1e0
? irqentry_exit+0x3f/0x50
? clear_bhb_loop+0x50/0xa0
? clear_bhb_loop+0x50/0xa0
? clear_bhb_loop+0x50/0xa0
entry_SYSCALL_64_after_hwframe+0x76/0x7e
</TASK>
---[ end trace 0000000000000000 ]---
[2]: https://go-review.googlesource.com/c/go/+/607715
---
v4 -> v5: Dropped redundant selftest code, updated the Fixes tag, and
added a Reviewed-by tag.
v3 -> v4: Addressed questions from Matthieu and Paolo, explained sockmap's
operational mechanism, and finalized the changes
v2 -> v3: Adopted Jakub Sitnicki's suggestions - atomic retrieval of
sk_family is required
v1 -> v2: Had initial discussion with Matthieu on sockmap and MPTCP
technical details
v4: https://lore.kernel.org/bpf/20251105113625.148900-1-jiayuan.chen@linux.dev/
v3: https://lore.kernel.org/bpf/20251023125450.105859-1-jiayuan.chen@linux.dev/
v2: https://lore.kernel.org/bpf/20251020060503.325369-1-jiayuan.chen@linux.dev/…
v1: https://lore.kernel.org/mptcp/a0a2b87119a06c5ffaa51427a0964a05534fe6f1@linu…
Jiayuan Chen (3):
mptcp: disallow MPTCP subflows from sockmap
net,mptcp: fix proto fallback detection with BPF
selftests/bpf: Add mptcp test with sockmap
net/mptcp/protocol.c | 6 +-
net/mptcp/subflow.c | 8 +
.../testing/selftests/bpf/prog_tests/mptcp.c | 141 ++++++++++++++++++
.../selftests/bpf/progs/mptcp_sockmap.c | 43 ++++++
4 files changed, 196 insertions(+), 2 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/mptcp_sockmap.c
base-commit: 8c0726e861f3920bac958d76cf134b5a3aa14ce4
--
2.43.0
On Fri, Nov 14, 2025 at 4:59 AM Guopeng Zhang <zhangguopeng(a)kylinos.cn> wrote:
>
> Hi Michal,
>
> Thanks for reviewing and pointing out [1].
>
> > Could you please explain more why is the TAP layout beneficial?
> > (I understand selftest are for oneself, i.e. human readable only by default.)
>
> Actually, selftests are no longer just something for developers to view locally; they are now extensively
> run in CI and stable branch regression testing. Using a standardized layout means that general test runners
> and CI systems can parse the cgroup test results without any special handling.
I second that.
In fact, we do run some of those tests in the CI; i.e.
https://openqa.opensuse.org/tests/5453031#external
We added this: https://github.com/os-autoinst/openQA/blob/master/lib/OpenQA/Parser/Format/…
to our CI
but frankly the use of KTAP across the selftests is very
inconsistent, so we need to post-process some of the output files
quite a lot.
Therefore the more standardized the output, the better for any CI.
Small ask: should we amend the commit message to say KTAP?
That being said - the cgroups tests produce nice output which is easy
to parse and gives us no issues in our CI apart
from the shell tests, specifically test_cpuset_prs.sh.
We currently run the cgroup tests only internally because some of them
tend to fail when crossing resource-usage boundaries and don't report
by how much the boundary was exceeded.
That ties into my earlier effort Michal linked here:
https://lore.kernel.org/all/rua6ubri67gh3b7atarbm5mggqgjyh6646mzkry2n2547jn…
I’ll try to add the cgroup tests to the public openSUSE CI and will
test your patches.
>
> TAP provides a structured format that is both human-readable and machine-readable. The plan/result lines are parsed by tools,
> while the diagnostic lines can still contain human-readable debug information. Over time, other selftest suites (such as mm, KVM, mptcp, etc.)
> have also been converted to TAP-style output, so this change just brings the cgroup tests in line with that broader direction.
>
> > Or is this part of some tree-wide effort?
>
> This patch is not part of a formal, tree-wide conversion series I am running; it is an incremental step to align the
> cgroup C tests with the existing TAP usage. I started here because these tests already use ksft_test_result_*() and
> only require minor changes to generate proper TAP output.
>
> > I'm asking to better asses whether also the scripts listed in
> > Makefile:TEST_PROGS should be converted too.
>
> I agree that having them produce TAP output would benefit tooling and CI. I did not want to mix
> that into this change, but if you and other maintainers think this direction is reasonable,
> I would be happy to follow up and convert the cgroup shell tests to TAP as well.
>
> Thanks again for your review.
>
> Best regards,
> Guopeng
>
>