From: Willem de Bruijn <willemb@google.com>
Lay the groundwork to import into kselftests the over 150 packetdrill TCP/IP conformance tests on github.com/google/packetdrill.
1/2: add kselftest infra for TEST_PROGS that need an interpreter
2/2: add the specific packetdrill tests
Both can go through net-next, I imagine. But let me know if the core infra should go through linux-kselftest.
Willem de Bruijn (2):
  selftests: support interpreted scripts with ksft_runner.sh
  selftests/net: integrate packetdrill with ksft
 tools/testing/selftests/Makefile              |  5 +-
 tools/testing/selftests/kselftest/runner.sh   |  7 ++-
 .../selftests/net/packetdrill/Makefile        |  9 +++
 .../testing/selftests/net/packetdrill/config  |  1 +
 .../selftests/net/packetdrill/defaults.sh     | 63 +++++++++++++++++++
 .../selftests/net/packetdrill/ksft_runner.sh  | 40 ++++++++++++
 .../net/packetdrill/tcp_inq_client.pkt        | 51 +++++++++++++++
 .../net/packetdrill/tcp_inq_server.pkt        | 51 +++++++++++++++
 .../tcp_md5_md5-only-on-client-ack.pkt        | 28 +++++++++
 9 files changed, 251 insertions(+), 4 deletions(-)
 create mode 100644 tools/testing/selftests/net/packetdrill/Makefile
 create mode 100644 tools/testing/selftests/net/packetdrill/config
 create mode 100755 tools/testing/selftests/net/packetdrill/defaults.sh
 create mode 100755 tools/testing/selftests/net/packetdrill/ksft_runner.sh
 create mode 100644 tools/testing/selftests/net/packetdrill/tcp_inq_client.pkt
 create mode 100644 tools/testing/selftests/net/packetdrill/tcp_inq_server.pkt
 create mode 100644 tools/testing/selftests/net/packetdrill/tcp_md5_md5-only-on-client-ack.pkt
From: Willem de Bruijn <willemb@google.com>
Support testcases that are themselves not executable, but need an interpreter to run them.
If a test file is not executable, but an executable file ksft_runner.sh exists in the TARGET dir, kselftest will run
./ksft_runner.sh ./$BASENAME_TEST
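For illustration (the target and script names below are hypothetical, not part of this patch), the expected layout is an executable ksft_runner.sh next to non-executable scripts listed in TEST_PROGS, and run_one() then invokes the wrapper from the target dir:

    # hypothetical target with interpreted tests
    tools/testing/selftests/net/example/ksft_runner.sh    (executable wrapper)
    tools/testing/selftests/net/example/case_a.pkt        (not executable)
    tools/testing/selftests/net/example/case_b.pkt        (not executable)

    # what kselftest ends up running for each script, from the target dir
    ./ksft_runner.sh ./case_a.pkt
    ./ksft_runner.sh ./case_b.pkt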
Packetdrill may add hundreds of test scripts. These scripts are not standalone executables; they must be passed to the packetdrill process.
Have kselftest run each test directly, as it already solves common runner requirements like parallel execution and isolation (netns). A previous RFC added a wrapper in between, which would have to reimplement such functionality.
Link: https://lore.kernel.org/netdev/66d4d97a4cac_3df182941a@willemb.c.googlers.co...
Signed-off-by: Willem de Bruijn <willemb@google.com>
---
 tools/testing/selftests/kselftest/runner.sh | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kselftest/runner.sh b/tools/testing/selftests/kselftest/runner.sh
index 74954f6a8f94b..2c3c58e65a419 100644
--- a/tools/testing/selftests/kselftest/runner.sh
+++ b/tools/testing/selftests/kselftest/runner.sh
@@ -111,8 +111,11 @@ run_one()
         stdbuf="/usr/bin/stdbuf --output=L "
     fi
     eval kselftest_cmd_args="$${kselftest_cmd_args_ref:-}"
-    cmd="$stdbuf ./$BASENAME_TEST $kselftest_cmd_args"
-    if [ ! -x "$TEST" ]; then
+    if [ -x "$TEST" ]; then
+        cmd="$stdbuf ./$BASENAME_TEST $kselftest_cmd_args"
+    elif [ -x "./ksft_runner.sh" ]; then
+        cmd="$stdbuf ./ksft_runner.sh ./$BASENAME_TEST"
+    else
         echo "# Warning: file $TEST is not executable"
         if [ $(head -n 1 "$TEST" | cut -c -2) = "#!" ]
From: Willem de Bruijn <willemb@google.com>
Lay the groundwork to import into kselftests the over 150 packetdrill TCP/IP conformance tests on github.com/google/packetdrill.
Florian recently added support for packetdrill tests in nf_conntrack, in commit a8a388c2aae49 ("selftests: netfilter: add packetdrill based conntrack tests").
This patch takes a slightly different approach. It relies on ksft_runner.sh to run every *.pkt file in the directory.
Any future imports of packetdrill tests should require no additional coding. Just add the *.pkt files.
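As a sketch of such an import (the source path and script name here are illustrative; the flat name follows the $(subst /,_,...) mapping described below), adding a test is a copy plus the small tweaks listed in the implementation notes:

    # hypothetical: import one more script from the github project
    cp gtests/net/tcp/ts_recent/some_test.pkt \
       tools/testing/selftests/net/packetdrill/tcp_ts_recent_some_test.pkt
    # adjust the ./defaults.sh include path and add an SPDX header;
    # TEST_PROGS := $(wildcard *.pkt) picks the file up with no Makefile change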
Initially import only two features/directories from github: one with a single script, and one with two. That was the only reason to pick tcp/inq and tcp/md5.
The path replaces the directory hierarchy in github with a flat space of files: $(subst /,_,$(wildcard tcp/**/*.pkt)). This is the most straightforward option for integrating with kselftests. The linked thread reviewed two ways to maintain the hierarchy: TEST_PROGS_RECURSE and PRESERVE_TEST_DIRS. But both introduce significant changes to kselftest infra and, with that, risk to existing tests.
Implementation notes:
- restore alphabetical order when adding the new directory to
  tools/testing/selftests/Makefile
- imported *.pkt files and support verbatim from the github project,
  except for:
  - update `source ./defaults.sh` path (to adjust for flat dir)
  - add SPDX headers
  - remove one author statement
  - Acknowledgment: drop an e (checkpatch)
Tested:

  make -C tools/testing/selftests \
      TARGETS=net/packetdrill \
      run_tests

  make -C tools/testing/selftests \
      TARGETS=net/packetdrill \
      install INSTALL_PATH=$KSFT_INSTALL_PATH

  # in virtme-ng
  export KSELFTEST_PKT_INTERP=./packetdrill_ksft.sh
  ./run_kselftest.sh -c net/packetdrill
  ./run_kselftest.sh -t net/packetdrill:tcp_inq_client.pkt
Link: https://lore.kernel.org/netdev/20240827193417.2792223-1-willemdebruijn.kerne...
Signed-off-by: Willem de Bruijn <willemb@google.com>
---
Changes:
- RFC -> v1
  - replace custom runner with ksft_runner.sh (previous patch in series)
    and ktap_helpers.sh
  - flatten the github tcp/**/*.pkt directory structure
  - add config for MD5 dependency
  - drop unused set_sysctls.py
---
 tools/testing/selftests/Makefile              |  5 +-
 .../selftests/net/packetdrill/Makefile        |  9 +++
 .../testing/selftests/net/packetdrill/config  |  1 +
 .../selftests/net/packetdrill/defaults.sh     | 63 +++++++++++++++++++
 .../selftests/net/packetdrill/ksft_runner.sh  | 41 ++++++++++++
 .../net/packetdrill/tcp_inq_client.pkt        | 51 +++++++++++++++
 .../net/packetdrill/tcp_inq_server.pkt        | 51 +++++++++++++++
 .../tcp_md5_md5-only-on-client-ack.pkt        | 28 +++++++++
 8 files changed, 247 insertions(+), 2 deletions(-)
 create mode 100644 tools/testing/selftests/net/packetdrill/Makefile
 create mode 100644 tools/testing/selftests/net/packetdrill/config
 create mode 100755 tools/testing/selftests/net/packetdrill/defaults.sh
 create mode 100755 tools/testing/selftests/net/packetdrill/ksft_runner.sh
 create mode 100644 tools/testing/selftests/net/packetdrill/tcp_inq_client.pkt
 create mode 100644 tools/testing/selftests/net/packetdrill/tcp_inq_server.pkt
 create mode 100644 tools/testing/selftests/net/packetdrill/tcp_md5_md5-only-on-client-ack.pkt
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index a5f1c0c27dff9..3b7df54773170 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -65,10 +65,11 @@ TARGETS += net/af_unix
 TARGETS += net/forwarding
 TARGETS += net/hsr
 TARGETS += net/mptcp
-TARGETS += net/openvswitch
-TARGETS += net/tcp_ao
 TARGETS += net/netfilter
+TARGETS += net/openvswitch
+TARGETS += net/packetdrill
 TARGETS += net/rds
+TARGETS += net/tcp_ao
 TARGETS += nsfs
 TARGETS += perf_events
 TARGETS += pidfd
diff --git a/tools/testing/selftests/net/packetdrill/Makefile b/tools/testing/selftests/net/packetdrill/Makefile
new file mode 100644
index 0000000000000..870f7258dc8d7
--- /dev/null
+++ b/tools/testing/selftests/net/packetdrill/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+
+TEST_INCLUDES := ksft_runner.sh \
+                 defaults.sh \
+                 ../../kselftest/ktap_helpers.sh
+
+TEST_PROGS := $(wildcard *.pkt)
+
+include ../../lib.mk
diff --git a/tools/testing/selftests/net/packetdrill/config b/tools/testing/selftests/net/packetdrill/config
new file mode 100644
index 0000000000000..b92ad2fb56b63
--- /dev/null
+++ b/tools/testing/selftests/net/packetdrill/config
@@ -0,0 +1 @@
+CONFIG_TCP_MD5SIG=y
diff --git a/tools/testing/selftests/net/packetdrill/defaults.sh b/tools/testing/selftests/net/packetdrill/defaults.sh
new file mode 100755
index 0000000000000..1095a7b22f44d
--- /dev/null
+++ b/tools/testing/selftests/net/packetdrill/defaults.sh
@@ -0,0 +1,63 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Set standard production config values that relate to TCP behavior.
+
+# Flush old cached data (fastopen cookies).
+ip tcp_metrics flush all > /dev/null 2>&1
+
+# TCP min, default, and max receive and send buffer sizes.
+sysctl -q net.ipv4.tcp_rmem="4096 540000 $((15*1024*1024))"
+sysctl -q net.ipv4.tcp_wmem="4096 $((256*1024)) 4194304"
+
+# TCP timestamps.
+sysctl -q net.ipv4.tcp_timestamps=1
+
+# TCP SYN(ACK) retry thresholds
+sysctl -q net.ipv4.tcp_syn_retries=5
+sysctl -q net.ipv4.tcp_synack_retries=5
+
+# TCP Forward RTO-Recovery, RFC 5682.
+sysctl -q net.ipv4.tcp_frto=2
+
+# TCP Selective Acknowledgements (SACK)
+sysctl -q net.ipv4.tcp_sack=1
+
+# TCP Duplicate Selective Acknowledgements (DSACK)
+sysctl -q net.ipv4.tcp_dsack=1
+
+# TCP FACK (Forward Acknowldgement)
+sysctl -q net.ipv4.tcp_fack=0
+
+# TCP reordering degree ("dupthresh" threshold for entering Fast Recovery).
+sysctl -q net.ipv4.tcp_reordering=3
+
+# TCP congestion control.
+sysctl -q net.ipv4.tcp_congestion_control=cubic
+
+# TCP slow start after idle.
+sysctl -q net.ipv4.tcp_slow_start_after_idle=0
+
+# TCP RACK and TLP.
+sysctl -q net.ipv4.tcp_early_retrans=4 net.ipv4.tcp_recovery=1
+
+# TCP method for deciding when to defer sending to accumulate big TSO packets.
+sysctl -q net.ipv4.tcp_tso_win_divisor=3
+
+# TCP Explicit Congestion Notification (ECN)
+sysctl -q net.ipv4.tcp_ecn=0
+
+sysctl -q net.ipv4.tcp_pacing_ss_ratio=200
+sysctl -q net.ipv4.tcp_pacing_ca_ratio=120
+sysctl -q net.ipv4.tcp_notsent_lowat=4294967295 > /dev/null 2>&1
+
+sysctl -q net.ipv4.tcp_fastopen=0x70403
+sysctl -q net.ipv4.tcp_fastopen_key=a1a1a1a1-b2b2b2b2-c3c3c3c3-d4d4d4d4
+
+sysctl -q net.ipv4.tcp_syncookies=1
+
+# Override the default qdisc on the tun device.
+# Many tests fail with timing errors if the default
+# is FQ and that paces their flows.
+tc qdisc add dev tun0 root pfifo
+
diff --git a/tools/testing/selftests/net/packetdrill/ksft_runner.sh b/tools/testing/selftests/net/packetdrill/ksft_runner.sh
new file mode 100755
index 0000000000000..2f62caccbbbc5
--- /dev/null
+++ b/tools/testing/selftests/net/packetdrill/ksft_runner.sh
@@ -0,0 +1,41 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+source "$(dirname $(realpath $0))/../../kselftest/ktap_helpers.sh"
+
+readonly ipv4_args=('--ip_version=ipv4 '
+                    '--local_ip=192.168.0.1 '
+                    '--gateway_ip=192.168.0.1 '
+                    '--netmask_ip=255.255.0.0 '
+                    '--remote_ip=192.0.2.1 '
+                    '-D CMSG_LEVEL_IP=SOL_IP '
+                    '-D CMSG_TYPE_RECVERR=IP_RECVERR ')
+
+readonly ipv6_args=('--ip_version=ipv6 '
+                    '--mtu=1520 '
+                    '--local_ip=fd3d:0a0b:17d6::1 '
+                    '--gateway_ip=fd3d:0a0b:17d6:8888::1 '
+                    '--remote_ip=fd3d:fa7b:d17d::1 '
+                    '-D CMSG_LEVEL_IP=SOL_IPV6 '
+                    '-D CMSG_TYPE_RECVERR=IPV6_RECVERR ')
+
+if [ $# -ne 1 ]; then
+        ktap_exit_fail_msg "usage: $0 <script>"
+        exit "$KSFT_FAIL"
+fi
+script="$1"
+
+if [ -z "$(which packetdrill)" ]; then
+        ktap_skip_all "packetdrill not found in PATH"
+        exit "$KSFT_SKIP"
+fi
+
+ktap_print_header
+ktap_set_plan 2
+
+packetdrill ${ipv4_args[@]} $(basename $script) > /dev/null \
+        && ktap_test_pass "ipv4" || ktap_test_fail "ipv4"
+packetdrill ${ipv6_args[@]} $(basename $script) > /dev/null \
+        && ktap_test_pass "ipv6" || ktap_test_fail "ipv6"
+
+ktap_finished
diff --git a/tools/testing/selftests/net/packetdrill/tcp_inq_client.pkt b/tools/testing/selftests/net/packetdrill/tcp_inq_client.pkt
new file mode 100644
index 0000000000000..df49c67645ac8
--- /dev/null
+++ b/tools/testing/selftests/net/packetdrill/tcp_inq_client.pkt
@@ -0,0 +1,51 @@
+// SPDX-License-Identifier: GPL-2.0
+// Test TCP_INQ and TCP_CM_INQ on the client side.
+`./defaults.sh
+`
+
+// Create a socket and set it to non-blocking.
+    0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+   +0 fcntl(3, F_GETFL) = 0x2 (flags O_RDWR)
+   +0 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
+
+// Connect to the server and enable TCP_INQ.
+   +0 connect(3, ..., ...) = -1 EINPROGRESS (Operation now in progress)
+   +0 setsockopt(3, SOL_TCP, TCP_INQ, [1], 4) = 0
+
+   +0 > S 0:0(0) <mss 1460,sackOK,TS val 100 ecr 0,nop,wscale 8>
+ +.01 < S. 0:0(0) ack 1 win 5792 <mss 1460,sackOK,TS val 700 ecr 100,nop,wscale 7>
+   +0 > . 1:1(0) ack 1 <nop,nop,TS val 200 ecr 700>
+
+// Now we have 10K of data ready on the socket.
+   +0 < . 1:10001(10000) ack 1 win 514
+   +0 > . 1:1(0) ack 10001 <nop,nop,TS val 200 ecr 700>
+
+// We read 1K and we should have 9K ready to read.
+   +0 recvmsg(3, {msg_name(...)=...,
+                  msg_iov(1)=[{..., 1000}],
+                  msg_flags=0,
+                  msg_control=[{cmsg_level=SOL_TCP,
+                                cmsg_type=TCP_CM_INQ,
+                                cmsg_data=9000}]}, 0) = 1000
+// We read 9K and we should have no further data ready to read.
+   +0 recvmsg(3, {msg_name(...)=...,
+                  msg_iov(1)=[{..., 9000}],
+                  msg_flags=0,
+                  msg_control=[{cmsg_level=SOL_TCP,
+                                cmsg_type=TCP_CM_INQ,
+                                cmsg_data=0}]}, 0) = 9000
+
+// Server sends more data and closes the connections.
+   +0 < F. 10001:20001(10000) ack 1 win 514
+   +0 > . 1:1(0) ack 20002 <nop,nop,TS val 200 ecr 700>
+
+// We read 10K and we should have one "fake" byte because the connection is
+// closed.
+   +0 recvmsg(3, {msg_name(...)=...,
+                  msg_iov(1)=[{..., 10000}],
+                  msg_flags=0,
+                  msg_control=[{cmsg_level=SOL_TCP,
+                                cmsg_type=TCP_CM_INQ,
+                                cmsg_data=1}]}, 0) = 10000
+// Now, receive EOF.
+   +0 read(3, ..., 2000) = 0
diff --git a/tools/testing/selftests/net/packetdrill/tcp_inq_server.pkt b/tools/testing/selftests/net/packetdrill/tcp_inq_server.pkt
new file mode 100644
index 0000000000000..04a5e2590c62c
--- /dev/null
+++ b/tools/testing/selftests/net/packetdrill/tcp_inq_server.pkt
@@ -0,0 +1,51 @@
+// SPDX-License-Identifier: GPL-2.0
+// Test TCP_INQ and TCP_CM_INQ on the server side.
+`./defaults.sh
+`
+
+// Initialize connection
+    0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+   +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+   +0 bind(3, ..., ...) = 0
+   +0 listen(3, 1) = 0
+
+   +0 < S 0:0(0) win 32792 <mss 1000,sackOK,nop,nop,nop,wscale 10>
+   +0 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 8>
+ +.01 < . 1:1(0) ack 1 win 514
+
+// Accept the connection and enable TCP_INQ.
+   +0 accept(3, ..., ...) = 4
+   +0 setsockopt(4, SOL_TCP, TCP_INQ, [1], 4) = 0
+
+// Now we have 10K of data ready on the socket.
+   +0 < . 1:10001(10000) ack 1 win 514
+   +0 > . 1:1(0) ack 10001
+
+// We read 2K and we should have 8K ready to read.
+   +0 recvmsg(4, {msg_name(...)=...,
+                  msg_iov(1)=[{..., 2000}],
+                  msg_flags=0,
+                  msg_control=[{cmsg_level=SOL_TCP,
+                                cmsg_type=TCP_CM_INQ,
+                                cmsg_data=8000}]}, 0) = 2000
+// We read 8K and we should have no further data ready to read.
+   +0 recvmsg(4, {msg_name(...)=...,
+                  msg_iov(1)=[{..., 8000}],
+                  msg_flags=0,
+                  msg_control=[{cmsg_level=SOL_TCP,
+                                cmsg_type=TCP_CM_INQ,
+                                cmsg_data=0}]}, 0) = 8000
+// Client sends more data and closes the connections.
+   +0 < F. 10001:20001(10000) ack 1 win 514
+   +0 > . 1:1(0) ack 20002
+
+// We read 10K and we should have one "fake" byte because the connection is
+// closed.
+   +0 recvmsg(4, {msg_name(...)=...,
+                  msg_iov(1)=[{..., 10000}],
+                  msg_flags=0,
+                  msg_control=[{cmsg_level=SOL_TCP,
+                                cmsg_type=TCP_CM_INQ,
+                                cmsg_data=1}]}, 0) = 10000
+// Now, receive error.
+   +0 read(3, ..., 2000) = -1 ENOTCONN (Transport endpoint is not connected)
diff --git a/tools/testing/selftests/net/packetdrill/tcp_md5_md5-only-on-client-ack.pkt b/tools/testing/selftests/net/packetdrill/tcp_md5_md5-only-on-client-ack.pkt
new file mode 100644
index 0000000000000..25dfef95d3f84
--- /dev/null
+++ b/tools/testing/selftests/net/packetdrill/tcp_md5_md5-only-on-client-ack.pkt
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0
+// Test what happens when client does not provide MD5 on SYN,
+// but then does on the ACK that completes the three-way handshake.
+
+`./defaults.sh`
+
+// Establish a connection.
+    0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+   +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+   +0 bind(3, ..., ...) = 0
+   +0 listen(3, 1) = 0
+
+   +0 < S 0:0(0) win 32792 <mss 1000,sackOK,nop,nop,nop,wscale 10>
+   +0 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 8>
+// Ooh, weird: client provides MD5 option on the ACK:
+ +.01 < . 1:1(0) ack 1 win 514 <md5 000102030405060708090a0b0c0d0e0f,nop,nop>
+ +.01 < . 1:1(0) ack 1 win 514 <md5 000102030405060708090a0b0c0d0e0f,nop,nop>
+
+// The TCP listener refcount should be 2, but on buggy kernels it can be 0:
+   +0 `grep " 0A " /proc/net/tcp /proc/net/tcp6 | grep ":1F90"`
+
+// Now here comes the legit ACK:
+ +.01 < . 1:1(0) ack 1 win 514
+
+// Make sure the connection is OK:
+   +0 accept(3, ..., ...) = 4
+
+ +.01 write(4, ..., 1000) = 1000
On Wed, 4 Sep 2024 23:07:03 -0400 Willem de Bruijn wrote:
+++ b/tools/testing/selftests/net/packetdrill/config
@@ -0,0 +1 @@
+CONFIG_TCP_MD5SIG=y
Looks like this is not enough:
  # 1..2
  # open tun device: No such file or directory
  # not ok 1 ipv4
  # open tun device: No such file or directory
https://netdev-3.bots.linux.dev/vmksft-packetdrill/results/759141/1-tcp-inq-...
Resulting config in the build:
# CONFIG_TUN is not set
https://netdev-3.bots.linux.dev/vmksft-packetdrill/results/759141/config
Keep in mind the "Important" note here:
https://github.com/linux-netdev/nipa/wiki/How-to-run-netdev-selftests-CI-sty...
I recommend using a fresh tree or mrproper for testing vng configs.
Feel free to post v2 without the 24h wait, it's a bit tricky to handle new targets in CI; the sooner we merge this, the less manual work for me..
Jakub Kicinski wrote:
On Wed, 4 Sep 2024 23:07:03 -0400 Willem de Bruijn wrote:
+++ b/tools/testing/selftests/net/packetdrill/config
@@ -0,0 +1 @@
+CONFIG_TCP_MD5SIG=y
Looks like this is not enough:
  # 1..2
  # open tun device: No such file or directory
  # not ok 1 ipv4
  # open tun device: No such file or directory
https://netdev-3.bots.linux.dev/vmksft-packetdrill/results/759141/1-tcp-inq-...
Resulting config in the build:
# CONFIG_TUN is not set
https://netdev-3.bots.linux.dev/vmksft-packetdrill/results/759141/config
Keep in mind the "Important" note here:
https://github.com/linux-netdev/nipa/wiki/How-to-run-netdev-selftests-CI-sty...
I recommend using a fresh tree or mrproper for testing vng configs.
Feel free to post v2 without the 24h wait, it's a bit tricky to handle new targets in CI, sooner we merge this the less manual work for me..
Oops sorry. Thanks for the pointer.
Sent a v2 with CONFIG_TUN and a few other CONFIGS from reviewing the existing configs and defaults.sh. The above steps work for me now.
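A rough sketch of where the config is heading; only CONFIG_TCP_MD5SIG and CONFIG_TUN are confirmed in this thread, any further entries would be whatever the review of defaults.sh and the existing net configs turned up:

    # tools/testing/selftests/net/packetdrill/config (v2 direction, partly assumed)
    CONFIG_TUN=y
    CONFIG_TCP_MD5SIG=y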
Willem de Bruijn wrote:
Jakub Kicinski wrote:
On Wed, 4 Sep 2024 23:07:03 -0400 Willem de Bruijn wrote:
+++ b/tools/testing/selftests/net/packetdrill/config
@@ -0,0 +1 @@
+CONFIG_TCP_MD5SIG=y
Looks like this is not enough:
  # 1..2
  # open tun device: No such file or directory
  # not ok 1 ipv4
  # open tun device: No such file or directory
https://netdev-3.bots.linux.dev/vmksft-packetdrill/results/759141/1-tcp-inq-...
Resulting config in the build:
# CONFIG_TUN is not set
https://netdev-3.bots.linux.dev/vmksft-packetdrill/results/759141/config
Keep in mind the "Important" note here:
https://github.com/linux-netdev/nipa/wiki/How-to-run-netdev-selftests-CI-sty...
I recommend using a fresh tree or mrproper for testing vng configs.
Feel free to post v2 without the 24h wait, it's a bit tricky to handle new targets in CI, sooner we merge this the less manual work for me..
Oops sorry. Thanks for the pointer.
Sent a v2 with CONFIG_TUN and a few other CONFIGS from reviewing the existing configs and defaults.sh. The above steps work for me now.
Packetdrill scripts are sensitive to timing. On the dbg build, I just observe a flaky test.
The tool takes --tolerance_usecs and --tolerance_percent arguments. I may have to update ksft_runner.sh to increase one if a dbg build is detected.
Let me know if I should respin now. Else I can also follow-up.
Need to figure out how best to detect debug builds. It is not in uname, and there is no /proc/config.gz. The existence of /sys/kernel/debug/kmemleak is a proxy for the current kernel/configs/debug.config, if a bit crude.
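A minimal sketch of that crude proxy check, assuming bash and borrowing the tolerance value used further down in this thread:

    # kmemleak is enabled by kernel/configs/debug.config, so its debugfs file
    # is a (crude) hint that this is a debug build
    declare -a optargs
    if [ -e /sys/kernel/debug/kmemleak ]; then
            optargs+=('--tolerance_usecs=10000')
    fi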
Another config affecting timing may be CONFIG_HZ. I did not observe issues with these specific scripts with CONFIG_HZ=250. It may have to be tackled eventually. Or CONFIG_HZ=1000 hardcoded in config.
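If that ever becomes necessary, pinning the tick rate would presumably amount to something like this (assumed, not part of any posted patch):

    # candidate addition to tools/testing/selftests/net/packetdrill/config
    CONFIG_HZ_1000=y
    CONFIG_HZ=1000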
Willem de Bruijn wrote:
Willem de Bruijn wrote:
Jakub Kicinski wrote:
On Wed, 4 Sep 2024 23:07:03 -0400 Willem de Bruijn wrote:
+++ b/tools/testing/selftests/net/packetdrill/config
@@ -0,0 +1 @@
+CONFIG_TCP_MD5SIG=y
Looks like this is not enough:
  # 1..2
  # open tun device: No such file or directory
  # not ok 1 ipv4
  # open tun device: No such file or directory
https://netdev-3.bots.linux.dev/vmksft-packetdrill/results/759141/1-tcp-inq-...
Resulting config in the build:
# CONFIG_TUN is not set
https://netdev-3.bots.linux.dev/vmksft-packetdrill/results/759141/config
Keep in mind the "Important" note here:
https://github.com/linux-netdev/nipa/wiki/How-to-run-netdev-selftests-CI-sty...
I recommend using a fresh tree or mrproper for testing vng configs.
Feel free to post v2 without the 24h wait, it's a bit tricky to handle new targets in CI, sooner we merge this the less manual work for me..
Oops sorry. Thanks for the pointer.
Sent a v2 with CONFIG_TUN and a few other CONFIGS from reviewing the existing configs and defaults.sh. The above steps work for me now.
Packetdrill scripts are sensitive to timing. On the dbg build, I just observe a flaky test.
The tool takes --tolerance_usecs and --tolerance_percent arguments. I may have to update ksft_runner.sh to increase one if a dbg build is detected.
Let me know if I should respin now. Else I can also follow-up.
Need to figure out how best to detect debug builds. It is not in uname, and no proc/config.gz. Existence of /sys/kernel/debug/kmemleak is a proxy for current kernel/configs/debug.config, if a bit crude.
Should have kept on reading. Will use KSFT_MACHINE_SLOW:
+declare -a optargs
+if [[ "${KSFT_MACHINE_SLOW}" == "yes" ]]; then
+        optargs+=('--tolerance_usecs=10000')
+fi
+
 ktap_print_header
 ktap_set_plan 2

-packetdrill ${ipv4_args[@]} $(basename $script) > /dev/null \
+packetdrill ${ipv4_args[@]} ${optargs[@]} $(basename $script) > /dev/null \
         && ktap_test_pass "ipv4" || ktap_test_fail "ipv4"
-packetdrill ${ipv6_args[@]} $(basename $script) > /dev/null \
+packetdrill ${ipv6_args[@]} ${optargs[@]} $(basename $script) > /dev/null \
         && ktap_test_pass "ipv6" || ktap_test_fail "ipv6"
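For manual runs, the same slow-machine path should be reachable by setting the variable by hand, e.g.:

    KSFT_MACHINE_SLOW=yes ./ksft_runner.sh tcp_inq_client.pkt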
Another config affecting timing may be CONFIG_HZ. I did not observe issues with these specific scripts with CONFIG_HZ=250. It may have to be tackled eventually. Or CONFIG_HZ=1000 hardcoded in config.
I will just add the CONFIG for now.
On Thu, 05 Sep 2024 21:31:55 -0400 Willem de Bruijn wrote:
Packetdrill scripts are sensitive to timing. On the dbg build, I just observe a flaky test.
The tool takes --tolerance_usecs and --tolerance_percent arguments. I may have to update ksft_runner.sh to increase one if a dbg build is detected.
Let me know if I should respin now. Else I can also follow-up.
Need to figure out how best to detect debug builds. It is not in uname, and no proc/config.gz. Existence of /sys/kernel/debug/kmemleak is a proxy for current kernel/configs/debug.config, if a bit crude.
Should have kept on reading. Will use KSFT_MACHINE_SLOW:
+declare -a optargs
+if [[ "${KSFT_MACHINE_SLOW}" == "yes" ]]; then
+        optargs+=('--tolerance_usecs=10000')
+fi
+
 ktap_print_header
 ktap_set_plan 2

-packetdrill ${ipv4_args[@]} $(basename $script) > /dev/null \
+packetdrill ${ipv4_args[@]} ${optargs[@]} $(basename $script) > /dev/null \
         && ktap_test_pass "ipv4" || ktap_test_fail "ipv4"
-packetdrill ${ipv6_args[@]} $(basename $script) > /dev/null \
+packetdrill ${ipv6_args[@]} ${optargs[@]} $(basename $script) > /dev/null \
         && ktap_test_pass "ipv6" || ktap_test_fail "ipv6"
Another config affecting timing may be CONFIG_HZ. I did not observe issues with these specific scripts with CONFIG_HZ=250. It may have to be tackled eventually. Or CONFIG_HZ=1000 hardcoded in config.
I will just add the CONFIG for now.
Not sure I follow the HZ idea, lowering the frequency helps stability?
We can see how well v2 does overnight, so far it's green: https://netdev.bots.linux.dev/contest.html?executor=vmksft-packetdrill-dbg (the net-next-2024-09-05--* branches had v1).
FWIW the status page lists two sets of packetdrill runners, probably because I 'reused' an old team-driver runner instead of creating a new one. It should straighten itself out by tomorrow.
Jakub Kicinski wrote:
On Thu, 05 Sep 2024 21:31:55 -0400 Willem de Bruijn wrote:
Packetdrill scripts are sensitive to timing. On the dbg build, I just observe a flaky test.
The tool takes --tolerance_usecs and --tolerance_percent arguments. I may have to update ksft_runner.sh to increase one if a dbg build is detected.
Let me know if I should respin now. Else I can also follow-up.
Need to figure out how best to detect debug builds. It is not in uname, and no proc/config.gz. Existence of /sys/kernel/debug/kmemleak is a proxy for current kernel/configs/debug.config, if a bit crude.
Should have kept on reading. Will use KSFT_MACHINE_SLOW:
+declare -a optargs
+if [[ "${KSFT_MACHINE_SLOW}" == "yes" ]]; then
+        optargs+=('--tolerance_usecs=10000')
+fi
+
 ktap_print_header
 ktap_set_plan 2

-packetdrill ${ipv4_args[@]} $(basename $script) > /dev/null \
+packetdrill ${ipv4_args[@]} ${optargs[@]} $(basename $script) > /dev/null \
         && ktap_test_pass "ipv4" || ktap_test_fail "ipv4"
-packetdrill ${ipv6_args[@]} $(basename $script) > /dev/null \
+packetdrill ${ipv6_args[@]} ${optargs[@]} $(basename $script) > /dev/null \
         && ktap_test_pass "ipv6" || ktap_test_fail "ipv6"
Another config affecting timing may be CONFIG_HZ. I did not observe issues with these specific scripts with CONFIG_HZ=250. It may have to be tackled eventually. Or CONFIG_HZ=1000 hardcoded in config.
I will just add the CONFIG for now.
Not sure I follow the HZ idea, lowering the frequency helps stability?
We can see how well v2 does overnight, so far it's green: https://netdev.bots.linux.dev/contest.html?executor=vmksft-packetdrill-dbg (the net-next-2024-09-05--* branches had v1).
Great!
I saw one failure in manual runs and was unable to reproduce it with a few more iterations. Let's see how it goes.
We do adjust these internally, to the same value for KASAN.
FWIW status page lists two sets of packetdrill runners, probably because I 'reused' an old team-driver runner instead of creating a new one. It should straighten itself out by tomorrow.
On 9/6/24 03:31, Willem de Bruijn wrote:
Willem de Bruijn wrote:
Willem de Bruijn wrote:
Jakub Kicinski wrote:
On Wed, 4 Sep 2024 23:07:03 -0400 Willem de Bruijn wrote:
+++ b/tools/testing/selftests/net/packetdrill/config
@@ -0,0 +1 @@
+CONFIG_TCP_MD5SIG=y
Looks like this is not enough:
  # 1..2
  # open tun device: No such file or directory
  # not ok 1 ipv4
  # open tun device: No such file or directory
https://netdev-3.bots.linux.dev/vmksft-packetdrill/results/759141/1-tcp-inq-...
Resulting config in the build:
# CONFIG_TUN is not set
https://netdev-3.bots.linux.dev/vmksft-packetdrill/results/759141/config
Keep in mind the "Important" note here:
https://github.com/linux-netdev/nipa/wiki/How-to-run-netdev-selftests-CI-sty...
I recommend using a fresh tree or mrproper for testing vng configs.
Feel free to post v2 without the 24h wait, it's a bit tricky to handle new targets in CI, sooner we merge this the less manual work for me..
Oops sorry. Thanks for the pointer.
Sent a v2 with CONFIG_TUN and a few other CONFIGS from reviewing the existing configs and defaults.sh. The above steps work for me now.
Packetdrill scripts are sensitive to timing. On the dbg build, I just observe a flaky test.
The tool takes --tolerance_usecs and --tolerance_percent arguments. I may have to update ksft_runner.sh to increase one if a dbg build is detected.
Let me know if I should respin now. Else I can also follow-up.
Need to figure out how best to detect debug builds. It is not in uname, and no proc/config.gz. Existence of /sys/kernel/debug/kmemleak is a proxy for current kernel/configs/debug.config, if a bit crude.
Should have kept on reading. Will use KSFT_MACHINE_SLOW:
+declare -a optargs
+if [[ "${KSFT_MACHINE_SLOW}" == "yes" ]]; then
+        optargs+=('--tolerance_usecs=10000')
+fi
+
 ktap_print_header
 ktap_set_plan 2
I'm sorry for the late reply. FWIW, for mptcp pktdrill tests on slow machines (VMs over cloud) we use tolerance=100000 - but we have to be careful WRT fast retransmissions and timing in the scripts themselves.
Cheers,
Paolo