The test 'ethtool-features.sh' failed with the below output:
TAP version 13
1..1
# timeout set to 600
# selftests: drivers/net/netdevsim: ethtool-features.sh
# Warning: file ethtool-features.sh is not executable
# ethtool: bad command line argument(s)
# For more information run ethtool -h
# ethtool: bad command line argument(s)
# For more information run ethtool -h
# ethtool: bad command line argument(s)
# For more information run ethtool -h
# ethtool: bad command line argument(s)
# For more information run ethtool -h
# ethtool: bad command line argument(s)
# For more information run ethtool -h
# ethtool: bad command line argument(s)
# For more information run ethtool -h
# ethtool: bad command line argument(s)
# For more information run ethtool -h
# ethtool: bad command line argument(s)
# For more information run ethtool -h
# ethtool: bad command line argument(s)
# For more information run ethtool -h
# ethtool: bad command line argument(s)
# For more information run ethtool -h
# FAILED 10/10 checks
not ok 1 selftests: drivers/net/netdevsim: ethtool-features.sh # exit=1
Similar to commit 18378b0e49d9 ("selftests/damon: Add executable permission to test scripts"), the script 'ethtool-features.sh' has no executable permission, which leads to the warning 'file ethtool-features.sh is not executable'.
Older versions of ethtool (mine is 5.16) do not support the command 'ethtool --json -k enp1s0', which leads to the output 'ethtool: bad command line argument(s)'.
This patch adds executable permission to the script 'ethtool-features.sh' and checks for 'ethtool --json -k' support. After this patch:
TAP version 13
1..1
# timeout set to 600
# selftests: drivers/net/netdevsim: ethtool-features.sh
# SKIP: No --json -k support in ethtool
ok 1 selftests: drivers/net/netdevsim: ethtool-features.sh
Fixes: 0189270117c3 ("selftests: netdevsim: add a test checking ethtool features")
Signed-off-by: Wang Liang <wangliang74@huawei.com>
---
 .../selftests/drivers/net/netdevsim/ethtool-features.sh | 5 +++++
 1 file changed, 5 insertions(+)
 mode change 100644 => 100755 tools/testing/selftests/drivers/net/netdevsim/ethtool-features.sh
diff --git a/tools/testing/selftests/drivers/net/netdevsim/ethtool-features.sh b/tools/testing/selftests/drivers/net/netdevsim/ethtool-features.sh
old mode 100644
new mode 100755
index bc210dc6ad2d..f771dc6839ea
--- a/tools/testing/selftests/drivers/net/netdevsim/ethtool-features.sh
+++ b/tools/testing/selftests/drivers/net/netdevsim/ethtool-features.sh
@@ -7,6 +7,11 @@ NSIM_NETDEV=$(make_netdev)
 
 set -o pipefail
 
+if ! ethtool --json -k $NSIM_NETDEV > /dev/null 2>&1; then
+	echo "SKIP: No --json -k support in ethtool"
+	exit $ksft_skip
+fi
+
 FEATS="
 	tx-checksum-ip-generic
 	tx-scatter-gather
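For context, the 'exit $ksft_skip' above relies on the kselftest convention that exit code 4 marks a skipped test; the selftest helpers typically define it along the lines of:

    # kselftest convention: exit code 4 reports a skipped test
    ksft_skip=4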
2025-10-30, 11:22:03 +0800, Wang Liang wrote:
This patch adds executable permission to the script 'ethtool-features.sh' and checks for 'ethtool --json -k' support.
Those are two separate things, probably should be two separate patches.
[...]
@@ -7,6 +7,11 @@ NSIM_NETDEV=$(make_netdev)
 
 set -o pipefail
 
+if ! ethtool --json -k $NSIM_NETDEV > /dev/null 2>&1; then
I guess it's improving the situation, but I've got a system with an ethtool that accepts the --json argument, but silently ignores it for -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json output), which will still cause the test to fail later.
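Something like the following would catch that case as well (just a sketch, assuming jq is available to validate that the output really is JSON):

    # Validate that the output parses as JSON, not just the exit status,
    # to catch ethtool builds that accept --json but ignore it for -k.
    if ! ethtool --json -k "$NSIM_NETDEV" 2>/dev/null | jq -e . > /dev/null 2>&1; then
        echo "SKIP: no working --json -k support in ethtool"
        exit $ksft_skip
    fi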
On Fri, 31 Oct 2025 00:13:59 +0100 Sabrina Dubroca wrote:
 set -o pipefail
 
+if ! ethtool --json -k $NSIM_NETDEV > /dev/null 2>&1; then
I guess it's improving the situation, but I've got a system with an ethtool that accepts the --json argument, but silently ignores it for -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json output), which will still cause the test to fail later.
And --json was added to -k in Jan 2022, that's pretty long ago. I'm not sure we need this aspect of the patch at all..
2025-10-30, 17:02:17 -0700, Jakub Kicinski wrote:
On Fri, 31 Oct 2025 00:13:59 +0100 Sabrina Dubroca wrote:
 set -o pipefail
 
+if ! ethtool --json -k $NSIM_NETDEV > /dev/null 2>&1; then
I guess it's improving the situation, but I've got a system with an ethtool that accepts the --json argument, but silently ignores it for -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json output), which will still cause the test to fail later.
And --json was added to -k in Jan 2022, that's pretty long ago. I'm not sure we need this aspect of the patch at all..
Ok. Then maybe a silly idea: for the tests that currently have some form of "$TOOL is too old" check, do we want to remove those after a while? If so, how long after the feature was introduced in $TOOL?
Or should we leave them, but not accept new checks to exclude really-old versions of tools? Do we need to document the cut-off ("we don't support tool versions older than 2 years for networking selftests" [or similar]) somewhere in Documentation/ ?
On Mon, Nov 03, 2025 at 11:13:08AM +0100, Sabrina Dubroca wrote:
2025-10-30, 17:02:17 -0700, Jakub Kicinski wrote:
On Fri, 31 Oct 2025 00:13:59 +0100 Sabrina Dubroca wrote:
 set -o pipefail
 
+if ! ethtool --json -k $NSIM_NETDEV > /dev/null 2>&1; then
I guess it's improving the situation, but I've got a system with an ethtool that accepts the --json argument, but silently ignores it for -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json output), which will still cause the test to fail later.
And --json was added to -k in Jan 2022, that's pretty long ago. I'm not sure we need this aspect of the patch at all..
Ok. Then maybe a silly idea: for the tests that currently have some form of "$TOOL is too old" check, do we want to remove those after a while? If so, how long after the feature was introduced in $TOOL?
Another option is to turn them into a hard fail, after X years. My guess is, tests which get skipped because the test tools are too old frequently get ignored. Tests which fail are more likely to be looked at, and the tools updated.
Another idea is to have a dedicated test which simply checks the versions of all the tools. It should only pass if the installed tools are sufficiently new that all tests can pass. If you have tools in the grey zone (old enough to cause skips, but not old enough to cause fails), you then just have one failing test you need to turn a blind eye to.
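A rough sketch of what such a dedicated test could look like (the tool list and minimum versions here are placeholders, not authoritative; GNU sort -V does the comparison):

    # tool-versions.sh (sketch): fail unless every required tool is new enough.
    check_min() {
        local tool=$1 min=$2 ver
        # Grab the first dotted version number the tool reports.
        ver=$("$tool" --version 2>/dev/null | grep -oE '[0-9]+(\.[0-9]+)+' | head -n1)
        if [ -z "$ver" ]; then
            echo "FAIL: $tool not found"
            return 1
        fi
        # sort -C -V succeeds iff $min <= $ver in version order.
        if ! printf '%s\n%s\n' "$min" "$ver" | sort -C -V; then
            echo "FAIL: $tool $ver is older than required $min"
            return 1
        fi
    }
    check_min ethtool 5.16   # placeholder minimums
    check_min jq 1.6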
Andrew
2025-11-03, 14:36:00 +0100, Andrew Lunn wrote:
On Mon, Nov 03, 2025 at 11:13:08AM +0100, Sabrina Dubroca wrote:
2025-10-30, 17:02:17 -0700, Jakub Kicinski wrote:
On Fri, 31 Oct 2025 00:13:59 +0100 Sabrina Dubroca wrote:
 set -o pipefail
 
+if ! ethtool --json -k $NSIM_NETDEV > /dev/null 2>&1; then
I guess it's improving the situation, but I've got a system with an ethtool that accepts the --json argument, but silently ignores it for -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json output), which will still cause the test to fail later.
And --json was added to -k in Jan 2022, that's pretty long ago. I'm not sure we need this aspect of the patch at all..
Ok. Then maybe a silly idea: for the tests that currently have some form of "$TOOL is too old" check, do we want to remove those after a while? If so, how long after the feature was introduced in $TOOL?
Another option is to turn them into a hard fail, after X years.
If the "skip if too old" check is removed, the test will fail when run with old tools (because whatever feature is needed will not be supported, so somewhere in the middle of test execution there will be a failure - but the developer will have to figure out "tool too old" from some random command failing).
"check version + hard fail" makes it clear, but the (minor) benefit of simply dropping the check is removing a few unneeded lines.
My guess is, tests which get skipped because the test tools are too old frequently get ignored. Tests which fail are more likely to be looked at, and the tools updated.
Another idea is to have a dedicated test which simply checks the versions of all the tools. It should only pass if the installed tools are sufficiently new that all tests can pass. If you have tools in the grey zone (old enough to cause skips, but not old enough to cause fails), you then just have one failing test you need to turn a blind eye to.
That's assuming people run all the tests every time. Is that really the case, or do people often run the 2-5 tests that cover the area they care about? For example it doesn't make much sense to run nexthop and TC tests for a macsec patch (and the other way around). If my iproute is too old to run some nexthop or TC tests, I can still run the tests I really need for my patch.
But maybe if the tests are run as "run everything" (rather than manually running a few of them), ensuring all the needed tools are recent enough makes sense.
On Mon, Nov 03, 2025 at 04:01:00PM +0100, Sabrina Dubroca wrote:
2025-11-03, 14:36:00 +0100, Andrew Lunn wrote:
On Mon, Nov 03, 2025 at 11:13:08AM +0100, Sabrina Dubroca wrote:
2025-10-30, 17:02:17 -0700, Jakub Kicinski wrote:
On Fri, 31 Oct 2025 00:13:59 +0100 Sabrina Dubroca wrote:
 set -o pipefail
 
+if ! ethtool --json -k $NSIM_NETDEV > /dev/null 2>&1; then
I guess it's improving the situation, but I've got a system with an ethtool that accepts the --json argument, but silently ignores it for -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json output), which will still cause the test to fail later.
And --json was added to -k in Jan 2022, that's pretty long ago. I'm not sure we need this aspect of the patch at all..
Ok. Then maybe a silly idea: for the tests that currently have some form of "$TOOL is too old" check, do we want to remove those after a while? If so, how long after the feature was introduced in $TOOL?
Another option is to turn them into a hard fail, after X years.
If the "skip if too old" check is removed, the test will fail when run with old tools (because whatever feature is needed will not be supported, so somewhere in the middle of test execution there will be a failure - but the developer will have to figure out "tool too old" from some random command failing).
Which is not great. It would be much better if the failure message was: 'ethtool: your version is more than $X years old. Please upgrade'
We could also embed the date the requirement was added into the test. So when $X years have passed, the test will automatically start failing, with no additional work for the test maintainer.
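As a sketch, that could look something like this (the date and grace period are made up, GNU date is assumed, and $NSIM_NETDEV/$ksft_skip come from the patch context):

    # Hypothetical: the date the 'ethtool --json -k' requirement was added,
    # plus a grace period after which the skip turns into a hard failure.
    REQ_ADDED="2022-01-01"
    GRACE="3 years"
    if ! ethtool --json -k "$NSIM_NETDEV" > /dev/null 2>&1; then
        if [ "$(date +%s)" -ge "$(date -d "$REQ_ADDED + $GRACE" +%s)" ]; then
            echo "FAIL: ethtool is more than $GRACE behind, please upgrade"
            exit 1
        fi
        echo "SKIP: No --json -k support in ethtool"
        exit $ksft_skip
    fi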
My guess is, tests which get skipped because the test tools are too old frequently get ignored. Tests which fail are more likely to be looked at, and the tools updated.
Another idea is to have a dedicated test which simply checks the versions of all the tools. It should only pass if the installed tools are sufficiently new that all tests can pass. If you have tools in the grey zone (old enough to cause skips, but not old enough to cause fails), you then just have one failing test you need to turn a blind eye to.
That's assuming people run all the tests every time. Is that really the case, or do people often run the 2-5 tests that cover the area they care about? For example it doesn't make much sense to run nexthop and TC tests for a macsec patch (and the other way around). If my iproute is too old to run some nexthop or TC tests, I can still run the tests I really need for my patch.
But maybe if the tests are run as "run everything" (rather than manually running a few of them), ensuring all the needed tools are recent enough makes sense.
I've not done any of this sort of testing for kernel work, but I have for other projects. As a developer I tend to manually run the test of interest to get the feature working. I then throw the code at a Jenkins instance which runs all the tests, just to find out if I've accidentally broken something elsewhere. It happens, there is a side effect I did not spot, etc. Regression testing tends to run everything, possibly every day, otherwise on each change set. It costs no developer time, other than looking at the status board the next day.
Andrew
On Mon, 3 Nov 2025 11:13:08 +0100 Sabrina Dubroca wrote:
2025-10-30, 17:02:17 -0700, Jakub Kicinski wrote:
On Fri, 31 Oct 2025 00:13:59 +0100 Sabrina Dubroca wrote:
I guess it's improving the situation, but I've got a system with an ethtool that accepts the --json argument, but silently ignores it for -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json output), which will still cause the test to fail later.
And --json was added to -k in Jan 2022, that's pretty long ago. I'm not sure we need this aspect of the patch at all..
Ok. Then maybe a silly idea: for the tests that currently have some form of "$TOOL is too old" check, do we want to remove those after a while? If so, how long after the feature was introduced in $TOOL?
Or should we leave them, but not accept new checks to exclude really-old versions of tools? Do we need to document the cut-off ("we don't support tool versions older than 2 years for networking selftests" [or similar]) somewhere in Documentation/ ?
FWIW my current thinking is to prioritize test development and kernel needs over the ability to run ksft on random old set of tools and have clean skips. IOW avoid complicating writing tests by making the author also responsible for testing versions of all tools.
The list of tools which need to be updated or installed for all networking tests to pass is rather long. My uneducated guess is all these one-off SKIP patches don't amount to much. Here for example the author is fixing one test; I'm pretty sure that far more tests depend on -k --json.
Integrating with NIPA is not that hard, if someone cares about us ensuring that the tests cleanly pass or skip in their env they should start by reporting results to NIPA..
2025-11-03, 16:01:33 -0800, Jakub Kicinski wrote:
On Mon, 3 Nov 2025 11:13:08 +0100 Sabrina Dubroca wrote:
2025-10-30, 17:02:17 -0700, Jakub Kicinski wrote:
On Fri, 31 Oct 2025 00:13:59 +0100 Sabrina Dubroca wrote:
I guess it's improving the situation, but I've got a system with an ethtool that accepts the --json argument, but silently ignores it for -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json output), which will still cause the test to fail later.
And --json was added to -k in Jan 2022, that's pretty long ago. I'm not sure we need this aspect of the patch at all..
Ok. Then maybe a silly idea: for the tests that currently have some form of "$TOOL is too old" check, do we want to remove those after a while? If so, how long after the feature was introduced in $TOOL?
Or should we leave them, but not accept new checks to exclude really-old versions of tools? Do we need to document the cut-off ("we don't support tool versions older than 2 years for networking selftests" [or similar]) somewhere in Documentation/ ?
FWIW my current thinking is to prioritize test development and kernel needs over the ability to run ksft on random old set of tools and have clean skips. IOW avoid complicating writing tests by making the author also responsible for testing versions of all tools.
I see. I liked Andrew's idea ("embed the date the requirement was added into the test"), but it goes completely in the opposite direction.
Why exactly a test failed with an old tool (unexpected output passed to some pipe/parsing, exit with a non-zero code, maybe other issues) is not always obvious. So without version checks on the tools, I think we have to assume that the test requires the latest version of all tools it calls (or at least a very recent one). Which I guess is reasonable for upstream kernel development.
The list of tools which need to be updated or installed for all networking tests to pass is rather long. My uneducated guess is all these one-off SKIP patches don't amount to much. Here for example the author is fixing one test; I'm pretty sure that far more tests depend on -k --json.
A quick grep found only a few more (in python scripts under drivers/net) for -k. But (also from a quick grep) many tests seem to use jq without checking that the command is present.
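The usual guard for that is a one-line presence check near the top of the script, e.g.:

    # Skip cleanly instead of failing mid-test when jq is missing.
    command -v jq > /dev/null 2>&1 || {
        echo "SKIP: jq is not installed"
        exit $ksft_skip
    }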
So I guess you would lean toward not accepting any such patch, not requiring new tests to have SKIP checks, but leaving any existing checks in? (and I suspect removing all the existing ones wouldn't actually reduce the flow of "add check for too old $tool" patches, so it probably doesn't make sense to do that)
On Tue, 4 Nov 2025 12:04:52 +0100 Sabrina Dubroca wrote:
So I guess you would lean toward not accepting any such patch, not requiring new tests to have SKIP checks, but leaving any existing checks in?
Yes, IOW leave it at the discretion of the test author.
(and I suspect removing all the existing ones wouldn't actually reduce the flow of "add check for too old $tool" patches, so it probably doesn't make sense to do that)
On 2025/10/31 7:13, Sabrina Dubroca wrote:
2025-10-30, 11:22:03 +0800, Wang Liang wrote:
This patch adds executable permission to the script 'ethtool-features.sh' and checks for 'ethtool --json -k' support.
Those are two separate things, probably should be two separate patches.
Ok, I will split the executable permission change into a separate patch.
[...]
@@ -7,6 +7,11 @@ NSIM_NETDEV=$(make_netdev)
 
 set -o pipefail
 
+if ! ethtool --json -k $NSIM_NETDEV > /dev/null 2>&1; then
I guess it's improving the situation, but I've got a system with an ethtool that accepts the --json argument, but silently ignores it for -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json output), which will still cause the test to fail later.
That is indeed a bit strange.
I'm not sure of the best way to handle this situation now. Maybe updating ethtool, instead of checking the output, is not a bad approach.
2025-11-03, 16:58:42 +0800, Wang Liang wrote:
On 2025/10/31 7:13, Sabrina Dubroca wrote:
2025-10-30, 11:22:03 +0800, Wang Liang wrote:
This patch adds executable permission to the script 'ethtool-features.sh' and checks for 'ethtool --json -k' support.
Those are two separate things, probably should be two separate patches.
Ok, I will split the executable permission change into a separate patch.
[...]
@@ -7,6 +7,11 @@ NSIM_NETDEV=$(make_netdev)
 
 set -o pipefail
 
+if ! ethtool --json -k $NSIM_NETDEV > /dev/null 2>&1; then
I guess it's improving the situation, but I've got a system with an ethtool that accepts the --json argument, but silently ignores it for -k (ie `ethtool --json -k $DEV` succeeds but doesn't produce a json output), which will still cause the test to fail later.
That is indeed a bit strange.
I'm not sure of the best way to handle this situation now. Maybe updating ethtool, instead of checking the output, is not a bad approach.
That's what Jakub was suggesting in his answer [1]. ethtool has supported json output for -k for almost 4 years; running upstream selftests with a version of ethtool older than that doesn't really make sense, so only the "permission change" patch is really needed.
[1] https://lore.kernel.org/netdev/20251030170217.43e544ad@kernel.org/