On Wed, 31 Jan 2024 10:06:18 -0500 Willem de Bruijn wrote:
Willem, do you still want us to apply this as is, or should we do the 10x only if [ x$KSFT_MACHINE_SLOW != x ]?
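For reference, a minimal sketch of what that gating could look like in a test script. KSFT_MACHINE_SLOW comes from the discussion above; the WAIT_MS variable and the 100ms baseline are purely illustrative, not taken from the actual test:

```shell
#!/bin/sh
# Hypothetical sketch: scale the timing budget only on slow machines.
# WAIT_MS and its baseline value are made up for illustration.

WAIT_MS=100    # baseline budget on non-debug bare metal

# Same non-empty check as [ x$KSFT_MACHINE_SLOW != x ], written quote-safe.
if [ -n "${KSFT_MACHINE_SLOW:-}" ]; then
	WAIT_MS=$((WAIT_MS * 10))
fi

echo "using wait budget: ${WAIT_MS}ms"
```

This keeps fast runners on the tight budget and only pays the 10x runtime where the harness has declared the machine slow.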
If the test passes on all platforms with this change, I think that's still preferable.
The only downside is that it will take 10x runtime. But that will continue on debug and virtualized builds anyway.
On the upside, the (awesome) dashboard does indicate that it passes as is on non-debug metal instances:
https://netdev.bots.linux.dev/contest.html?test=txtimestamp-sh
Let me know if you want me to use this as a testcase for $KSFT_MACHINE_SLOW.
Ah, all good, I thought you were increasing the acceptance criteria.
Otherwise I'll start with the gro and so-txtime tests. They may not be so easily calibrated, as we cannot control the gro timeout or the FQ max horizon.
Paolo also mentioned working on GRO, maybe we need a spreadsheet for people to "reserve" broken tests to avoid duplicating work? :S
In such cases we can use the environment variable to either skip the test entirely or --my preference-- run it to get code coverage, but suppress a failure if it is due to timing (only). Sounds good?
+1 I also think we should run and ignore failure. I was wondering if we can swap FAIL for XFAIL in those cases:
tools/testing/selftests/kselftest.h:
    #define KSFT_XFAIL 2

Documentation/dev-tools/ktap.rst:
    - "XFAIL", which indicates that a test is expected to fail. This is
      similar to "TODO", above, and is used by some kselftest tests.
IDK if that's a stretch or not. Or we can just return PASS with a comment?