Jakub Kicinski wrote:
> On Fri, 26 Jan 2024 21:31:51 -0500 Willem de Bruijn wrote:
> > From: Willem de Bruijn <willemb@google.com>
> >
> > The test sends packets and compares enqueue, transmit and Ack timestamps with expected values. It installs netem delays to increase latency between these points.
> >
> > The test proves flaky in a virtual environment (vng). Increase the delays to reduce variance. Scale the measurement tolerance accordingly.
> >
> > Time-sensitive tests are difficult to calibrate. Increasing the delays 10x also increases runtime 10x, for one. And it may still prove flaky at some rate.
>
> Willem, do you still want us to apply this as is, or should we do the 10x only if [ x$KSFT_MACHINE_SLOW != x ]?
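For anyone following along, the mechanism in question is roughly this: the test adds artificial latency on loopback so the enqueue, transmit and ACK timestamps are spaced well apart, then checks each reported timestamp against its expected offset within a tolerance, so the delay and the tolerance have to scale together. Illustrative sketch only (the values below are not the test's own):

    # add artificial latency on loopback (illustrative value)
    tc qdisc add dev lo root netem delay 1ms
    # ... run the timestamping binary; per timestamp it asserts
    #     |reported - expected| < tolerance ...
    # clean up
    tc qdisc del dev lo root netem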
If the test passes on all platforms with this change, I think that's still preferable.
The only downside is the 10x runtime. But that cost will remain on debug and virtualized builds either way.
On the upside, the awesome dashboard does indicate that it passes as is on non-debug metal instances:
https://netdev.bots.linux.dev/contest.html?test=txtimestamp-sh
Let me know if you want me to use this as a testcase for $KSFT_MACHINE_SLOW.
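If we go that route, the change could be as small as something along these lines in the wrapper script (rough sketch; the base values and variable names are placeholders, not the test's real numbers):

    # scale the netem delays and the measurement tolerance together,
    # but only on machines the kselftest runner flags as slow
    delay_ms=1
    tolerance_usec=500
    if [ "x${KSFT_MACHINE_SLOW}" != x ]; then
        delay_ms=$((delay_ms * 10))
        tolerance_usec=$((tolerance_usec * 10))
    fi
    tc qdisc replace dev lo root netem delay "${delay_ms}ms"
    # pass ${tolerance_usec} down to the test binary's tolerance option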
Otherwise I'll start with the gro and so-txtime tests. They may not be so easily calibrated, as we cannot control the gro timeout nor the FQ max horizon.
In such cases we can use the environment variable to either skip the test entirely or, my preference, run it to get code coverage but suppress a failure if it is due to timing only. Sounds good?
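Concretely, the suppress-on-slow variant could be as simple as wrapping the exit status (rough sketch; the script name is a placeholder, and distinguishing a timing-only failure from a functional one is the part that still needs care):

    # kselftest exit convention: 0 pass, 1 fail, 4 skip
    ./timing_sensitive_test.sh
    ret=$?
    if [ $ret -ne 0 ] && [ "x${KSFT_MACHINE_SLOW}" != x ]; then
        echo "failure on slow machine, assuming timing, not failing the run"
        ret=0    # or report it as xfail instead of pass
    fi
    exit $ret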