Hi Shuah,
On 6/7/24 01:03, Shuah Khan wrote:
On 6/6/24 03:57, Laura Nao wrote:
Hi Shuah,
On 5/6/24 13:13, Laura Nao wrote:
The watchdog selftest script supports various parameters for testing different IOCTLs. The watchdog ping functionality is validated by starting a loop in which the watchdog device is periodically petted, and which can only be stopped by the user interrupting the test.
This results in a timeout when running the test through the kselftest runner with non-oneshot parameters (or no parameters at all):
Sorry for the delay on this.
This test isn't included in the default kselftest run? How are you running this?
The goal of this series is to enable the test to be run individually using the kselftest runner, not as part of the default run. So for example, without args:
make -C tools/testing/selftests TARGETS=watchdog run_tests
or with args:
KSELFTEST_WATCHDOG_TEST_ARGS='-b -d -e -s -t 12 -T 3 -n 7 -N -L' make -C tools/testing/selftests TARGETS=watchdog run_tests
TAP version 13
1..1
# timeout set to 45
# selftests: watchdog: watchdog-test
# Watchdog Ticking Away!
# .............................................#
not ok 1 selftests: watchdog: watchdog-test # TIMEOUT 45 seconds
To address this issue, the first patch in this series limits the loop to 5 iterations by default and adds support for a new '-c' option to customize the number of pings as required.
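For illustration, a minimal sketch of what the bounded ping loop could look like (hypothetical, not the actual patch; only the default of 5 iterations and the '-c' option name come from the description above):

/* Hypothetical sketch only; names and argument handling are illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/watchdog.h>

int main(int argc, char *argv[])
{
	int count = 5;		/* default number of pings */
	int dummy, fd, i;

	/* simplified '-c <n>' handling; the real test parses options with getopt */
	if (argc > 2 && !strcmp(argv[1], "-c"))
		count = atoi(argv[2]);

	fd = open("/dev/watchdog", O_RDWR);
	if (fd < 0) {
		perror("watchdog");
		return 1;
	}

	for (i = 0; i < count; i++) {
		/* pet the watchdog, then sleep for less than the timeout */
		ioctl(fd, WDIOC_KEEPALIVE, &dummy);
		printf(".");
		fflush(stdout);
		sleep(1);
	}

	write(fd, "V", 1);	/* magic close: disarm the watchdog on exit */
	close(fd);
	printf("\n");
	return 0;
}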
The second patch conforms the test output to the KTAP format.
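For reference, KTAP-conformant output from the test would look roughly like this (illustrative only, not the exact strings produced by the patch):

KTAP version 1
1..1
# watchdog: starting ping test
ok 1 watchdog: ping
# Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0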
Gentle ping - any thoughts on this series? It would simplify running the watchdog kselftest in CI environments by leveraging the runner.
This test isn't intended to be included in the default run. It requires loading a watchdog driver first. Do you load the driver from the runner?
I get that this test requires a watchdog driver to be loaded (which in this case can't be added to a config fragment shipped with the selftest, as the drivers are platform-specific) and therefore cannot be included in the default run. However, KTAP output support and a bounded ping loop would allow the test to be run individually in the same way as other selftests (i.e. through the kselftest runner).
Naturally, the driver dependencies must be met for the test to run and produce valid results. From my understanding, the runner itself cannot ensure this, so it would be up to the user or the CI system to enable/load the appropriate drivers before running the test. If these dependencies are not satisfied, the test could simply exit with a skip code (rough sketch below).
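For example, something along these lines at the start of the test (sketch only; kselftest reserves exit code 4 for skipped tests):

/* Hypothetical sketch: skip cleanly when no watchdog device is present. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define KSFT_SKIP 4	/* kselftest exit code for "skipped" */

int main(void)
{
	int fd = open("/dev/watchdog", O_RDWR);

	if (fd < 0) {
		/* no watchdog driver loaded: report a skip, not a failure */
		printf("# /dev/watchdog not available, skipping\n");
		exit(KSFT_SKIP);
	}

	/* ... rest of the test ... */

	write(fd, "V", 1);	/* disarm the watchdog before exiting */
	close(fd);
	return 0;
}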
Does this make sense to you? Or is the kselftest runner intended to run only the subset of tests in the selftests directory that have no platform-specific driver requirements?
Thanks,
Laura