Jakub Kicinski <kuba@kernel.org> writes:
On Thu, 1 Aug 2024 10:36:18 +0200 Petr Machata wrote:
You seem to be right about the exit code. This was discussed some time ago: SKIP is considered a sort of failure. As the person running the test, you would want to go in and fix whatever configuration issue is preventing the test from running. I'm not sure how it works in practice, whether people look for skips in the test log explicitly or rely on exit codes.
Maybe Jakub can chime in, since he's the one that cajoled me into handling this whole SKIP / XFAIL business properly in bash selftests.
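For reference, the exit-code convention in play here is roughly the following. This is only a sketch: the KSFT_* values match tools/testing/selftests/kselftest.h, but the runner_verdict() helper and its skip_is_failure knob are hypothetical, just to show the two ways an executor could count a skip.

    # KSFT_* values as defined in tools/testing/selftests/kselftest.h.
    KSFT_PASS = 0
    KSFT_FAIL = 1
    KSFT_XFAIL = 2
    KSFT_XPASS = 3
    KSFT_SKIP = 4

    def runner_verdict(exit_code, skip_is_failure):
        """Hypothetical classification a test executor might apply."""
        if exit_code in (KSFT_PASS, KSFT_XFAIL):
            return "pass"
        if exit_code == KSFT_SKIP:
            # The open question: does a skip count against the run?
            return "fail" if skip_is_failure else "pass"
        return "fail"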
For HW testing there are a lot more variables than just "is there some tool missing in the VM image". Not sure how well we can do in detecting HW capabilities and XFAILing without making the tests super long. And this case itself is not very clear-cut. On one hand, you expect the test not to run if it's disruptive and the executor can't deal with disruptive tests - IOW it's an eXpected FAIL. On the other hand, it is an executor limitation; the device/driver could have been tested if it weren't for the executor, so it's not entirely dissimilar to a missing tool.
Either way - no strong opinion as of yet; we need someone to actually run these continuously to get experience :(
After sending my response I realized we talked about this once already. Apparently I forgot.
I think it's odd that SKIP is a fail in one framework but a pass in another. But XFAIL is not a good name for something that was not even run. And if we add something like "omit", nobody will know what it means.
Ho hum.
Let's keep SKIP as passing in Python tests then...
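To make that concrete, here is a minimal sketch (not the actual net/lib/py/ksft.py code, just an illustration) of a result tally where a skipped case is reported but doesn't flip the overall exit code:

    class Results:
        def __init__(self):
            self.passed = 0
            self.failed = 0
            self.skipped = 0

        def record(self, outcome):
            if outcome == "pass":
                self.passed += 1
            elif outcome == "skip":
                # Noted in the summary, but not counted as a failure.
                self.skipped += 1
            else:
                self.failed += 1

        def exit_code(self):
            # Only real failures make the run exit non-zero;
            # SKIP stays "passing", as decided above.
            return 1 if self.failed else 0

Anyone who does want to treat skips strictly can still look for them in the test log, as mentioned above.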