On 7/22/24 11:32, Shuah Khan wrote:
On 7/22/24 09:43, Laura Nao wrote:
Consider skipped tests in addition to passed tests when evaluating the overall result of the test suite in the finished() helper.
Signed-off-by: Laura Nao <laura.nao@collabora.com>
 tools/testing/selftests/kselftest/ksft.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kselftest/ksft.py b/tools/testing/selftests/kselftest/ksft.py
index cd89fb2bc10e..bf215790a89d 100644
--- a/tools/testing/selftests/kselftest/ksft.py
+++ b/tools/testing/selftests/kselftest/ksft.py
@@ -70,7 +70,7 @@ def test_result(condition, description=""):
 def finished():
-    if ksft_cnt["pass"] == ksft_num_tests:
+    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
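For reference, a minimal sketch of what the helper would look like with this change applied. The exit-code values and the module state below are assumptions reconstructed from the names in the diff, not a verbatim copy of ksft.py:

import sys

# Assumed module state, mirroring the names used in the diff; the real
# definitions live in tools/testing/selftests/kselftest/ksft.py.
KSFT_PASS = 0
KSFT_FAIL = 1
ksft_num_tests = 0
ksft_cnt = {"pass": 0, "fail": 0, "skip": 0}

def finished():
    # With the proposed change, a run where every test either passed
    # or was skipped would exit with KSFT_PASS.
    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
        exit_code = KSFT_PASS
    else:
        exit_code = KSFT_FAIL
    sys.exit(exit_code)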
Please don't. Counting skips in pass or fail isn't accurate reporting. Skips need to be reported as skips.
More on this, since I keep seeing patches like this one that make the reporting confusing.
There is a reason why you don't want to mark a test as passed when there are several skips. Skips are an indication that there are tests and/or test cases that could not be run because of unmet dependencies. This condition needs to be investigated to see if there are any config options that could be enabled to get better coverage.
Including skips when determining pass gives a false sense of security that all is well when it isn't.
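To illustrate the kind of reporting being asked for here, a sketch that surfaces skips explicitly instead of folding them into pass. This is only an illustration of the argument, not a proposed patch; KSFT_SKIP = 4 follows the conventional kselftest skip exit code, and the other names are assumed from the diff above:

import sys

KSFT_PASS = 0
KSFT_FAIL = 1
KSFT_SKIP = 4  # conventional kselftest skip exit code
ksft_num_tests = 0
ksft_cnt = {"pass": 0, "fail": 0, "skip": 0}

def finished():
    if ksft_cnt["pass"] == ksft_num_tests:
        # Every test genuinely passed.
        exit_code = KSFT_PASS
    elif ksft_cnt["skip"] == ksft_num_tests:
        # Nothing ran at all: report the run as skipped, not passed,
        # so the unmet dependencies stay visible.
        exit_code = KSFT_SKIP
    else:
        # Failures, or a mix of passes and skips, keep a non-pass exit
        # code so the skips get investigated rather than hidden.
        exit_code = KSFT_FAIL
    sys.exit(exit_code)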
thanks,
-- Shuah