Consider skipped tests in addition to passed tests when evaluating the overall result of the test suite in the finished() helper.
Signed-off-by: Laura Nao <laura.nao@collabora.com>
---
 tools/testing/selftests/kselftest/ksft.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kselftest/ksft.py b/tools/testing/selftests/kselftest/ksft.py
index cd89fb2bc10e..bf215790a89d 100644
--- a/tools/testing/selftests/kselftest/ksft.py
+++ b/tools/testing/selftests/kselftest/ksft.py
@@ -70,7 +70,7 @@ def test_result(condition, description=""):
 def finished():
-    if ksft_cnt["pass"] == ksft_num_tests:
+    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
         exit_code = KSFT_PASS
     else:
         exit_code = KSFT_FAIL
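For context, with this one-line change applied the helper would read roughly as follows. This is a sketch reconstructed from the hunk above; the trailing print_cnts() and sys.exit() calls are assumptions about the surrounding module, not part of the diff:

    def finished():
        # Pass overall only when every planned test either passed or skipped.
        if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
            exit_code = KSFT_PASS
        else:
            exit_code = KSFT_FAIL
        # Assumed surrounding behavior: print the counters and exit.
        print_cnts()
        sys.exit(exit_code)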
On 7/22/24 09:43, Laura Nao wrote:
> Consider skipped tests in addition to passed tests when evaluating the
> overall result of the test suite in the finished() helper.
>
> Signed-off-by: Laura Nao <laura.nao@collabora.com>
> ---
>  tools/testing/selftests/kselftest/ksft.py | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kselftest/ksft.py b/tools/testing/selftests/kselftest/ksft.py
> index cd89fb2bc10e..bf215790a89d 100644
> --- a/tools/testing/selftests/kselftest/ksft.py
> +++ b/tools/testing/selftests/kselftest/ksft.py
> @@ -70,7 +70,7 @@ def test_result(condition, description=""):
>  def finished():
> -    if ksft_cnt["pass"] == ksft_num_tests:
> +    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
Please don't. Counting skips in pass or fail isn't accurate reporting.
Skips need to be reported as skips.

thanks,
-- Shuah
On 7/22/24 11:32, Shuah Khan wrote:
> On 7/22/24 09:43, Laura Nao wrote:
>> Consider skipped tests in addition to passed tests when evaluating the
>> overall result of the test suite in the finished() helper.
>>
>> Signed-off-by: Laura Nao <laura.nao@collabora.com>
>> ---
>>  tools/testing/selftests/kselftest/ksft.py | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tools/testing/selftests/kselftest/ksft.py b/tools/testing/selftests/kselftest/ksft.py
>> index cd89fb2bc10e..bf215790a89d 100644
>> --- a/tools/testing/selftests/kselftest/ksft.py
>> +++ b/tools/testing/selftests/kselftest/ksft.py
>> @@ -70,7 +70,7 @@ def test_result(condition, description=""):
>>  def finished():
>> -    if ksft_cnt["pass"] == ksft_num_tests:
>> +    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
>
> Please don't. Counting skips in pass or fail isn't accurate reporting.
> Skips need to be reported as skips.
More on this since I keep seeing patches like this one that make the reporting confusing.
There is a reason why you don't want to mark a test passed when there are several skips. Skips are an indication that there are several tests and/or test cases that couldn't be run because of unmet dependencies. This condition needs to be investigated to see if there are any config options that could be enabled to get better coverage.

Including skips to determine pass gives a false sense of security that all is well when it isn't.

thanks,
-- Shuah
On Mon, Jul 22, 2024 at 11:32:35AM -0600, Shuah Khan wrote:
> On 7/22/24 09:43, Laura Nao wrote:
>> Consider skipped tests in addition to passed tests when evaluating the
>> overall result of the test suite in the finished() helper.
>>
>> Signed-off-by: Laura Nao <laura.nao@collabora.com>
>> ---
>>  tools/testing/selftests/kselftest/ksft.py | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tools/testing/selftests/kselftest/ksft.py b/tools/testing/selftests/kselftest/ksft.py
>> index cd89fb2bc10e..bf215790a89d 100644
>> --- a/tools/testing/selftests/kselftest/ksft.py
>> +++ b/tools/testing/selftests/kselftest/ksft.py
>> @@ -70,7 +70,7 @@ def test_result(condition, description=""):
>>  def finished():
>> -    if ksft_cnt["pass"] == ksft_num_tests:
>> +    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
>
> Please don't. Counting skips in pass or fail isn't accurate reporting.
> Skips need to be reported as skips.
Hi Shuah,
this won't change the skip count, just allow a test suite that has a mix of pass and skip results to exit with code 0. That's the same behavior as the C ksft_finished() helper in kselftest.h:
#define ksft_finished() \
	ksft_exit(ksft_plan == \
		  ksft_cnt.ksft_pass + \
		  ksft_cnt.ksft_xfail + \
		  ksft_cnt.ksft_xskip)
It was my oversight to not do the same in the Python helper.
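For example, under this behavior a suite with a mix of passes and skips exits with code 0. A hypothetical usage sketch (helper names such as print_header(), set_plan(), and test_result_skip() are assumptions about the ksft.py API, not taken from this thread):

    import ksft  # tools/testing/selftests/kselftest/ksft.py

    ksft.print_header()
    ksft.set_plan(2)
    ksft.test_result(True, "device probed")        # counted as a pass
    ksft.test_result_skip("bus absent, skipping")  # counted as a skip
    ksft.finished()  # pass + skip == plan, so exits with KSFT_PASS (0)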
Laura,
I consider this fixing an incorrect behavior, so I'd add this tag:
Fixes: dacf1d7a78bf ("kselftest: Add test to verify probe of devices from discoverable buses")
I think the message is good as is, but maybe it could have mentioned that this matches the behavior of the C helper, just to make the point above clearer.
Reviewed-by: Nícolas F. R. A. Prado <nfraprado@collabora.com>
And just a note for the maintainers, this patch depends on "kselftest: Move ksft helper module to common directory" https://lore.kernel.org/all/20240705-dev-err-log-selftest-v2-2-163b9cd7b3c1@... which was picked up through the usb tree and is queued for 6.11-rc1.
Thanks,
Nícolas
On 7/22/24 20:51, Nícolas F. R. A. Prado wrote:
> On Mon, Jul 22, 2024 at 11:32:35AM -0600, Shuah Khan wrote:
>> On 7/22/24 09:43, Laura Nao wrote:
>>> Consider skipped tests in addition to passed tests when evaluating the
>>> overall result of the test suite in the finished() helper.
>>>
>>> Signed-off-by: Laura Nao <laura.nao@collabora.com>
>>> ---
>>>  tools/testing/selftests/kselftest/ksft.py | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/tools/testing/selftests/kselftest/ksft.py b/tools/testing/selftests/kselftest/ksft.py
>>> index cd89fb2bc10e..bf215790a89d 100644
>>> --- a/tools/testing/selftests/kselftest/ksft.py
>>> +++ b/tools/testing/selftests/kselftest/ksft.py
>>> @@ -70,7 +70,7 @@ def test_result(condition, description=""):
>>>  def finished():
>>> -    if ksft_cnt["pass"] == ksft_num_tests:
>>> +    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
>>
>> Please don't. Counting skips in pass or fail isn't accurate reporting.
>> Skips need to be reported as skips.
>
> Hi Shuah,
>
> this won't change the skip count, just allow a test suite that has a
> mix of pass and skip results to exit with code 0. That's the same
> behavior as the C ksft_finished() helper in kselftest.h:
>
> #define ksft_finished() \
> 	ksft_exit(ksft_plan == \
> 		  ksft_cnt.ksft_pass + \
> 		  ksft_cnt.ksft_xfail + \
> 		  ksft_cnt.ksft_xskip)
Correct, this patch aligns the behavior of the Python helper with both the existing C and bash counterparts (ksft_finished() and ktap_finished() respectively).
> It was my oversight to not do the same in the Python helper.
>
> Laura,
>
> I consider this fixing an incorrect behavior, so I'd add this tag:
>
> Fixes: dacf1d7a78bf ("kselftest: Add test to verify probe of devices from discoverable buses")
>
> I think the message is good as is, but maybe it could have mentioned
> that this matches the behavior of the C helper, just to make the point
> above clearer.
>
> Reviewed-by: Nícolas F. R. A. Prado <nfraprado@collabora.com>
>
> And just a note for the maintainers, this patch depends on "kselftest:
> Move ksft helper module to common directory"
> https://lore.kernel.org/all/20240705-dev-err-log-selftest-v2-2-163b9cd7b3c1@...
> which was picked up through the usb tree and is queued for 6.11-rc1.
Right, thanks!
Shuah, please let me know if you have any other concerns with this patch.
Best,
Laura
On 7/22/24 09:43, Laura Nao wrote:
> Consider skipped tests in addition to passed tests when evaluating the
> overall result of the test suite in the finished() helper.
>
> Signed-off-by: Laura Nao <laura.nao@collabora.com>
> ---
>  tools/testing/selftests/kselftest/ksft.py | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kselftest/ksft.py b/tools/testing/selftests/kselftest/ksft.py
> index cd89fb2bc10e..bf215790a89d 100644
> --- a/tools/testing/selftests/kselftest/ksft.py
> +++ b/tools/testing/selftests/kselftest/ksft.py
> @@ -70,7 +70,7 @@ def test_result(condition, description=""):
>  def finished():
> -    if ksft_cnt["pass"] == ksft_num_tests:
> +    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
>          exit_code = KSFT_PASS
Laura and Nícolas,
I saw both your emails explaining how this fixes the problem in a previous patch.
However, it looks like you haven't seen my response about the implications of exit_code = KSFT_PASS when tests are skipped.

    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
        exit_code = KSFT_PASS
Let me reiterate in case you missed it:
There is a reason why you don't want to mark all tests passed when there are several skips. Skips are an indication that there are several tests and/or test cases that couldn't be run because of unmet dependencies. This condition needs to be investigated to see if there are any config options that could be enabled to get better coverage.

Including skips to determine pass gives a false sense of security that all is well when it isn't.
So it is incorrect to set the exit code to KSFT_PASS when there are skipped tests.
+    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
     else:
         exit_code = KSFT_FAIL

The logic here doesn't seem to account for the condition where you have mixed results of passed, skipped, and failed tests.
Please revisit and figure out how to address this and report the status correctly.
thanks,
-- Shuah
Hi Shuah,
On 7/23/24 18:17, Shuah Khan wrote:
> On 7/22/24 09:43, Laura Nao wrote:
>> Consider skipped tests in addition to passed tests when evaluating the
>> overall result of the test suite in the finished() helper.
>>
>> Signed-off-by: Laura Nao <laura.nao@collabora.com>
>> ---
>>  tools/testing/selftests/kselftest/ksft.py | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tools/testing/selftests/kselftest/ksft.py b/tools/testing/selftests/kselftest/ksft.py
>> index cd89fb2bc10e..bf215790a89d 100644
>> --- a/tools/testing/selftests/kselftest/ksft.py
>> +++ b/tools/testing/selftests/kselftest/ksft.py
>> @@ -70,7 +70,7 @@ def test_result(condition, description=""):
>>  def finished():
>> -    if ksft_cnt["pass"] == ksft_num_tests:
>> +    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
>>          exit_code = KSFT_PASS
>
> Laura and Nícolas,
>
> I saw both your emails explaining how this fixes the problem in a
> previous patch.
>
> However, it looks like you haven't seen my response about the
> implications of exit_code = KSFT_PASS when tests are skipped.
>
>     if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
>         exit_code = KSFT_PASS
>
> Let me reiterate in case you missed it:
>
> There is a reason why you don't want to mark all tests passed when
> there are several skips. Skips are an indication that there are several
> tests and/or test cases that couldn't be run because of unmet
> dependencies. This condition needs to be investigated to see if there
> are any config options that could be enabled to get better coverage.
>
> Including skips to determine pass gives a false sense of security that
> all is well when it isn't.
>
> So it is incorrect to set the exit code to KSFT_PASS when there are
> skipped tests.
Just to be clear, the logic in ksft_finished() in kselftest.h (the C helper) is to exit with KSFT_PASS when there are only passed and skipped tests. You're suggesting we change it to exit with KSFT_FAIL in that case?
Under this new logic, the runner would effectively treat skips as failures, impacting existing kselftests.
> +    if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
>      else:
>          exit_code = KSFT_FAIL
>
> The logic here doesn't seem to account for the condition where you have
> mixed results of passed, skipped, and failed tests.
>
> Please revisit and figure out how to address this and report the status
> correctly.
The logic ensures that if there’s a mix of passed, skipped, and failed tests, the exit code returned is KSFT_FAIL. I believe that if there is at least one failed test, the overall test should be reported as failed, which is what happens in this case.
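As a concrete illustration of that check, here is a minimal standalone sketch (the counter names come from the patch; KSFT_PASS = 0 and KSFT_FAIL = 1 are assumed, matching kselftest.h):

    KSFT_PASS, KSFT_FAIL = 0, 1

    def exit_code(ksft_cnt, ksft_num_tests):
        # Pass overall only if every planned test either passed or skipped.
        if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
            return KSFT_PASS
        return KSFT_FAIL

    # Mixed results (3 passed, 1 skipped, 1 failed out of 5): overall failure.
    assert exit_code({"pass": 3, "skip": 1, "fail": 1}, 5) == KSFT_FAIL
    # Only passes and skips: overall pass.
    assert exit_code({"pass": 4, "skip": 1, "fail": 0}, 5) == KSFT_PASS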
Thanks,
Laura
On 7/29/24 08:52, Laura Nao wrote:
> Hi Shuah,
>
> On 7/23/24 18:17, Shuah Khan wrote:
>> On 7/22/24 09:43, Laura Nao wrote:
>>> Consider skipped tests in addition to passed tests when evaluating the
>>> overall result of the test suite in the finished() helper.
I am finally with you now. Can you please add more information to your short log and commit log?

Isn't this a bug fix? The current logic before this change will report the tests as failed if there are any skipped tests?

Can you send a v2 calling it a fix and explaining the problem clearly?

This isn't a problem in this patch, but I am concerned about simply calling tests passed without calling out skipped tests.

This problem could be solved by printing a message at the end of the test run, whenever the skip count is non-zero, saying that coverage could be increased by enabling config options.
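For illustration, such a note could be added to finished() roughly as follows. This is only a sketch: the message wording is hypothetical, and print_cnts() plus the module's sys import are assumptions about the surrounding code:

    def finished():
        if ksft_cnt["pass"] + ksft_cnt["skip"] == ksft_num_tests:
            exit_code = KSFT_PASS
        else:
            exit_code = KSFT_FAIL

        if ksft_cnt["skip"] > 0:
            # Hypothetical hint printed for non-zero skip counts.
            print(f"# {ksft_cnt['skip']} test(s) skipped: coverage could be "
                  "increased by enabling the relevant config options")

        print_cnts()
        sys.exit(exit_code)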
thanks,
-- Shuah
On 7/29/24 22:06, Shuah Khan wrote:
> On 7/29/24 08:52, Laura Nao wrote:
>> Hi Shuah,
>>
>> On 7/23/24 18:17, Shuah Khan wrote:
>>> On 7/22/24 09:43, Laura Nao wrote:
>>>> Consider skipped tests in addition to passed tests when evaluating
>>>> the overall result of the test suite in the finished() helper.
>
> I am finally with you now. Can you please add more information to your
> short log and commit log?
>
> Isn't this a bug fix? The current logic before this change will report
> the tests as failed if there are any skipped tests?
>
> Can you send a v2 calling it a fix and explaining the problem clearly?
v2 sent: https://lore.kernel.org/all/20240730102928.33182-1-laura.nao@collabora.com/
> This isn't a problem in this patch, but I am concerned about simply
> calling tests passed without calling out skipped tests.
>
> This problem could be solved by printing a message at the end of the
> test run, whenever the skip count is non-zero, saying that coverage
> could be increased by enabling config options.
Right. We can send a separate patch to address this.
Thanks!
Laura
On 7/30/24 04:35, Laura Nao wrote:
> On 7/29/24 22:06, Shuah Khan wrote:
>> On 7/29/24 08:52, Laura Nao wrote:
>>> Hi Shuah,
>>>
>>> On 7/23/24 18:17, Shuah Khan wrote:
>>>> On 7/22/24 09:43, Laura Nao wrote:
>>>>> Consider skipped tests in addition to passed tests when evaluating
>>>>> the overall result of the test suite in the finished() helper.
>>
>> I am finally with you now. Can you please add more information to your
>> short log and commit log?
>>
>> Isn't this a bug fix? The current logic before this change will report
>> the tests as failed if there are any skipped tests?
>>
>> Can you send a v2 calling it a fix and explaining the problem clearly?
>
> v2 sent: https://lore.kernel.org/all/20240730102928.33182-1-laura.nao@collabora.com/
>
>> This isn't a problem in this patch, but I am concerned about simply
>> calling tests passed without calling out skipped tests.
>>
>> This problem could be solved by printing a message at the end of the
>> test run, whenever the skip count is non-zero, saying that coverage
>> could be increased by enabling config options.
>
> Right. We can send a separate patch to address this.
Yes please. Thank you.
thanks,
-- Shuah