On Wed, Dec 2, 2020 at 6:06 PM Andy Shevchenko <andy.shevchenko@gmail.com> wrote:
On Wed, Dec 02, 2020 at 09:51:19AM +0530, Arpitha Raghunandan wrote:
On 01/12/20 4:36 pm, Andy Shevchenko wrote:
On Tue, Dec 1, 2020 at 9:21 AM Arpitha Raghunandan <98.arpi@gmail.com> wrote:
...
I ran both the original and converted tests as-is to produce the output for the success case, and then ran them again with a small modification to show how each reports a failure. The modification drops the final '7' from the last entry of test_data_4_le:

static const char * const test_data_4_le[] __initconst = {
-	"7bdb32be", "b293180a", "24c4ba70", "9b34837d",
+	"7bdb32be", "b293180a", "24c4ba70", "9b3483d",
};
The difference in outputs can be seen here: https://gist.github.com/arpi-r/38f53a3c65692bf684a6bf3453884b6e
This looks pretty good; what I'm sad to see is the absence of the test statistics. Any ideas how we can get them back?
I could bring back the total_tests variable from the original test. Would that be fine?
What I'm talking about is the output; how it's implemented (using the same variable or differently) is up to you. The point is that I want to see the success/total statistics at the end.
I think this should be done in KUnit itself rather than in the individual test cases.
I tend to agree here that this really is something for KUnit. At the moment, the tools/testing/kunit/kunit.py script will parse the kernel log and generate these sorts of statistics. I know that needing to run it through a script might seem like a step backwards, but there's no formal place for statistics in the KTAP specification[1] being worked on to standardise kselftest/kunit output formats. Note that there are other parsers for TAP-like formats which are being used with KUnit results, so systems like LAVA could also sum up these statistics. It's also possible, as Arpitha alluded to, to have the test dump them out as a comment.
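As a very rough sketch of that last option (the function name here is made up, and total_tests/failed_tests mirror the counters in the original module), something like this could dump the numbers as a diagnostic line:

#include <kunit/test.h>

/* Incremented by the individual checks, as in the original module. */
static unsigned int total_tests;
static unsigned int failed_tests;

/*
 * Registered as the last case in the suite so it runs after the others;
 * kunit_info() puts this in the kernel log as a diagnostic, so TAP-ish
 * parsers can still pick it up even without a formal KTAP statistics field.
 */
static void test_hexdump_report_stats(struct kunit *test)
{
	kunit_info(test, "%u out of %u tests passed",
		   total_tests - failed_tests, total_tests);
}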
Relying on kunit.py's counting won't actually work for this test as-is, though, as the KUnit version runs as a single giant test case (so KUnit believes that 1/1 tests have passed, rather than having any more detailed statistics). It looks like there are a few ways to split it up which would make it neater (a test for each of the for() loops in test_hexdump_init() seems sensible to me), but at the moment there's not really a way of programmatically generating test cases which KUnit then counts.
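Roughly, the split could look like this (the case names are illustrative, and the bodies would be carved out of the existing test_hexdump_init() loops):

#include <kunit/test.h>

/* One KUnit case per category of checks from test_hexdump_init(). */
static void test_hexdump_set(struct kunit *test) { /* rowsize loop, ascii=false */ }
static void test_hexdump_set_ascii(struct kunit *test) { /* rowsize loop, ascii=true */ }
static void test_hexdump_overflow_set(struct kunit *test) { /* buflen loop, ascii=false */ }
static void test_hexdump_overflow_set_ascii(struct kunit *test) { /* buflen loop, ascii=true */ }

static struct kunit_case hexdump_test_cases[] = {
	KUNIT_CASE(test_hexdump_set),
	KUNIT_CASE(test_hexdump_set_ascii),
	KUNIT_CASE(test_hexdump_overflow_set),
	KUNIT_CASE(test_hexdump_overflow_set_ascii),
	{}
};

static struct kunit_suite hexdump_test_suite = {
	.name = "hexdump",
	.test_cases = hexdump_test_cases,
};
kunit_test_suite(hexdump_test_suite);

With that, kunit_tool would count four cases instead of one.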
The "Parameterised Tests"[2] work Arpitha has been working on ought to go some way to helping here, though it won't solve this completely in this initial version. The problem there is that parameterised tests are not reported individually in a way the kunit.py parser can report cleanly, yet, so it'll still only be counted as one test until that's changed (though, at least, that shouldn't require any test-specific work).
My suggestion for the ultimate state of the test would be:
- Split the test into separate KUnit tests for the different "categories" of checks (e.g., test_hexdump_set, test_hexdump_overflow_set_ascii, etc.), as in the sketch above.
- Replace the for() loops in test_hexdump_init() with parameters, so that KUnit is aware of the individual runs; a sketch of that follows this list.
- Once KUnit and the tooling support it, these will be reported as subtests. (In the meantime, the results will be listed individually, commented out.)
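Assuming the KUNIT_ARRAY_PARAM/KUNIT_CASE_PARAM macros from [2] roughly in their current form (which may still change before merging), replacing, e.g., the rowsize loop might look something like:

/* The two rowsizes the original loop randomly picks between. */
static const int hexdump_rowsizes[] = { 16, 32 };

/*
 * Generates hexdump_gen_params(), which hands each array entry to the
 * case in turn; the NULL is the optional description callback.
 */
KUNIT_ARRAY_PARAM(hexdump, hexdump_rowsizes, NULL);

static void test_hexdump_set(struct kunit *test)
{
	const int *rowsize = test->param_value;

	/* one iteration of the original loop body, using *rowsize */
}

static struct kunit_case hexdump_test_cases[] = {
	KUNIT_CASE_PARAM(test_hexdump_set, hexdump_gen_params),
	{}
};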
Of course, it'll take a while before all of those KUnit pieces are in place. I personally think a good compromise would be to do just the first of these for now, which would make kunit_tool report at least 4/4 rather than 1/1. Then, once the parameterised testing work is merged (and perhaps the tooling fixes are finished), the test could be updated to take advantage of it.
Cheers,
-- David
[1]: https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD8...
[2]: https://lore.kernel.org/linux-kselftest/20201116054035.211498-1-98.arpi@gmai...