Convert the lib/test_hexdump.c test to KUnit. More information about KUnit can be found at https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html. KUnit provides a common framework for unit tests in the kernel. KUnit and kselftest are standardizing around KTAP; converting this test to KUnit makes it output KTAP, which we are trying to make the standard test result format for the kernel.
I ran both the original and converted tests as-is to produce the output for a successful run in both cases. I also ran the tests with a small modification to show the difference in output when the test fails. The modification I made is:

 static const char * const test_data_4_le[] __initconst = {
-	"7bdb32be", "b293180a", "24c4ba70", "9b34837d",
+	"7bdb32be", "b293180a", "24c4ba70", "9b3483d",
The difference in outputs can be seen here: https://gist.github.com/arpi-r/38f53a3c65692bf684a6bf3453884b6e
Signed-off-by: Arpitha Raghunandan <98.arpi@gmail.com>
---
Changes v2->v3:
- Modify the KUNIT_EXPECT macros used
- Make the kunittest variable static
- Add a more detailed commit message

Changes v1->v2:
- Replace KUNIT_EXPECT_EQ() with KUNIT_EXPECT_EQ_MSG()
- Replace KUNIT_EXPECT_NE() with KUNIT_EXPECT_NE_MSG()
- Print expected and real buffer values in case of test failure
 lib/Kconfig.debug                       |  7 ++-
 lib/Makefile                            |  2 +-
 lib/{test_hexdump.c => hexdump_kunit.c} | 73 +++++++++++--------
 3 files changed, 36 insertions(+), 46 deletions(-)
 rename lib/{test_hexdump.c => hexdump_kunit.c} (78%)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index c789b39ed527..a82ff23b19e7 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2039,9 +2039,6 @@ config ASYNC_RAID6_TEST
 
 	  If unsure, say N.
 
-config TEST_HEXDUMP
-	tristate "Test functions located in the hexdump module at runtime"
-
 config TEST_STRING_HELPERS
 	tristate "Test functions located in the string_helpers module at runtime"
 
@@ -2280,6 +2277,10 @@ config BITS_TEST
 
 	  If unsure, say N.
 
+config HEXDUMP_KUNIT_TEST
+	tristate "KUnit test for functions located in the hexdump module at runtime"
+	depends on KUNIT
+
 config TEST_UDELAY
 	tristate "udelay test driver"
 	help
diff --git a/lib/Makefile b/lib/Makefile
index dc76e7d8a453..f046ecf2817a 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -55,7 +55,6 @@ obj-$(CONFIG_STRING_SELFTEST) += test_string.o
 obj-y += string_helpers.o
 obj-$(CONFIG_TEST_STRING_HELPERS) += test-string_helpers.o
 obj-y += hexdump.o
-obj-$(CONFIG_TEST_HEXDUMP) += test_hexdump.o
 obj-y += kstrtox.o
 obj-$(CONFIG_FIND_BIT_BENCHMARK) += find_bit_benchmark.o
 obj-$(CONFIG_TEST_BPF) += test_bpf.o
@@ -352,3 +351,4 @@ obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o
 obj-$(CONFIG_BITS_TEST) += bits_kunit.o
 obj-$(CONFIG_LINEAR_RANGES_TEST) += linear_ranges_kunit.o
 obj-$(CONFIG_LIST_KUNIT_TEST) += list_kunit.o
+obj-$(CONFIG_HEXDUMP_KUNIT_TEST) += hexdump_kunit.o
diff --git a/lib/test_hexdump.c b/lib/hexdump_kunit.c
similarity index 78%
rename from lib/test_hexdump.c
rename to lib/hexdump_kunit.c
index 5144899d3c6b..c12f3229485c 100644
--- a/lib/test_hexdump.c
+++ b/lib/hexdump_kunit.c
@@ -3,6 +3,7 @@
  */
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <kunit/test.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -61,12 +62,11 @@ static const char * const test_data_8_be[] __initconst = {
 
 #define FILL_CHAR '#'
 
-static unsigned total_tests __initdata;
-static unsigned failed_tests __initdata;
+static struct kunit *kunittest;
 
-static void __init test_hexdump_prepare_test(size_t len, int rowsize,
-					     int groupsize, char *test,
-					     size_t testlen, bool ascii)
+static void test_hexdump_prepare_test(size_t len, int rowsize,
+				      int groupsize, char *test,
+				      size_t testlen, bool ascii)
 {
 	char *p;
 	const char * const *result;
@@ -122,14 +122,12 @@ static void __init test_hexdump_prepare_test(size_t len, int rowsize,
 
 #define TEST_HEXDUMP_BUF_SIZE		(32 * 3 + 2 + 32 + 1)
 
-static void __init test_hexdump(size_t len, int rowsize, int groupsize,
-				bool ascii)
+static void test_hexdump(size_t len, int rowsize, int groupsize,
+			 bool ascii)
 {
 	char test[TEST_HEXDUMP_BUF_SIZE];
 	char real[TEST_HEXDUMP_BUF_SIZE];
 
-	total_tests++;
-
 	memset(real, FILL_CHAR, sizeof(real));
 	hex_dump_to_buffer(data_b, len, rowsize, groupsize, real,
 			   sizeof(real), ascii);
@@ -138,15 +136,12 @@ static void __init test_hexdump(size_t len, int rowsize, int groupsize,
 	test_hexdump_prepare_test(len, rowsize, groupsize, test,
 				  sizeof(test), ascii);
 
-	if (memcmp(test, real, TEST_HEXDUMP_BUF_SIZE)) {
-		pr_err("Len: %zu row: %d group: %d\n", len, rowsize, groupsize);
-		pr_err("Result: '%s'\n", real);
-		pr_err("Expect: '%s'\n", test);
-		failed_tests++;
-	}
+	KUNIT_EXPECT_FALSE_MSG(kunittest, memcmp(test, real, TEST_HEXDUMP_BUF_SIZE),
+			       "Len: %zu row: %d group: %d\nResult: '%s'\nExpect: '%s'\n",
+			       len, rowsize, groupsize, real, test);
 }
 
-static void __init test_hexdump_set(int rowsize, bool ascii)
+static void test_hexdump_set(int rowsize, bool ascii)
 {
 	size_t d = min_t(size_t, sizeof(data_b), rowsize);
 	size_t len = get_random_int() % d + 1;
@@ -157,9 +152,9 @@ static void __init test_hexdump_set(int rowsize, bool ascii)
 	test_hexdump(len, rowsize, 1, ascii);
 }
 
-static void __init test_hexdump_overflow(size_t buflen, size_t len,
-					 int rowsize, int groupsize,
-					 bool ascii)
+static void test_hexdump_overflow(size_t buflen, size_t len,
+				  int rowsize, int groupsize,
+				  bool ascii)
 {
 	char test[TEST_HEXDUMP_BUF_SIZE];
 	char buf[TEST_HEXDUMP_BUF_SIZE];
@@ -167,8 +162,6 @@ static void __init test_hexdump_overflow(size_t buflen, size_t len,
 	int ae, he, e, f, r;
 	bool a;
 
-	total_tests++;
-
 	memset(buf, FILL_CHAR, sizeof(buf));
 
 	r = hex_dump_to_buffer(data_b, len, rs, gs, buf, buflen, ascii);
@@ -196,16 +189,13 @@ static void __init test_hexdump_overflow(size_t buflen, size_t len,
 
 	buf[sizeof(buf) - 1] = '\0';
 
-	if (!a) {
-		pr_err("Len: %zu buflen: %zu strlen: %zu\n",
-			len, buflen, strnlen(buf, sizeof(buf)));
-		pr_err("Result: %d '%s'\n", r, buf);
-		pr_err("Expect: %d '%s'\n", e, test);
-		failed_tests++;
-	}
+	KUNIT_EXPECT_TRUE_MSG(kunittest, a,
+			      "Len: %zu buflen: %zu strlen: %zu\nResult: %d '%s'\nExpect: %d '%s'\n",
+			      len, buflen, strnlen(buf, sizeof(buf)),
+			      r, buf, e, test);
 }
 
-static void __init test_hexdump_overflow_set(size_t buflen, bool ascii)
+static void test_hexdump_overflow_set(size_t buflen, bool ascii)
 {
 	unsigned int i = 0;
 	int rs = (get_random_int() % 2 + 1) * 16;
@@ -218,10 +208,11 @@ static void __init test_hexdump_overflow_set(size_t buflen, bool ascii)
 	} while (i++ < 3);
 }
 
-static int __init test_hexdump_init(void)
+static void test_hexdump_init(struct kunit *test)
 {
 	unsigned int i;
 	int rowsize;
+	kunittest = test;
 
 	rowsize = (get_random_int() % 2 + 1) * 16;
 	for (i = 0; i < 16; i++)
@@ -236,21 +227,19 @@ static int __init test_hexdump_init(void)
 
 	for (i = 0; i <= TEST_HEXDUMP_BUF_SIZE; i++)
 		test_hexdump_overflow_set(i, true);
+}
 
-	if (failed_tests == 0)
-		pr_info("all %u tests passed\n", total_tests);
-	else
-		pr_err("failed %u out of %u tests\n", failed_tests, total_tests);
+static struct kunit_case hexdump_test_cases[] = {
+	KUNIT_CASE(test_hexdump_init),
+	{}
+};
 
-	return failed_tests ? -EINVAL : 0;
-}
-module_init(test_hexdump_init);
+static struct kunit_suite hexdump_test_suite = {
+	.name = "hexdump-kunit-test",
+	.test_cases = hexdump_test_cases,
+};
 
-static void __exit test_hexdump_exit(void)
-{
-	/* do nothing */
-}
-module_exit(test_hexdump_exit);
+kunit_test_suite(hexdump_test_suite);
 
 MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
 MODULE_LICENSE("Dual BSD/GPL");
On Tue, Dec 1, 2020 at 9:21 AM Arpitha Raghunandan <98.arpi@gmail.com> wrote:
> Convert the lib/test_hexdump.c test to KUnit. More information about KUnit can be found at https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html. KUnit provides a common framework for unit tests in the kernel. KUnit and kselftest are standardizing around KTAP; converting this test to KUnit makes it output KTAP, which we are trying to make the standard test result format for the kernel.
The part below doesn't suit the commit message; perhaps adding it after the '---' line would be good. In the commit message you can choose one failed case followed by all OK and show the difference.
> I ran both the original and converted tests as-is to produce the output for a successful run in both cases. I also ran the tests with a small modification to show the difference in output when the test fails. The modification I made is:
>
>  static const char * const test_data_4_le[] __initconst = {
> -	"7bdb32be", "b293180a", "24c4ba70", "9b34837d",
> +	"7bdb32be", "b293180a", "24c4ba70", "9b3483d",
>
> The difference in outputs can be seen here:
> https://gist.github.com/arpi-r/38f53a3c65692bf684a6bf3453884b6e
Looks pretty much good; what I'm sad to see is the absence of the test statistics. Any ideas on how we can get them back?
On 01/12/20 4:36 pm, Andy Shevchenko wrote:
> On Tue, Dec 1, 2020 at 9:21 AM Arpitha Raghunandan <98.arpi@gmail.com> wrote:
>> Convert the lib/test_hexdump.c test to KUnit. More information about KUnit can be found at https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html. KUnit provides a common framework for unit tests in the kernel. KUnit and kselftest are standardizing around KTAP; converting this test to KUnit makes it output KTAP, which we are trying to make the standard test result format for the kernel.
>
> The part below doesn't suit the commit message; perhaps adding it after the '---' line would be good. In the commit message you can choose one failed case followed by all OK and show the difference.
Okay, I will make this change.
>> I ran both the original and converted tests as-is to produce the output for a successful run in both cases. I also ran the tests with a small modification to show the difference in output when the test fails. The modification I made is:
>>
>>  static const char * const test_data_4_le[] __initconst = {
>> -	"7bdb32be", "b293180a", "24c4ba70", "9b34837d",
>> +	"7bdb32be", "b293180a", "24c4ba70", "9b3483d",
>>
>> The difference in outputs can be seen here:
>> https://gist.github.com/arpi-r/38f53a3c65692bf684a6bf3453884b6e
>
> Looks pretty much good; what I'm sad to see is the absence of the test statistics. Any ideas on how we can get them back?
I could again include the total_tests variable as in the original test. Would that be fine?
Thanks!
On Wed, Dec 02, 2020 at 09:51:19AM +0530, Arpitha Raghunandan wrote:
> On 01/12/20 4:36 pm, Andy Shevchenko wrote:
>> On Tue, Dec 1, 2020 at 9:21 AM Arpitha Raghunandan <98.arpi@gmail.com> wrote:

...

>>> I ran both the original and converted tests as-is to produce the output for a successful run in both cases. I also ran the tests with a small modification to show the difference in output when the test fails. The modification I made is:
>>>
>>>  static const char * const test_data_4_le[] __initconst = {
>>> -	"7bdb32be", "b293180a", "24c4ba70", "9b34837d",
>>> +	"7bdb32be", "b293180a", "24c4ba70", "9b3483d",
>>>
>>> The difference in outputs can be seen here:
>>> https://gist.github.com/arpi-r/38f53a3c65692bf684a6bf3453884b6e
>>
>> Looks pretty much good; what I'm sad to see is the absence of the test statistics. Any ideas on how we can get them back?
>
> I could again include the total_tests variable as in the original test. Would that be fine?
What I'm talking about is the output. How it will be implemented (using the same variable or differently) is up to you. So the point is: I want to see the statistics of success/total at the end.

I think this should be done in KUnit rather than in the individual test cases.
On Wed, Dec 2, 2020 at 6:06 PM Andy Shevchenko <andy.shevchenko@gmail.com> wrote:
> On Wed, Dec 02, 2020 at 09:51:19AM +0530, Arpitha Raghunandan wrote:
>> On 01/12/20 4:36 pm, Andy Shevchenko wrote:
>>> On Tue, Dec 1, 2020 at 9:21 AM Arpitha Raghunandan <98.arpi@gmail.com> wrote:

...

>>>> I ran both the original and converted tests as-is to produce the output for a successful run in both cases. I also ran the tests with a small modification to show the difference in output when the test fails. The modification I made is:
>>>>
>>>>  static const char * const test_data_4_le[] __initconst = {
>>>> -	"7bdb32be", "b293180a", "24c4ba70", "9b34837d",
>>>> +	"7bdb32be", "b293180a", "24c4ba70", "9b3483d",
>>>>
>>>> The difference in outputs can be seen here:
>>>> https://gist.github.com/arpi-r/38f53a3c65692bf684a6bf3453884b6e
>>>
>>> Looks pretty much good; what I'm sad to see is the absence of the test statistics. Any ideas on how we can get them back?
>>
>> I could again include the total_tests variable as in the original test. Would that be fine?
>
> What I'm talking about is the output. How it will be implemented (using the same variable or differently) is up to you. So the point is: I want to see the statistics of success/total at the end.
>
> I think this should be done in KUnit rather than in the individual test cases.
I tend to agree here that this really is something for KUnit. At the moment, the tools/testing/kunit/kunit.py script will parse the kernel log and generate these sorts of statistics. I know that needing to run it through a script might seem like a step backwards, but there's no formal place for statistics in the KTAP specification[1] being worked on to standardise kselftest/kunit output formats. Note that there are other parsers for TAP-like formats which are being used with KUnit results, so systems like LAVA could also sum up these statistics. It's also possible, as Arpitha alluded to, to have the test dump them out as a comment.
This won't actually work for this test as-is, though, as the KUnit version is running as a single giant test case (so KUnit believes that 1/1 tests have passed, rather than having any more-detailed statistics). It looks like there are a few ways to split it up a bit which would make it neater (a test each for the for() loops in test_hexdump_init() seems sensible to me), but at the moment, there's not really a way of programmatically generating test cases which KUnit then counts.
The "Parameterised Tests"[2] work Arpitha has been working on ought to go some way to helping here, though it won't solve this completely in this initial version. The problem there is that parameterised tests are not reported individually in a way the kunit.py parser can report cleanly, yet, so it'll still only be counted as one test until that's changed (though, at least, that shouldn't require any test-specific work).
My suggestion for the ultimate state of the test would be:
- Split up the test into separate KUnit tests for the different
  "categories" of tests (e.g., test_hexdump_set,
  test_hexdump_overflow_set_ascii, etc.)
- Replace the for loops in test_hexdump_init() with parameters, so
  that KUnit is aware of the original runs.
- Once KUnit and the tooling support it, these will be reported as
  subtests. (In the meantime, the results will be listed individually,
  commented out.)
Of course, it'll take a while before all of those KUnit pieces are in place. I personally think that a good compromise would be to just do the first of these for now, which would make kunit_tool give at least a 4/4 rather than 1/1 result. Then, once the parameterised testing work is merged (and perhaps the tooling fixes are finished), the tests could be updated to take advantage of that.
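For illustration, a minimal sketch of that first step: splitting test_hexdump_init() into one KUnit case per for() loop, so kunit_tool counts 4/4 instead of 1/1. The case names below are invented for this sketch; the helpers (test_hexdump_set(), test_hexdump_overflow_set(), the kunittest variable, TEST_HEXDUMP_BUF_SIZE) are the ones from the patch above:

/*
 * Illustrative sketch only, not code from this thread: one KUnit case
 * per loop of the original test_hexdump_init().
 */
static void test_hexdump_set_case(struct kunit *test)
{
	int rowsize = (get_random_int() % 2 + 1) * 16;
	unsigned int i;

	kunittest = test;
	for (i = 0; i < 16; i++)
		test_hexdump_set(rowsize, false);
}

static void test_hexdump_set_ascii_case(struct kunit *test)
{
	int rowsize = (get_random_int() % 2 + 1) * 16;
	unsigned int i;

	kunittest = test;
	for (i = 0; i < 16; i++)
		test_hexdump_set(rowsize, true);
}

static void test_hexdump_overflow_set_case(struct kunit *test)
{
	unsigned int i;

	kunittest = test;
	for (i = 0; i <= TEST_HEXDUMP_BUF_SIZE; i++)
		test_hexdump_overflow_set(i, false);
}

static void test_hexdump_overflow_set_ascii_case(struct kunit *test)
{
	unsigned int i;

	kunittest = test;
	for (i = 0; i <= TEST_HEXDUMP_BUF_SIZE; i++)
		test_hexdump_overflow_set(i, true);
}

static struct kunit_case hexdump_test_cases[] = {
	KUNIT_CASE(test_hexdump_set_case),
	KUNIT_CASE(test_hexdump_set_ascii_case),
	KUNIT_CASE(test_hexdump_overflow_set_case),
	KUNIT_CASE(test_hexdump_overflow_set_ascii_case),
	{}
};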
Cheers,
-- David
[1]: https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD8...
[2]: https://lore.kernel.org/linux-kselftest/20201116054035.211498-1-98.arpi@gmai...
On Wed, Dec 2, 2020 at 1:57 PM David Gow <davidgow@google.com> wrote:
> On Wed, Dec 2, 2020 at 6:06 PM Andy Shevchenko <andy.shevchenko@gmail.com> wrote:
>> On Wed, Dec 02, 2020 at 09:51:19AM +0530, Arpitha Raghunandan wrote:

...

>> What I'm talking about is the output. How it will be implemented (using the same variable or differently) is up to you. So the point is: I want to see the statistics of success/total at the end.
>>
>> I think this should be done in KUnit rather than in the individual test cases.
>
> I tend to agree here that this really is something for KUnit. At the moment, the tools/testing/kunit/kunit.py script will parse the kernel log and generate these sorts of statistics. I know that needing to run it through a script might seem like a step backwards, but there's no formal place for statistics in the KTAP specification[1] being worked on to standardise kselftest/kunit output formats.
Then it sucks. Fix the specification (in the long term). And does it have a comment style of messages so that we can have these statistics printed (but maybe not parsed)?
> Note that there are other parsers for TAP-like formats which are being used with KUnit results, so systems like LAVA could also sum up these statistics. It's also possible, as Arpitha alluded to, to have the test dump them out as a comment.
Fine by me.
> This won't actually work for this test as-is, though, as the KUnit version is running as a single giant test case (so KUnit believes that 1/1 tests have passed, rather than having any more-detailed statistics). It looks like there are a few ways to split it up a bit which would make it neater (a test each for the for() loops in test_hexdump_init() seems sensible to me), but at the moment, there's not really a way of programmatically generating test cases which KUnit then counts.
Fix it, please. We rely on these statistics quite a lot.
The "Parameterised Tests"[2] work Arpitha has been working on ought to go some way to helping here, though it won't solve this completely in this initial version. The problem there is that parameterised tests are not reported individually in a way the kunit.py parser can report cleanly, yet, so it'll still only be counted as one test until that's changed (though, at least, that shouldn't require any test-specific work).
My suggestion for the ultimate state of the test would be:
- Split up the test into separate KUnit tests for the different
"categories" of tests: (e.g., test_hexdump_set, test_hexdump_overflow_set_ascii, etc)
- Replace the for loops in test_hexdump_init() with parameters, so
that KUnit is aware of the original runs.
- Once KUnit and the tooling supports it, these will be reported as
subtests. (In the meantime, the results will be listed individually, commented out)
I'm fine as long as we have this information printed to the user.
> Of course, it'll take a while before all of those KUnit pieces are in place. I personally think that a good compromise would be to just do the first of these for now, which would make kunit_tool give at least a 4/4 rather than 1/1 result. Then, once the parameterised testing work is merged (and perhaps the tooling fixes are finished), the tests could be updated to take advantage of that.
How can we guarantee it will not be forgotten?
On Wed, Dec 2, 2020 at 8:21 PM Andy Shevchenko <andy.shevchenko@gmail.com> wrote:
> On Wed, Dec 2, 2020 at 1:57 PM David Gow <davidgow@google.com> wrote:
>> On Wed, Dec 2, 2020 at 6:06 PM Andy Shevchenko <andy.shevchenko@gmail.com> wrote:
>>> On Wed, Dec 02, 2020 at 09:51:19AM +0530, Arpitha Raghunandan wrote:

...

>>> What I'm talking about is the output. How it will be implemented (using the same variable or differently) is up to you. So the point is: I want to see the statistics of success/total at the end.
>>>
>>> I think this should be done in KUnit rather than in the individual test cases.
>>
>> I tend to agree here that this really is something for KUnit. At the moment, the tools/testing/kunit/kunit.py script will parse the kernel log and generate these sorts of statistics. I know that needing to run it through a script might seem like a step backwards, but there's no formal place for statistics in the KTAP specification[1] being worked on to standardise kselftest/kunit output formats.
>
> Then it sucks. Fix the specification (in the long term). And does it have a comment style of messages so that we can have these statistics printed (but maybe not parsed)?
I should clarify: there's nothing in the spec which explicitly defines a place for such statistics (nor anything which requires them). There are "diagnostic" lines which are not parsed, so it'd be possible to output statistics there. KUnit itself doesn't at present, but it allows individual tests to log diagnostic lines which could carry such statistics, particularly in cases like this where the full structure of the tests isn't quite exposed to the framework.
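As a sketch of that idea: a test could keep its own counters and emit them as a KTAP diagnostic ("# ...") line via kunit_info(). The counter names below are hypothetical stand-ins for the old total_tests/failed_tests variables; KUnit does not maintain them for you:

/*
 * Sketch only: emit success/total statistics as a diagnostic line.
 * total_checks/failed_checks would be updated by the test's own
 * sub-checks, as the removed total_tests/failed_tests counters were.
 */
static unsigned int total_checks;
static unsigned int failed_checks;

static void test_hexdump_report_stats(struct kunit *test)
{
	/* ... run the hexdump sub-checks, updating the counters ... */

	kunit_info(test, "%u/%u sub-checks passed\n",
		   total_checks - failed_checks, total_checks);
}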
>> Note that there are other parsers for TAP-like formats which are being used with KUnit results, so systems like LAVA could also sum up these statistics. It's also possible, as Arpitha alluded to, to have the test dump them out as a comment.
>
> Fine by me.
>
>> This won't actually work for this test as-is, though, as the KUnit version is running as a single giant test case (so KUnit believes that 1/1 tests have passed, rather than having any more-detailed statistics). It looks like there are a few ways to split it up a bit which would make it neater (a test each for the for() loops in test_hexdump_init() seems sensible to me), but at the moment, there's not really a way of programmatically generating test cases which KUnit then counts.
>
> Fix it, please. We rely on these statistics quite a lot.
The hope is that the Parameterised Test feature will make this possible (though, as mentioned, there are a few other issues around then making those statistics available, but we should be able to work through those).
It may be a silly question, but what is it that makes these statistics useful in this test? Maybe I'm misunderstanding something, but I'd have thought that the important things were whether or not _all_ tests had passed and -- if not -- _which_ ones had failed. Is the count of failing cases within a test like this really that useful for debugging, or is it more for comparing against different versions? Either way, we'll try to make sure they're available.
The "Parameterised Tests"[2] work Arpitha has been working on ought to go some way to helping here, though it won't solve this completely in this initial version. The problem there is that parameterised tests are not reported individually in a way the kunit.py parser can report cleanly, yet, so it'll still only be counted as one test until that's changed (though, at least, that shouldn't require any test-specific work).
My suggestion for the ultimate state of the test would be:
- Split up the test into separate KUnit tests for the different
"categories" of tests: (e.g., test_hexdump_set, test_hexdump_overflow_set_ascii, etc)
- Replace the for loops in test_hexdump_init() with parameters, so
that KUnit is aware of the original runs.
- Once KUnit and the tooling supports it, these will be reported as
subtests. (In the meantime, the results will be listed individually, commented out)
I'm fine as long as we have this information printed to the user.
Okay -- Arpitha: does this seem like a sensible approach to you?
If printing it to the kernel log is really essential, I'll have a look into how we can do that in KUnit.
>> Of course, it'll take a while before all of those KUnit pieces are in place. I personally think that a good compromise would be to just do the first of these for now, which would make kunit_tool give at least a 4/4 rather than 1/1 result. Then, once the parameterised testing work is merged (and perhaps the tooling fixes are finished), the tests could be updated to take advantage of that.
>
> How can we guarantee it will not be forgotten?
Thinking about it further, if we can get the parameterised testing stuff in 5.11 as planned, then any tooling/output fixes done later would automatically apply. Maybe rather than doing an intermediate version with just the first splitting up of tests, we try it with the current parameterised test patches, and we can possibly prototype some kernel-side statistics reporting which should work. To be honest, though, the subtest support on the kunit_tool side is likely to take quite a while, so it would be nice to have something (like statistics in the kernel log) which ameliorate the problem in the meantime.
I'll have a bit of a play around with the test output and parsing code this week and see if there's a simple change that can get us most of the way there. I think something should be possible if the test uses the Parameterised testing feature.
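For reference, a rough sketch of what one of the overflow loops might look like under the parameterised-testing API proposed in Arpitha's series ([2] above), assuming KUNIT_ARRAY_PARAM()/KUNIT_CASE_PARAM() land as posted there; the buflen parameter table is invented for illustration, and test_hexdump_overflow_set()/kunittest are from the patch at the top of the thread:

/*
 * Sketch against the proposed parameterised-testing API; each entry in
 * buflen_params[] becomes an individually-reported parameter run.
 */
static const size_t buflen_params[] = { 0, 1, 16, 32, TEST_HEXDUMP_BUF_SIZE };

/* Defines buflen_gen_params(), which steps through buflen_params[]. */
KUNIT_ARRAY_PARAM(buflen, buflen_params);

static void test_hexdump_overflow_param(struct kunit *test)
{
	const size_t *buflen = test->param_value;

	kunittest = test;
	test_hexdump_overflow_set(*buflen, false);
}

static struct kunit_case hexdump_param_cases[] = {
	KUNIT_CASE_PARAM(test_hexdump_overflow_param, buflen_gen_params),
	{}
};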
Cheers,
-- David