On Tue, Feb 18, 2025 at 04:20:06PM +0800, David Gow wrote:
On Mon, 17 Feb 2025 at 19:00, Thomas Weißschuh <thomas.weissschuh@linutronix.de> wrote:
Currently, testing of userspace and in-kernel APIs uses two different frameworks: kselftests for the userspace ones and KUnit for the in-kernel ones. Besides their different scopes, both have different strengths and limitations:
KUnit:
- Tests are normal kernel code.
- They use the regular kernel toolchain.
- They can be packaged and distributed as modules conveniently.
Kselftests:
- Tests are normal userspace code.
- They need a userspace toolchain. A kernel cross toolchain is likely not enough.
- A fair amount of userland is required to run the tests, which means a full distro or a handcrafted rootfs.
- There is no way to conveniently package and run kselftests with a given kernel image.
- The kselftests makefiles are not as powerful as regular kbuild. For example, they lack proper header dependency tracking and support for more complex compiler option modifications.
Therefore KUnit is much easier to run against different kernel configurations and architectures.

This series aims to combine kselftests and KUnit, avoiding both their limitations. It works by compiling the userspace kselftests as part of the regular kernel build, embedding them into the KUnit kernel or module, and executing them from there. If the kernel toolchain cannot produce userspace binaries because of a missing libc, the kernel's own nolibc can be used instead. The structured TAP output from the kselftest is integrated transparently into the KUnit KTAP output, so the KUnit parser can parse the combined logs.
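For illustration (suite and test names invented here), the combined output could look like this, with the selftest's TAP indented as a subtest per Documentation/dev-tools/ktap.rst:

        KTAP version 1
        1..1
            TAP version 13
            1..2
            ok 1 first_check
            ok 2 second_check
        ok 1 example_uapi_suite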
Wow -- this is really neat! Thanks for putting this together.
I haven't had a chance to play with it in detail yet, but here are a few initial / random thoughts:
- Having support for running things from userspace within a KUnit test
seems like it's something that could be really useful for testing syscalls (and maybe other mm / exec code as well).
That's the target :-)
I'm also looking for more descriptive naming ideas.
- I don't think we can totally combine kselftests and KUnit for all
tests (some of the selftests definitely require more complicated dependencies than I think KUnit would want to reasonably support or require).
Agreed, though I somewhat expect that some complex selftests would be simplified to work with this scheme, since it should improve test coverage from the bots.
- The in-kernel KUnit framework doesn't have any knowledge of the
structure or results of a uapi test. It'd be nice to at least be able to get the process exit status, and bubble up a basic 'passed'/'skipped'/'failed' so that we're not reporting success for failed tests (and so that simple test executables could run without needing to output their own KTAP if they only run one test).
Currently any exit code != 0 fails the test. I'll add some proper handling for exit(KSFT_SKIP).
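A minimal sketch of how that mapping could look in the in-kernel runner (the function name is hypothetical; KSFT_SKIP is 4, from tools/testing/selftests/kselftest.h):

        #include <kunit/test.h>

        #define KSFT_SKIP 4     /* mirrors tools/testing/selftests/kselftest.h */

        /* Hypothetical: map the selftest's exit code onto the KUnit result. */
        static void kunit_uapi_check_exit_code(struct kunit *test, int exit_code)
        {
                if (exit_code == KSFT_SKIP)
                        kunit_skip(test, "selftest requested skip");
                KUNIT_EXPECT_EQ_MSG(test, exit_code, 0,
                                    "selftest exited with a non-zero code");
        }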
- Equally, for some selftests, it's probably a pain to have to write a
kernel module if there's nothing that needs to be done in the kernel. Maybe such tests could still be built with nolibc and a kernel toolchain, but be triggered directly from the python tooling (e.g. as the 'init' process).
Some autodiscovery based on linker sections could be done. However, that would not yet define how to group the tests into suites. Having one explicit reference in a module makes everything easier to understand. What about a helper macro for the test case definition, e.g. KUNIT_CASE_UAPI(symbol), as sketched below?
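A rough sketch, modelled on KUNIT_CASE(); the runner function, the example case name, and the way the embedded blob is located are assumptions for illustration, not part of this series:

        /*
         * Hypothetical helper: like KUNIT_CASE(), but runs an embedded
         * userspace executable.  How the blob is handed to the generic
         * runner (e.g. a new struct kunit_case member) is left open here.
         */
        #define KUNIT_CASE_UAPI(symbol)                         \
                {                                               \
                        .run_case = kunit_uapi_run_case,        \
                        .name = #symbol,                        \
                }

        static struct kunit_case vdso_uapi_cases[] = {
                KUNIT_CASE_UAPI(vdso_test_getrandom),   /* name invented */
                {}
        };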
All UAPI tests of a subsystem can share the same module, so the overhead should be limited. I'd like to keep it usable without needing the python tooling.
Note, in case it was not clear: all test executables are available as normal files in the build directory and can also be executed from there.
- There still seem to be some increased requirements over plain KUnit
at the moment: I'm definitely seeing issues from not having the right libgcc installed for all architectures. (Though it's working for most of them, which is very neat!)
I'll look into that.
- This is a great example of how having standardised result formats is useful!
Indeed, it was surprisingly compatible.
- If this is going to change or blur the boundary between "this is a
kselftest" and "this is a KUnit test", we probably will need to update Documentation/dev-tools/testing-overview.rst -- it probably needs some clarifications there _anyway_, so this is probably a good point to ensure everyone's on the same page.
Agreed.
Do you have a particular real-world (non-example) test you'd like to either write or port to use this? I think it'd be great to see some real-world examples of where this'd be most useful.
I want to use it for the vDSO selftests. To be usable for that, another series is necessary [0]. I tested the whole thing locally with one selftest and promptly found a bug in the selftests [1].
Either way, I'll keep playing with this a bit over the next few days. I'd love to hear what Shuah and Rae think as well, since this involves kselftest and KTAP a lot.
Thanks! I'm also looking forward to their feedback.
Thomas
<snip>
[0] https://lore.kernel.org/lkml/20250203-parse_vdso-nolibc-v1-0-9cb6268d77be@li...
[1] https://lore.kernel.org/lkml/20250217-selftests-vdso-s390-gnu-hash-v2-1-f6c2...