On Tue, Jun 3, 2025 at 9:26 PM Peter Zijlstra peterz@infradead.org wrote:
On Mon, Jun 02, 2025 at 01:13:29PM +0200, Maxime Ripard wrote:
I can't operate kunit
Why not?
Too complicated. People have even wrecked tools/testing/selftests/ to the point that it is now nearly impossible to run the simple selftests :-(
And while I don't mind tests -- they're quite useful -- KUnit just looks to make it all more complicated than it needs to be. Not to mention there seem to be snakes involved -- and I never can remember how that works.
I've been out of the loop for a while, but I'm curious. What parts in particular are the most annoying, or is it basically all of them?
Is it that adding a new test file requires editing at least 3 files (Makefile, Kconfig, and the actual test.c file)? Is it editing/writing the tests themselves because the C API is hard to use (too many functions to learn just to do simple things, etc.)?
For me personally, it's the first part, all the additional edits you have to make. I _hate_ doing it, but I can't think of a good alternative that feels like it makes the right tradeoffs (implementation complexity, requiring users to learn some new system or not, etc.).
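For anyone following along, the C side of that is at least fairly small. A minimal test file, with made-up "foo" names just to show the shape, looks roughly like this:

  /* lib/foo_test.c -- hypothetical example; foo_add() is a stand-in
   * for whatever function is actually under test. */
  #include <kunit/test.h>

  static void foo_add_test(struct kunit *test)
  {
          KUNIT_EXPECT_EQ(test, foo_add(1, 2), 3);
  }

  static struct kunit_case foo_test_cases[] = {
          KUNIT_CASE(foo_add_test),
          {}
  };

  static struct kunit_suite foo_test_suite = {
          .name = "foo",
          .test_cases = foo_test_cases,
  };
  kunit_test_suite(foo_test_suite);

On top of that come the two edits that feel like busywork: the Kconfig entry for the test and a one-liner along the lines of obj-$(CONFIG_FOO_KUNIT_TEST) += foo_test.o in the Makefile.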
Basically, if the stuff takes more effort to get running than the time it runs for, it's a loss. And in that respect much of the kernel testing stuff is a fail. Just too damn hard to make work.
I want to: make; ./run.sh or something similarly trivial. But clearly that is too much to ask these days :-(
Agreed that ultimately, it would be nice if it was as simple as one of these:
  $ run_kunit_tests --suite=test_suite_name
  $ run_kunit_tests --in_file=lib/my_test.c
or something similar.
But while I don't see a way to get all the way there, if you've set your new test config to `default KUNIT_ALL_TESTS` (sketched below) and are fine running on UML, then it could be as simple as
  $ ./tools/testing/kunit/kunit.py run
It should basically do what you want: `make` to regenerate the .config, build, and then execute the tests.
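For reference, the `default KUNIT_ALL_TESTS` bit means a Kconfig entry roughly like this (FOO is made up; the pattern is the one the KUnit docs recommend):

  config FOO_KUNIT_TEST
          tristate "KUnit tests for foo" if !KUNIT_ALL_TESTS
          depends on KUNIT
          default KUNIT_ALL_TESTS
          help
            Enable this to build the KUnit tests for foo.

That's what lets the test get built and run automatically whenever KUNIT_ALL_TESTS is set, so `kunit.py run` picks it up without any extra config fiddling.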
But I can see these pain points with it:
a) it'll run a bunch of other tests too by default, but they shouldn't be failing, nor should they add too much build or test runtime overhead.
b) if the new test you're adding doesn't work on UML, you'd have to fiddle with enabling more kconfig options or switch arches.
c) it can be confusing that it has multiple subcommands in addition to `run`, and it's not immediately clear when/why you'd ever use them.
d) it's not like kselftest, which is just part of make, i.e. `make TARGETS="foo" kselftest`.
* even if kunit.py was dead simple (and it's not, but I don't think it's _that_ complex), it's another tool to learn and keep in your head.
Do these cover what you've experienced? Or are there others?
I spent almost a full day trying to get kvm selftests working a couple of weeks ago; that's time I don't have. And it makes me want to go hulk and smash things.
Stepping back, I think there'll always be relatively simple things that take a bit too much effort to do in KUnit.
But I'd like to get to the point where anyone can feel comfortable doing the very simple things. And I don't want it to be with the caveat of "after they've read 10 pages of docs", because none of us have the time for that, as you say.
E.g. if someone is introducing a new data structure, it should be easy to ask them to learn enough KUnit to write _basic_ sanity tests for it and add them to their patch series. Maybe it's annoying to cover all the edge cases properly, and very difficult to try and test concurrent reads/writes, but those are inherently harder problems.
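Concretely, the bar I have in mind is something like the below. foo_stack and its helpers are made up, and the suite registration boilerplate is the same as in the earlier sketch, but the test body itself stays at the "push something, pop it back" level:

  /* Hypothetical sanity check for an imaginary foo_stack; suite
   * registration boilerplate omitted. */
  static void foo_stack_push_pop_test(struct kunit *test)
  {
          struct foo_stack *s = kunit_kzalloc(test, sizeof(*s), GFP_KERNEL);

          /* kunit_kzalloc() memory is cleaned up when the test ends */
          KUNIT_ASSERT_NOT_ERR_OR_NULL(test, s);

          foo_stack_init(s);
          KUNIT_EXPECT_TRUE(test, foo_stack_empty(s));

          foo_stack_push(s, 42);
          KUNIT_EXPECT_FALSE(test, foo_stack_empty(s));
          KUNIT_EXPECT_EQ(test, foo_stack_pop(s), 42);
          KUNIT_EXPECT_TRUE(test, foo_stack_empty(s));
  }

Nothing in there needs the fancier parts of the API, and it's still enough to catch the dumb regressions.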
Cheers,
Daniel