On Mon, Nov 7, 2022 at 10:38 AM Michał Winiarski <michal.winiarski@intel.com> wrote:
On Thu, Nov 03, 2022 at 04:23:02PM +0100, Mauro Carvalho Chehab wrote:
Hi,
I'm facing a couple of issues when testing KUnit with the i915 driver.
The DRM subsystem and the i915 driver have, for a long time, had their own way of doing unit tests, which seems to predate KUnit.
I'm now checking whether it is worth starting to use KUnit in i915. So, I wrote an RFC with some patches adding support for reporting the tests we already have via kernel TAP and KUnit.
There are basically 3 groups of tests there:
- mock tests - check i915 hardware-independent logic;
- live tests - run some hardware-specific tests;
- perf tests - check perf support - also hardware-dependent.
As they depend on the i915 driver, they run only on x86 with the PCI stack enabled, but the mock tests run nicely under QEMU.
The live and perf tests require real hardware. As we run them together with our CI, which, among other things, tests module unload/reload and tests loading the i915 driver with different modprobe parameters, the KUnit tests should be able to run as a module.
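Just to illustrate what I mean by module-capable tests, here is a minimal sketch (the i915_mock_example names below are made up for illustration, not actual i915 code); kunit_test_suite() registers the suite whether it is built in or built as a module:

#include <kunit/test.h>

/* Purely illustrative test case: no i915 code involved. */
static void i915_mock_example_test(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, 2 + 2, 4);
}

static struct kunit_case i915_mock_example_cases[] = {
        KUNIT_CASE(i915_mock_example_test),
        {}
};

static struct kunit_suite i915_mock_example_suite = {
        .name = "i915-mock-example",
        .test_cases = i915_mock_example_cases,
};

/* Works for both built-in and modular (tristate) builds. */
kunit_test_suite(i915_mock_example_suite);

MODULE_LICENSE("GPL");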
Note that KUnit tests that do more functional/integration testing (on "live" hardware) rather than unit testing (where hardware interactions are mocked) are not very common. Do we have other KUnit tests like this merged?
I don't think we have other tests like this.
Some of the "live tests" are not even that, being more of a pure hardware tests (e.g. live_workarounds, which is checking whether values in MMIO regs stick over various HW state transitions).
I'm wondering, is KUnit the right tool for this job?
The main focus of KUnit is on hw-independent tests. So in theory: no.
But I can imagine it could be easier to write the validation via KUNIT_EXPECT_EQ and friends as opposed to writing your own kernel module w/ its own set of macros, etc.
So my first thought is: "if it works, then you can try using it." (You might want to take steps like making sure they don't get enabled by CONFIG_KUNIT_ALL_TESTS=y.)
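For instance (just a sketch; the config name below is made up), the usual way a test opts in to KUNIT_ALL_TESTS is a "default KUNIT_ALL_TESTS" line in its Kconfig entry, so simply leaving that line out keeps the hw-dependent tests from being switched on by CONFIG_KUNIT_ALL_TESTS=y:

config DRM_I915_KUNIT_LIVE_TEST
        tristate "KUnit live (hardware-dependent) tests for i915"
        depends on DRM_I915 && KUNIT
        help
          Runs the hardware-dependent i915 selftests through KUnit.
          Intentionally has no "default KUNIT_ALL_TESTS", so it is not
          enabled automatically by CONFIG_KUNIT_ALL_TESTS=y.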
Talking with David, he seems to have echoed my thoughts. David also suggested that maybe the test could use a fake of the hw by default, but have an option to run against real hw when available. I think that sounds like a good chunk of work, so I don't know if you need to worry about that.
Daniel