On Tue, 28 Oct 2025 03:05:03 +0100 Andrew Lunn wrote:
On Mon, Oct 27, 2025 at 05:15:39PM -0700, Jakub Kicinski wrote:
On Mon, 27 Oct 2025 20:46:00 +0100 Andrew Lunn wrote:
We expect failure to leave the configuration unchanged. So I would actually do:

    try:
        before = get()
        set()
    except:
        after = get()
        fail(after != before)
Please allow me to introduce you to the magic of defer() ;)
That is why I don't like magic, especially in tests which have no documentation of the expected results. For me, tests should be dumb, often boringly repetitive, and at least 50% comments: explaining what is being tested, what the expected outcome is, and most importantly, why that is the expected outcome.
We actively avoid creating framework-y stuff.
defer() is one of the few things we added. A single config change is fine, but undoing a sequence of actions quickly becomes gnarly. And defer() itself is very straightforward... once you know about it ;)
https://github.com/linux-netdev/nipa/wiki/Guidance-for-test-authors
This registers a command to run after the test exits:

    defer(cfg.eth.channels_set, ehdr | restore)
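As a rough sketch of undoing a sequence of changes (the helper and key names here are illustrative, not the exact selftest API):

    # each change registers its own undo; the defer()ed calls run in
    # reverse order when the test exits, whether it passed or failed
    before = cfg.eth.channels_get(ehdr)
    cfg.eth.channels_set(ehdr | {'combined-count': 1})
    defer(cfg.eth.channels_set, ehdr | {'combined-count': before['combined-count']})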
Also, does nlError contain the error code?
    fail(e.errcode not in (EINVAL, EOPNOTSUPP))

It would be good to detect and fail ENOTSUPP, which does appear every so often when it should not.
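Something along these lines, in the same pseudocode style as above (ENOTSUPP has no constant in Python's errno module, since it is a kernel-internal value):

    ENOTSUPP = 524  # kernel-internal errno, should never leak to user space

    try:
        set()
    except NlError as e:
        # a driver returning ENOTSUPP is a bug in itself
        fail(e.errcode == ENOTSUPP)
        fail(e.errcode not in (EINVAL, EOPNOTSUPP))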
Dunno, checkpatch warns about ENOTSUPP. I don't think that checking for funny error codes in every test scales :(
How about in the nlError constructor? That gives you a single location, and you can accept EINVAL, EOPNOTSUPP, ENODEV, ENOMEM, maybe E2BIG. Cause the test to fail for everything else. If anybody reports test failures with other errno values, the list can be expanded if they are sensible.
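Roughly, as a sketch (the class and attribute names are illustrative, not the actual ynl code):

    from errno import EINVAL, EOPNOTSUPP, ENODEV, ENOMEM

    ACCEPTED = {EINVAL, EOPNOTSUPP, ENODEV, ENOMEM}

    class NlError(Exception):
        def __init__(self, errcode):
            super().__init__(f'netlink error: {errcode}')
            self.errcode = errcode
            # one central place to catch funny errno values; extend
            # the list if somebody reports a sensible new one
            assert errcode in ACCEPTED, f'unexpected errno: {errcode}'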
Maybe it's just my subjective preference, but I think we need to be judicious about what we test. I see no value in checking errno here. This is mostly a "crash test"; IOW, the most common issues it will find will be crashes, WARN()s, or a broken device. Nothing subtle.
Is 0 ever valid? I would actually test 0 and make sure it fails with EINVAL or EOPNOTSUPP. Getting range checks wrong is a typical bug, so it is good to test them. The happy-path cases are boring because developers tend to test those, so they are hardly worth testing. It's the edge cases which should be tested.
I believe that 0 is a valid setting. I don't have much experience with devices which support it, but presumably using 0 to disable mini/jumbo rings would make sense, for example? And max validation is done by the core, so nothing interesting to explore there at the driver level :(
Looking at the code, it seems to cost nothing to actually test 0, if you say it could be valid. That might also find an off-by-one error, if it causes something to go negative or wrap to a large positive value.
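E.g., a sketch along the lines of the earlier snippets (the ring key name is illustrative):

    # probe 0: either it is accepted as "disable this ring", or it must
    # be rejected cleanly; a crash or a wrapped value is exactly the
    # kind of off-by-one this would catch
    try:
        cfg.eth.rings_set(ehdr | {'rx-jumbo': 0})
    except NlError as e:
        fail(e.errcode not in (EINVAL, EOPNOTSUPP))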
Okay. But for the record, I'd strongly prefer it if the level of nitpicking was correlated with the reviewer's authorship of tests :/