On Tue, Sep 14, 2021 at 11:30 AM Linus Torvalds <torvalds@linux-foundation.org> wrote:
On Tue, Sep 14, 2021 at 11:14 AM Linus Torvalds <torvalds@linux-foundation.org> wrote:
All this pain could have been trivially avoided with just people writing better code, knowing that multiplies and divides are expensive, and that shift counts are small and cheap.
IOW, maybe the fix is just this attached trivial patch.
I didn't bother to change the order of the 'struct nbd_config' structure. It would pack better if you put the (now 32-bit) blksize_bits field next to the 'atomic_t' field, I think. But I wanted to just see how a minimal patch looks.
I did make the debugfs interface reflect the change to blksize_bits, so this is visible in user space. But it's debugfs.
If people care, it could be made to use a DEFINE_SHOW_ATTRIBUTE() function the way it already does for 'flags', so that's not a fundamental issue, I just didn't bother.
Hmm?
Btw, I really think more of the block layer should perhaps think about use bit shifts more, not expanded values. Can things like the queue 'discard_alignment' really be non-powers-of-two?
Linus
Any issues passing a loff_t (aka long long) to __ffs(), which expects an unsigned long, on ILP32 targets where unsigned long is only 32 bits? (I hate the whole family of ffs()... why did ffs() ever accept just an int?!)
Any issues modifying the sysfs interface? Perhaps something in userspace relies on parsing those strings?
Other than that LGTM, and I like your new overflow check. :)