On Mon, Jun 08, 2020 at 03:03:05PM +0300, Andy Shevchenko wrote:
> On Mon, Jun 8, 2020 at 1:26 PM Alexander Gordeev <agordeev@linux.ibm.com> wrote:
> > Commit 2d6261583be0 ("lib: rework bitmap_parse()") does not take into
> > account the order of halfwords on 64-bit big-endian architectures. As a
> > result, (at least) Receive Packet Steering, IRQ affinity masks and the
> > runtime kernel test "test_bitmap" are broken on s390.
> ...
> > +#if defined(__BIG_ENDIAN) && defined(CONFIG_64BIT)
> I think it's better to reuse existing patterns:
>
> ipc/sem.c:1682:#if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
> > +static void save_x32_chunk(unsigned long *maskp, u32 chunk, int chunk_idx)
> > +{
> > +	maskp += chunk_idx / 2;
> > +	((u32 *)maskp)[(chunk_idx & 1) ^ 1] = chunk;
> > +}
> > +#else
> > +static void save_x32_chunk(unsigned long *maskp, u32 chunk, int chunk_idx)
> > +{
> > +	((u32 *)maskp)[chunk_idx] = chunk;
> > +}
> > +#endif
> See below.
> ...
> > -	end = bitmap_get_x32_reverse(start, end, bitmap++);
> > +	end = bitmap_get_x32_reverse(start, end, &chunk);
> >  	if (IS_ERR(end))
> >  		return PTR_ERR(end);
> > +
> > +	save_x32_chunk(maskp, chunk, chunk_idx++);
> Can't we simply do
>
> int chunk_index = 0;
> ...
> do {
> #if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
> 	end = bitmap_get_x32_reverse(start, end, bitmap[chunk_index ^ 1]);
> #else
> 	end = bitmap_get_x32_reverse(start, end, bitmap[chunk_index]);
> #endif
> ...
> } while (++chunk_index);
>
> ?
Well, we could, unless we ignore coding style section 21 ("Conditional Compilation"), which advises against #ifdefs in .c code. Do you still insist it would be better?
Thanks for the review!
> --
> With Best Regards,
> Andy Shevchenko