On Mon, Jan 8, 2024 at 12:53 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
From: Maxim Mikityanskiy <maxim@isovalent.com>
The previous commit made it possible to preserve the boundaries and track the IDs of scalars on narrowing fills. Add test cases for that pattern.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
 .../selftests/bpf/progs/verifier_spill_fill.c | 108 ++++++++++++++++++
 1 file changed, 108 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
index fab8ae9fe947..3764111d190d 100644
--- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
+++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
@@ -936,4 +936,112 @@ l0_%=: r0 = 0; \
 	: __clobber_all);
 }
+SEC("xdp")
+__description("32-bit fill after 64-bit spill")
+__success __retval(0)
+__naked void fill_32bit_after_spill_64bit(void)
I guess these tests are an answer to my question about mixing spill/fill sizes on the earlier patch (so disregard that).
+{
+	asm volatile("					\
+	/* Randomize the upper 32 bits. */		\
+	call %[bpf_get_prandom_u32];			\
+	r0 <<= 32;					\
+	/* 64-bit spill r0 to stack. */			\
+	*(u64*)(r10 - 8) = r0;				\
+	/* 32-bit fill r0 from stack. */		\
+	r0 = *(u32*)(r10 - %[offset]);		\
have you considered doing the BYTE_ORDER check right here and embedding the offset in the assembly instruction directly:
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	r0 = *(u32*)(r10 - 8);
#else
	r0 = *(u32*)(r10 - 4);
#endif
It's a bit less jumping around the code when reading, and it makes it obvious that this is endianness-dependent without having to jump to the definition of %[offset].
+	/* Boundary check on r0 with predetermined result. */\
+	if r0 == 0 goto l0_%=;			\
+	/* Dead branch: the verifier should prune it. Do an invalid memory\
+	 * access if the verifier follows it.		\
+	 */						\
+	r0 = *(u64*)(r9 + 0);			\
+l0_%=:	exit;					\
+"	:
+	: __imm(bpf_get_prandom_u32),
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	  __imm_const(offset, 8)
+#else
+	  __imm_const(offset, 4)
+#endif
+	: __clobber_all);
+}
[...]