On Mon, Dec 25, 2023 at 1:11 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
On Sun, 24 Dec 2023 at 19:15:42 -0800, Alexei Starovoitov wrote:
On Wed, Dec 20, 2023 at 1:40 PM Maxim Mikityanskiy <maxtram95@gmail.com> wrote:
From: Maxim Mikityanskiy <maxim@isovalent.com>
Currently, when a scalar bounded register is spilled to the stack, its ID is preserved, but only if it was already assigned, i.e. if this register was MOVed before.
Assign an ID on spill if none is set, so that equal scalars could be tracked if a register is spilled to the stack and filled into another register.
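To illustrate, a hypothetical instruction sequence (not taken from the selftests) that this change makes trackable:

    r0 = ...                    ; bounded scalar, no ID assigned yet
    *(u64 *)(r10 -8) = r0       ; spill: r0 and fp-8 now share a fresh ID
    r2 = *(u64 *)(r10 -8)       ; fill into another register: same ID
    if r2 > 0x100 goto out      ; refining r2 also refines r0 and fp-8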
One test is adjusted to reflect the change in register IDs.
Signed-off-by: Maxim Mikityanskiy <maxim@isovalent.com>
 kernel/bpf/verifier.c                                   | 8 +++++++-
 .../selftests/bpf/progs/verifier_direct_packet_access.c | 2 +-
 2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b757fdbbbdd2..caa768f1e369 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4503,9 +4503,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 	mark_stack_slot_scratched(env, spi);
 	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
+		bool reg_value_fits;
+
+		reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
+		/* Make sure that reg had an ID to build a relation on spill. */
+		if (reg_value_fits)
+			assign_scalar_id_before_mov(env, reg);
Thanks. I just debugged this issue as part of my bpf_cmp series.
llvm generated:
    1093: (7b) *(u64 *)(r10 -96) = r0     ; R0_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0 fp-96_w=scalar(smin=smin32=-4095,smax=smax32=256)
    ; if (bpf_cmp(filepart_length, >, MAX_PATH))
    1094: (25) if r0 > 0x100 goto pc+903  ; R0_w=scalar(id=53,smin=smin32=0,smax=umax=smax32=umax32=256,var_off=(0x0; 0x1ff))
The verifier refined the range of 'r0' here, but the code later just read the spilled value back from the stack:
    1116: (79) r1 = *(u64 *)(r10 -64)     ; R1_w=map_value
    ; payload += filepart_length;
    1117: (79) r2 = *(u64 *)(r10 -96)     ; R2_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0 fp-96=scalar(smin=smin32=-4095,smax=smax32=256)
    1118: (0f) r1 += r2                   ; R1_w=map_value(map=data_heap,ks=4,vs=23040,off=148,smin=smin32=-4095,smax=smax32=3344)
And later it errors with: "R1 min value is negative, either use unsigned index or do a if (index >=0) check."
This verifier improvement is certainly necessary.
Glad that you found it useful!
Since you've analyzed this issue, did you figure out a workaround for C code on existing and older kernels?
Uhm... in my case (Cilium, it was a while ago) I made a big change (reorganizing function calls and revalidate_data() calls) that changed the codegen significantly, and the problematic pattern disappeared.
I can suggest playing with volatile: e.g., declare filepart_length as volatile; if that doesn't help, create another volatile variable and copy filepart_length into it before doing the bpf_cmp (a reg-to-reg copy assigns an ID, though I'm not sure the values will still live in registers after being declared volatile).
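Something like this (a sketch only, untested; filepart_length, MAX_PATH and bpf_cmp are taken from the log above, the rest is assumed):

    /* Copy through a volatile temporary so the compiler emits a real
     * copy of filepart_length before the comparison (sketch only).
     */
    volatile __u64 tmp = filepart_length;
    if (bpf_cmp(tmp, >, MAX_PATH))
            goto err;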
Unfortunately, I couldn't reproduce your issue locally, so I couldn't try these suggestions myself.
No worries.
What LLVM version do you see the issue on? I can try to look for a specific C workaround if I reproduce it locally.
BTW, the asm workaround is obvious (copy the reg to another reg to assign an ID), so maybe an inline asm like this would do the trick?
asm volatile("r8 = %0" :: "r"(filepart_length) : "r8");
Right. I tried:

    asm volatile("%[reg]=%[reg]"::[reg]"r"((short)filepart_length));
and it forces ID assignment, but depending on the code it might still be too late. I've seen this pattern:

    call ...
    *(u64 *)(r10 -96) = r0
    r0 = r0                     // asm trick above
    if r0 > 0x100 goto pc+903
So it may or may not help, but it was good to understand this issue.
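For contrast, the ordering that would make the trick effective (illustrative only; whether LLVM emits it this way is outside the programmer's control):

    call ...
    r0 = r0                     // copy assigns an ID before the spill
    *(u64 *)(r10 -96) = r0      // the spilled slot inherits that ID
    if r0 > 0x100 goto pc+903   // refinement now also reaches fp-96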