The edge range checking for registers is now supported by the verifier, so activate the extended logic in range_cond() in tools/testing/selftests/bpf/prog_tests/reg_bounds.c to test it.
Also add some cases to the "crafted_cases" array for this logic; they mainly test the edges of the src and dst registers (see the sketch below).
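To illustrate what the extended OP_NE logic derives, here is a minimal standalone sketch (not the selftest or verifier code; struct rng and tighten_ne() are made up for this example): on the "X != C" branch, a constant C that sits exactly on an edge of X's range lets that edge be moved inward by one.

  /* Minimal sketch, assuming an inclusive unsigned range [a, b]. */
  struct rng { unsigned long long a, b; };

  /* Range of x on the "x != c" branch: if the constant c sits exactly
   * on one edge of x's range, that edge can be moved inward by one;
   * otherwise nothing new can be concluded about x.
   */
  static struct rng tighten_ne(struct rng x, unsigned long long c)
  {
          if (x.a == x.b)
                  return x;       /* x is itself a constant; leave it */
          if (c == x.a)
                  x.a += 1;       /* constant matches the lower edge */
          else if (c == x.b)
                  x.b -= 1;       /* constant matches the upper edge */
          return x;
  }

The new crafted cases exercise such situations by pairing a full-range register with a constant placed on one of its edges, e.g. {U64, U64, {0, U64_MAX}, {U64_MAX, U64_MAX}}.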
All reg_bounds tests have passed in SLOW_TESTS mode:
  $ export SLOW_TESTS=1 && ./test_progs -t reg_bounds -j
  Summary: 65/18959832 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Menglong Dong <menglong8.dong@gmail.com>
---
v5:
- add "{U32, U32, {0, U32_MAX}, {U32_MAX, U32_MAX}}"
v4:
- remove duplicated s32 casting
v3:
- adjust the crafted cases that we added
v2:
- add some cases to the "crafted_cases"
---
 .../selftests/bpf/prog_tests/reg_bounds.c     | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/reg_bounds.c b/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
index 3bf4ddd720a8..820d0bcfc474 100644
--- a/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
+++ b/tools/testing/selftests/bpf/prog_tests/reg_bounds.c
@@ -590,12 +590,7 @@ static void range_cond(enum num_t t, struct range x, struct range y,
 		*newy = range(t, max_t(t, x.a, y.a), min_t(t, x.b, y.b));
 		break;
 	case OP_NE:
-		/* generic case, can't derive more information */
-		*newx = range(t, x.a, x.b);
-		*newy = range(t, y.a, y.b);
-		break;
-
-		/* below extended logic is not supported by verifier just yet */
+		/* below logic is supported by the verifier now */
 		if (x.a == x.b && x.a == y.a) {
 			/* X is a constant matching left side of Y */
 			*newx = range(t, x.a, x.b);
@@ -2101,6 +2096,18 @@ static struct subtest_case crafted_cases[] = {
 	{S32, S64, {(u32)S32_MIN, (u32)(s32)-255}, {(u32)(s32)-2, 0}},
 	{S32, S64, {0, 1}, {(u32)S32_MIN, (u32)S32_MIN}},
 	{S32, U32, {(u32)S32_MIN, (u32)S32_MIN}, {(u32)S32_MIN, (u32)S32_MIN}},
+
+	/* edge overlap testings for BPF_NE */
+	{U64, U64, {0, U64_MAX}, {U64_MAX, U64_MAX}},
+	{U64, U64, {0, U64_MAX}, {0, 0}},
+	{S64, U64, {S64_MIN, 0}, {S64_MIN, S64_MIN}},
+	{S64, U64, {S64_MIN, 0}, {0, 0}},
+	{S64, U64, {S64_MIN, S64_MAX}, {S64_MAX, S64_MAX}},
+	{U32, U32, {0, U32_MAX}, {0, 0}},
+	{U32, U32, {0, U32_MAX}, {U32_MAX, U32_MAX}},
+	{S32, U32, {(u32)S32_MIN, 0}, {0, 0}},
+	{S32, U32, {(u32)S32_MIN, 0}, {(u32)S32_MIN, (u32)S32_MIN}},
+	{S32, U32, {(u32)S32_MIN, S32_MAX}, {S32_MAX, S32_MAX}},
 };
 
 /* Go over crafted hard-coded cases. This is fast, so we do it as part of