For now, a BPF program of type BPF_PROG_TYPE_TRACING can only be attached to a single hook, so we have to create one BPF program for each kernel function that we want to trace, even though all the programs have the same (or similar) logic. This consumes extra memory and makes program loading slow when there are plenty of kernel functions to trace.
In commit 4a1e7c0c63e0 ("bpf: Support attaching freplace programs to multiple attach points"), freplace BPF programs gained support for attaching to multiple attach points. In this series, we extend that to fentry/fexit/raw_tp/...
In the 1st patch, we add support for recording the indexes of the target function args that a tracing program accesses. Meanwhile, we add the function btf_check_func_part_match() to compare the accessed function args of two function prototypes. This function will be used in the next commit.
In the 2nd patch, we adjust bpf_tracing_prog_attach() to support attaching a program to multiple targets.
In the 3rd patch, we allow setting a bpf cookie in bpf_link_create() even if target_btf_id is set, as we are now allowed to attach the tracing program to a new target.
In the 4th patch, we introduce the function libbpf_find_kernel_btf_id() to libbpf to find the btf type id of a kernel function; this function will be used in the next commit.
In the 5th patch, we add the testcases for this series.
Menglong Dong (5):
  bpf: tracing: add support to record and check the accessed args
  bpf: tracing: support to attach program to multi hooks
  libbpf: allow to set cookie when target_btf_id is set in bpf_link_create
  libbpf: add the function libbpf_find_kernel_btf_id()
  selftests/bpf: add test cases for multiple attach of tracing program
 include/linux/bpf.h                                  |   6 +
 include/uapi/linux/bpf.h                             |   1 +
 kernel/bpf/btf.c                                     | 121 ++++++++++++++
 kernel/bpf/syscall.c                                 | 118 +++++++++++---
 tools/lib/bpf/bpf.c                                  |  17 +-
 tools/lib/bpf/libbpf.c                               |  83 ++++++++++
 tools/lib/bpf/libbpf.h                               |   3 +
 tools/lib/bpf/libbpf.map                             |   1 +
 .../selftests/bpf/bpf_testmod/bpf_testmod.c          |  49 ++++++
 .../bpf/prog_tests/tracing_multi_attach.c            | 153 ++++++++++++++++++
 .../selftests/bpf/progs/tracing_multi_test.c         |  66 ++++++++
 11 files changed, 583 insertions(+), 35 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/tracing_multi_attach.c
 create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_test.c
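To illustrate the intended usage, here is a minimal userspace sketch (illustration only; it assumes a program already loaded with expected_attach_type BPF_TRACE_FENTRY and the libbpf helper added in the 4th patch):

#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* attach one loaded fentry program to several kernel functions,
 * creating one tracing link per target
 */
static int attach_to_all(int prog_fd, const char **funcs, int cnt)
{
	int i, err, btf_obj_fd, btf_type_id, link_fd;

	for (i = 0; i < cnt; i++) {
		LIBBPF_OPTS(bpf_link_create_opts, opts);

		err = libbpf_find_kernel_btf_id(funcs[i], BPF_TRACE_FENTRY,
						&btf_obj_fd, &btf_type_id);
		if (err)
			return err;

		opts.target_btf_id = btf_type_id;
		/* btf_obj_fd is 0 for vmlinux, a module BTF fd otherwise */
		link_fd = bpf_link_create(prog_fd, btf_obj_fd,
					  BPF_TRACE_FENTRY, &opts);
		if (link_fd < 0)
			return link_fd;
		/* real code would keep link_fd open for the tracing lifetime */
	}
	return 0;
}

Each bpf_link_create() call creates an independent tracing link, so a single loaded program backs every attachment.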
In this commit, we add the 'accessed_args' field to struct bpf_prog_aux, which is used to record the indexes of the function args accessed in btf_ctx_access().
Meanwhile, we add the function btf_check_func_part_match() to compare the accessed function args of two function prototypes. This function will be used in the following commit.
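The field is a plain bitmap over argument indexes, so a later compatibility check only has to look at the args the program actually touched. A standalone illustration (not part of the patch, function and struct names made up):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t accessed_args = 0;

	/* suppose btf_ctx_access() saw the program read arg 0 and arg 2
	 * of a traced function like: int foo(int a, struct bar b, long c)
	 */
	accessed_args |= 1ULL << 0;
	accessed_args |= 1ULL << 2;

	/* only args 0 and 2 need to match when re-attaching elsewhere */
	printf("accessed_args = %#llx\n", (unsigned long long)accessed_args);
	return 0;
}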
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com --- include/linux/bpf.h | 4 ++ kernel/bpf/btf.c | 121 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 125 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index c7aa99b44dbd..0225b8dbdd9d 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1464,6 +1464,7 @@ struct bpf_prog_aux { const struct btf_type *attach_func_proto; /* function name for valid attach_btf_id */ const char *attach_func_name; + u64 accessed_args; struct bpf_prog **func; void *jit_data; /* JIT specific data. arch dependent */ struct bpf_jit_poke_descriptor *poke_tab; @@ -2566,6 +2567,9 @@ struct bpf_reg_state; int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog); int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *prog, struct btf *btf, const struct btf_type *t); +int btf_check_func_part_match(struct btf *btf1, const struct btf_type *t1, + struct btf *btf2, const struct btf_type *t2, + u64 func_args); const char *btf_find_decl_tag_value(const struct btf *btf, const struct btf_type *pt, int comp_idx, const char *tag_key); int btf_find_next_decl_tag(const struct btf *btf, const struct btf_type *pt, diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 6ff0bd1a91d5..3a6931402fe4 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6203,6 +6203,9 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type, /* skip first 'void *__data' argument in btf_trace_##name typedef */ args++; nr_args--; + prog->aux->accessed_args |= (1 << (arg + 1)); + } else { + prog->aux->accessed_args |= (1 << arg); }
if (arg > nr_args) { @@ -7010,6 +7013,124 @@ int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *pr return btf_check_func_type_match(log, btf1, t1, btf2, t2); }
+static u32 get_ctx_arg_total_size(struct btf *btf, const struct btf_type *t) +{ + const struct btf_param *args; + u32 size = 0, nr_args; + int i; + + nr_args = btf_type_vlen(t); + args = (const struct btf_param *)(t + 1); + for (i = 0; i < nr_args; i++) { + t = btf_type_skip_modifiers(btf, args[i].type, NULL); + size += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8); + } + + return size; +} + +static int get_ctx_arg_idx_aligned(struct btf *btf, const struct btf_type *t, + int off) +{ + const struct btf_param *args; + u32 offset = 0, nr_args; + int i; + + nr_args = btf_type_vlen(t); + args = (const struct btf_param *)(t + 1); + for (i = 0; i < nr_args; i++) { + if (offset == off) + return i; + + t = btf_type_skip_modifiers(btf, args[i].type, NULL); + offset += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8); + if (offset > off) + return -1; + } + return -1; +} + +/* This function is similar to btf_check_func_type_match(), except that it + * only compare some function args of the function prototype t1 and t2. + */ +int btf_check_func_part_match(struct btf *btf1, const struct btf_type *func1, + struct btf *btf2, const struct btf_type *func2, + u64 func_args) +{ + const struct btf_param *args1, *args2; + u32 nargs1, i, offset = 0; + const char *s1, *s2; + + if (!btf_type_is_func_proto(func1) || !btf_type_is_func_proto(func2)) + return -EINVAL; + + args1 = (const struct btf_param *)(func1 + 1); + args2 = (const struct btf_param *)(func2 + 1); + nargs1 = btf_type_vlen(func1); + + for (i = 0; i <= nargs1; i++) { + const struct btf_type *t1, *t2; + + if (!(func_args & (1 << i))) + goto next; + + if (i < nargs1) { + int t2_index; + + /* get the index of the arg corresponding to args1[i] + * by the offset. + */ + t2_index = get_ctx_arg_idx_aligned(btf2, func2, + offset); + if (t2_index < 0) + return -EINVAL; + + t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL); + t2 = btf_type_skip_modifiers(btf2, args2[t2_index].type, + NULL); + } else { + /* i == nargs1, this is the index of return value of t1 */ + if (get_ctx_arg_total_size(btf1, func1) != + get_ctx_arg_total_size(btf2, func2)) + return -EINVAL; + + /* check the return type of t1 and t2 */ + t1 = btf_type_skip_modifiers(btf1, func1->type, NULL); + t2 = btf_type_skip_modifiers(btf2, func2->type, NULL); + } + + if (t1->info != t2->info || + (btf_type_has_size(t1) && t1->size != t2->size)) + return -EINVAL; + if (btf_type_is_int(t1) || btf_is_any_enum(t1)) + goto next; + + if (btf_type_is_struct(t1)) + goto on_struct; + + if (!btf_type_is_ptr(t1)) + return -EINVAL; + + t1 = btf_type_skip_modifiers(btf1, t1->type, NULL); + t2 = btf_type_skip_modifiers(btf2, t2->type, NULL); + if (!btf_type_is_struct(t1) || !btf_type_is_struct(t2)) + return -EINVAL; + +on_struct: + s1 = btf_name_by_offset(btf1, t1->name_off); + s2 = btf_name_by_offset(btf2, t2->name_off); + if (strcmp(s1, s2)) + return -EINVAL; +next: + if (i < nargs1) { + t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL); + offset += btf_type_is_ptr(t1) ? 8 : roundup(t1->size, 8); + } + } + + return 0; +} + static bool btf_is_dynptr_ptr(const struct btf *btf, const struct btf_type *t) { const char *name;
On Tue, Feb 20, 2024 at 11:51:01AM +0800, Menglong Dong wrote:
SNIP
+static int get_ctx_arg_idx_aligned(struct btf *btf, const struct btf_type *t,
int off)
+{
- const struct btf_param *args;
- u32 offset = 0, nr_args;
- int i;
- nr_args = btf_type_vlen(t);
- args = (const struct btf_param *)(t + 1);
- for (i = 0; i < nr_args; i++) {
if (offset == off)
return i;
t = btf_type_skip_modifiers(btf, args[i].type, NULL);
offset += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8);
if (offset > off)
return -1;
- }
- return -1;
+}
+/* This function is similar to btf_check_func_type_match(), except that it
+ * only compare some function args of the function prototype t1 and t2.
+ */
could we reuse btf_check_func_type_match() instead? perhaps just add an extra argument with the arguments bitmap to it?
jirka
+int btf_check_func_part_match(struct btf *btf1, const struct btf_type *func1,
struct btf *btf2, const struct btf_type *func2,
u64 func_args)
+{
- const struct btf_param *args1, *args2;
- u32 nargs1, i, offset = 0;
- const char *s1, *s2;
- if (!btf_type_is_func_proto(func1) || !btf_type_is_func_proto(func2))
return -EINVAL;
- args1 = (const struct btf_param *)(func1 + 1);
- args2 = (const struct btf_param *)(func2 + 1);
- nargs1 = btf_type_vlen(func1);
- for (i = 0; i <= nargs1; i++) {
const struct btf_type *t1, *t2;
if (!(func_args & (1 << i)))
goto next;
if (i < nargs1) {
int t2_index;
/* get the index of the arg corresponding to args1[i]
* by the offset.
*/
t2_index = get_ctx_arg_idx_aligned(btf2, func2,
offset);
if (t2_index < 0)
return -EINVAL;
t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL);
t2 = btf_type_skip_modifiers(btf2, args2[t2_index].type,
NULL);
} else {
/* i == nargs1, this is the index of return value of t1 */
if (get_ctx_arg_total_size(btf1, func1) !=
get_ctx_arg_total_size(btf2, func2))
return -EINVAL;
/* check the return type of t1 and t2 */
t1 = btf_type_skip_modifiers(btf1, func1->type, NULL);
t2 = btf_type_skip_modifiers(btf2, func2->type, NULL);
}
if (t1->info != t2->info ||
(btf_type_has_size(t1) && t1->size != t2->size))
return -EINVAL;
if (btf_type_is_int(t1) || btf_is_any_enum(t1))
goto next;
if (btf_type_is_struct(t1))
goto on_struct;
if (!btf_type_is_ptr(t1))
return -EINVAL;
t1 = btf_type_skip_modifiers(btf1, t1->type, NULL);
t2 = btf_type_skip_modifiers(btf2, t2->type, NULL);
if (!btf_type_is_struct(t1) || !btf_type_is_struct(t2))
return -EINVAL;
+on_struct:
s1 = btf_name_by_offset(btf1, t1->name_off);
s2 = btf_name_by_offset(btf2, t2->name_off);
if (strcmp(s1, s2))
return -EINVAL;
+next:
if (i < nargs1) {
t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL);
offset += btf_type_is_ptr(t1) ? 8 : roundup(t1->size, 8);
}
- }
- return 0;
+}
static bool btf_is_dynptr_ptr(const struct btf *btf, const struct btf_type *t) { const char *name; -- 2.39.2
On Wed, Feb 21, 2024 at 1:18 AM Jiri Olsa olsajiri@gmail.com wrote:
On Tue, Feb 20, 2024 at 11:51:01AM +0800, Menglong Dong wrote:
SNIP
+static int get_ctx_arg_idx_aligned(struct btf *btf, const struct btf_type *t,
int off)
+{
const struct btf_param *args;
u32 offset = 0, nr_args;
int i;
nr_args = btf_type_vlen(t);
args = (const struct btf_param *)(t + 1);
for (i = 0; i < nr_args; i++) {
if (offset == off)
return i;
t = btf_type_skip_modifiers(btf, args[i].type, NULL);
offset += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8);
if (offset > off)
return -1;
}
return -1;
+}
+/* This function is similar to btf_check_func_type_match(), except that it
+ * only compare some function args of the function prototype t1 and t2.
+ */
could we reuse btf_check_func_type_match instead? perhaps just adding extra argument with arguments bitmap to it?
This is a little difficult, as the way we check the consistency of t1 and t2 differs between the two.
In btf_check_func_type_match(), we check the args of t1 and t2 by index, but in btf_check_func_part_match() we check them by offset; see the sketch below. Reusing it could make btf_check_func_type_match() complex and hard to understand.
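A standalone sketch of the difference (made-up prototypes, argument sizes rounded to the 8-byte slots the trampoline uses):

#include <stdio.h>

/* map a ctx offset to an argument index the way
 * get_ctx_arg_idx_aligned() does, for two made-up prototypes
 */
static int arg_idx_for_offset(const int *sizes, int nargs, int off)
{
	int i, offset = 0;

	for (i = 0; i < nargs; i++) {
		if (offset == off)
			return i;
		offset += sizes[i];
		if (offset > off)
			return -1;	/* off points into the middle of an arg */
	}
	return -1;
}

int main(void)
{
	/* int orig(struct { long a; long b; } s, int c): 16 + 8 bytes */
	int orig_sizes[] = { 16, 8 };
	/* int target(long a, long b, int c): 8 + 8 + 8 bytes */
	int target_sizes[] = { 8, 8, 8 };

	/* the same ctx offset maps to different argument indexes, which
	 * is why the partial match walks by offset, not by index
	 */
	printf("orig:   offset 16 -> arg %d\n",
	       arg_idx_for_offset(orig_sizes, 2, 16));	/* arg 1 */
	printf("target: offset 16 -> arg %d\n",
	       arg_idx_for_offset(target_sizes, 3, 16));	/* arg 2 */
	return 0;
}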
Anyway, let me have a try to see if it works to reuse btf_check_func_type_match().
Thanks! Menglong Dong
jirka
+int btf_check_func_part_match(struct btf *btf1, const struct btf_type *func1,
struct btf *btf2, const struct btf_type *func2,
u64 func_args)
+{
const struct btf_param *args1, *args2;
u32 nargs1, i, offset = 0;
const char *s1, *s2;
if (!btf_type_is_func_proto(func1) || !btf_type_is_func_proto(func2))
return -EINVAL;
args1 = (const struct btf_param *)(func1 + 1);
args2 = (const struct btf_param *)(func2 + 1);
nargs1 = btf_type_vlen(func1);
for (i = 0; i <= nargs1; i++) {
const struct btf_type *t1, *t2;
if (!(func_args & (1 << i)))
goto next;
if (i < nargs1) {
int t2_index;
/* get the index of the arg corresponding to args1[i]
* by the offset.
*/
t2_index = get_ctx_arg_idx_aligned(btf2, func2,
offset);
if (t2_index < 0)
return -EINVAL;
t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL);
t2 = btf_type_skip_modifiers(btf2, args2[t2_index].type,
NULL);
} else {
/* i == nargs1, this is the index of return value of t1 */
if (get_ctx_arg_total_size(btf1, func1) !=
get_ctx_arg_total_size(btf2, func2))
return -EINVAL;
/* check the return type of t1 and t2 */
t1 = btf_type_skip_modifiers(btf1, func1->type, NULL);
t2 = btf_type_skip_modifiers(btf2, func2->type, NULL);
}
if (t1->info != t2->info ||
(btf_type_has_size(t1) && t1->size != t2->size))
return -EINVAL;
if (btf_type_is_int(t1) || btf_is_any_enum(t1))
goto next;
if (btf_type_is_struct(t1))
goto on_struct;
if (!btf_type_is_ptr(t1))
return -EINVAL;
t1 = btf_type_skip_modifiers(btf1, t1->type, NULL);
t2 = btf_type_skip_modifiers(btf2, t2->type, NULL);
if (!btf_type_is_struct(t1) || !btf_type_is_struct(t2))
return -EINVAL;
+on_struct:
s1 = btf_name_by_offset(btf1, t1->name_off);
s2 = btf_name_by_offset(btf2, t2->name_off);
if (strcmp(s1, s2))
return -EINVAL;
+next:
if (i < nargs1) {
t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL);
offset += btf_type_is_ptr(t1) ? 8 : roundup(t1->size, 8);
}
}
return 0;
+}
static bool btf_is_dynptr_ptr(const struct btf *btf, const struct btf_type *t) { const char *name; -- 2.39.2
On 2/19/24 19:51, Menglong Dong wrote:
In this commit, we add the 'accessed_args' field to struct bpf_prog_aux, which is used to record the accessed index of the function args in btf_ctx_access().
Meanwhile, we add the function btf_check_func_part_match() to compare the accessed function args of two function prototype. This function will be used in the following commit.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
include/linux/bpf.h | 4 ++ kernel/bpf/btf.c | 121 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 125 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index c7aa99b44dbd..0225b8dbdd9d 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1464,6 +1464,7 @@ struct bpf_prog_aux { const struct btf_type *attach_func_proto; /* function name for valid attach_btf_id */ const char *attach_func_name;
- u64 accessed_args; struct bpf_prog **func; void *jit_data; /* JIT specific data. arch dependent */ struct bpf_jit_poke_descriptor *poke_tab;
@@ -2566,6 +2567,9 @@ struct bpf_reg_state; int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog); int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *prog, struct btf *btf, const struct btf_type *t); +int btf_check_func_part_match(struct btf *btf1, const struct btf_type *t1,
struct btf *btf2, const struct btf_type *t2,
const char *btf_find_decl_tag_value(const struct btf *btf, const struct btf_type *pt, int comp_idx, const char *tag_key); int btf_find_next_decl_tag(const struct btf *btf, const struct btf_type *pt,u64 func_args);
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 6ff0bd1a91d5..3a6931402fe4 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6203,6 +6203,9 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type, /* skip first 'void *__data' argument in btf_trace_##name typedef */ args++; nr_args--;
prog->aux->accessed_args |= (1 << (arg + 1));
- } else {
}prog->aux->accessed_args |= (1 << arg);
if (arg > nr_args) { @@ -7010,6 +7013,124 @@ int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *pr return btf_check_func_type_match(log, btf1, t1, btf2, t2); } +static u32 get_ctx_arg_total_size(struct btf *btf, const struct btf_type *t) +{
- const struct btf_param *args;
- u32 size = 0, nr_args;
- int i;
- nr_args = btf_type_vlen(t);
- args = (const struct btf_param *)(t + 1);
- for (i = 0; i < nr_args; i++) {
t = btf_type_skip_modifiers(btf, args[i].type, NULL);
size += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8);
- }
- return size;
+}
+static int get_ctx_arg_idx_aligned(struct btf *btf, const struct btf_type *t,
int off)
+{
- const struct btf_param *args;
- u32 offset = 0, nr_args;
- int i;
- nr_args = btf_type_vlen(t);
- args = (const struct btf_param *)(t + 1);
- for (i = 0; i < nr_args; i++) {
if (offset == off)
return i;
t = btf_type_skip_modifiers(btf, args[i].type, NULL);
offset += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8);
if (offset > off)
return -1;
- }
- return -1;
+}
This one is very similar to get_ctx_arg_idx(). How about refactoring get_ctx_arg_idx() and sharing the code between get_ctx_arg_idx() and get_ctx_arg_idx_aligned()?
For example,
-static u32 get_ctx_arg_idx(struct btf *btf, const struct btf_type *func_proto,
-			   int off)
+static u32 _get_ctx_arg_idx(struct btf *btf, const struct btf_type *func_proto,
+			    int off, u32 *arg_off)
 {
 	const struct btf_param *args;
 	const struct btf_type *t;
 	u32 offset = 0, nr_args;
 	int i;

 	if (!func_proto)
 		return off / 8;

 	nr_args = btf_type_vlen(func_proto);
 	args = (const struct btf_param *)(func_proto + 1);
 	for (i = 0; i < nr_args; i++) {
+		if (arg_off)
+			*arg_off = offset;
 		t = btf_type_skip_modifiers(btf, args[i].type, NULL);
 		offset += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8);
 		if (off < offset)
 			return i;
 	}

+	if (arg_off)
+		*arg_off = offset;
 	t = btf_type_skip_modifiers(btf, func_proto->type, NULL);
 	offset += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8);
 	if (off < offset)
 		return nr_args;

 	return nr_args + 1;
 }

+static u32 get_ctx_arg_idx(struct btf *btf, const struct btf_type *func_proto,
+			   int off)
+{
+	return _get_ctx_arg_idx(btf, func_proto, off, NULL);
+}
+
+static u32 get_ctx_arg_idx_aligned(struct btf *btf,
+				   const struct btf_type *func_proto,
+				   int off)
+{
+	u32 arg_off;
+	u32 arg_idx = _get_ctx_arg_idx(btf, func_proto, off, &arg_off);
+	return arg_off == off ? arg_idx : -1;
+}
+
+/* This function is similar to btf_check_func_type_match(), except that it
+ * only compare some function args of the function prototype t1 and t2.
+ */
+int btf_check_func_part_match(struct btf *btf1, const struct btf_type *func1,
struct btf *btf2, const struct btf_type *func2,
u64 func_args)
+{
- const struct btf_param *args1, *args2;
- u32 nargs1, i, offset = 0;
- const char *s1, *s2;
- if (!btf_type_is_func_proto(func1) || !btf_type_is_func_proto(func2))
return -EINVAL;
- args1 = (const struct btf_param *)(func1 + 1);
- args2 = (const struct btf_param *)(func2 + 1);
- nargs1 = btf_type_vlen(func1);
- for (i = 0; i <= nargs1; i++) {
const struct btf_type *t1, *t2;
if (!(func_args & (1 << i)))
goto next;
if (i < nargs1) {
int t2_index;
/* get the index of the arg corresponding to args1[i]
* by the offset.
*/
t2_index = get_ctx_arg_idx_aligned(btf2, func2,
offset);
if (t2_index < 0)
return -EINVAL;
t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL);
t2 = btf_type_skip_modifiers(btf2, args2[t2_index].type,
NULL);
} else {
/* i == nargs1, this is the index of return value of t1 */
if (get_ctx_arg_total_size(btf1, func1) !=
get_ctx_arg_total_size(btf2, func2))
return -EINVAL;
/* check the return type of t1 and t2 */
t1 = btf_type_skip_modifiers(btf1, func1->type, NULL);
t2 = btf_type_skip_modifiers(btf2, func2->type, NULL);
}
if (t1->info != t2->info ||
(btf_type_has_size(t1) && t1->size != t2->size))
return -EINVAL;
if (btf_type_is_int(t1) || btf_is_any_enum(t1))
goto next;
if (btf_type_is_struct(t1))
goto on_struct;
if (!btf_type_is_ptr(t1))
return -EINVAL;
t1 = btf_type_skip_modifiers(btf1, t1->type, NULL);
t2 = btf_type_skip_modifiers(btf2, t2->type, NULL);
if (!btf_type_is_struct(t1) || !btf_type_is_struct(t2))
return -EINVAL;
+on_struct:
s1 = btf_name_by_offset(btf1, t1->name_off);
s2 = btf_name_by_offset(btf2, t2->name_off);
if (strcmp(s1, s2))
return -EINVAL;
+next:
if (i < nargs1) {
t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL);
offset += btf_type_is_ptr(t1) ? 8 : roundup(t1->size, 8);
}
- }
- return 0;
+}
- static bool btf_is_dynptr_ptr(const struct btf *btf, const struct btf_type *t) { const char *name;
On Wed, Feb 21, 2024 at 2:22 AM Kui-Feng Lee sinquersw@gmail.com wrote:
On 2/19/24 19:51, Menglong Dong wrote:
In this commit, we add the 'accessed_args' field to struct bpf_prog_aux, which is used to record the accessed index of the function args in btf_ctx_access().
Meanwhile, we add the function btf_check_func_part_match() to compare the accessed function args of two function prototype. This function will be used in the following commit.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com
include/linux/bpf.h | 4 ++ kernel/bpf/btf.c | 121 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 125 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index c7aa99b44dbd..0225b8dbdd9d 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1464,6 +1464,7 @@ struct bpf_prog_aux { const struct btf_type *attach_func_proto; /* function name for valid attach_btf_id */ const char *attach_func_name;
u64 accessed_args; struct bpf_prog **func; void *jit_data; /* JIT specific data. arch dependent */ struct bpf_jit_poke_descriptor *poke_tab;
@@ -2566,6 +2567,9 @@ struct bpf_reg_state; int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog); int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *prog, struct btf *btf, const struct btf_type *t); +int btf_check_func_part_match(struct btf *btf1, const struct btf_type *t1,
struct btf *btf2, const struct btf_type *t2,
const char *btf_find_decl_tag_value(const struct btf *btf, const struct btf_type *pt, int comp_idx, const char *tag_key); int btf_find_next_decl_tag(const struct btf *btf, const struct btf_type *pt,u64 func_args);
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 6ff0bd1a91d5..3a6931402fe4 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6203,6 +6203,9 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type, /* skip first 'void *__data' argument in btf_trace_##name typedef */ args++; nr_args--;
prog->aux->accessed_args |= (1 << (arg + 1));
} else {
prog->aux->accessed_args |= (1 << arg); } if (arg > nr_args) {
@@ -7010,6 +7013,124 @@ int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *pr return btf_check_func_type_match(log, btf1, t1, btf2, t2); }
+static u32 get_ctx_arg_total_size(struct btf *btf, const struct btf_type *t) +{
const struct btf_param *args;
u32 size = 0, nr_args;
int i;
nr_args = btf_type_vlen(t);
args = (const struct btf_param *)(t + 1);
for (i = 0; i < nr_args; i++) {
t = btf_type_skip_modifiers(btf, args[i].type, NULL);
size += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8);
}
return size;
+}
+static int get_ctx_arg_idx_aligned(struct btf *btf, const struct btf_type *t,
int off)
+{
const struct btf_param *args;
u32 offset = 0, nr_args;
int i;
nr_args = btf_type_vlen(t);
args = (const struct btf_param *)(t + 1);
for (i = 0; i < nr_args; i++) {
if (offset == off)
return i;
t = btf_type_skip_modifiers(btf, args[i].type, NULL);
offset += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8);
if (offset > off)
return -1;
}
return -1;
+}
This one is very similar to get_ctx_arg_idx(). How about to refactor get_ctx_arg_idx() and share the code between get_ctx_arg_idx() and get_ctx_arg_idx_aligned().
This seems to work, I'll combine them in the next version.
Thanks for the example code! Menglong Dong
For example,
-static u32 get_ctx_arg_idx(struct btf *btf, const struct btf_type *func_proto,
int off)
+static u32 _get_ctx_arg_idx(struct btf *btf, const struct btf_type *func_proto,
int off, u32 *arg_off)
{ const struct btf_param *args; const struct btf_type *t; u32 offset = 0, nr_args; int i;
if (!func_proto) return off / 8; nr_args = btf_type_vlen(func_proto); args = (const struct btf_param *)(func_proto + 1); for (i = 0; i < nr_args; i++) {
if (arg_off)
*arg_off = offset; t = btf_type_skip_modifiers(btf, args[i].type, NULL); offset += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8); if (off < offset) return i; }
if (arg_off)
*arg_off = offset; t = btf_type_skip_modifiers(btf, func_proto->type, NULL); offset += btf_type_is_ptr(t) ? 8 : roundup(t->size, 8); if (off < offset) return nr_args; return nr_args + 1;
}
+static u32 get_ctx_arg_idx(struct btf *btf, const struct btf_type *func_proto,
int off)
+{
return _get_ctx_arg_idx(btf, func_proto, off, NULL);
+}
+static u32 get_ctx_arg_idx_aligned(struct btf *btf,
const struct btf_type *func_proto,
int off)
+{
u32 arg_off;
u32 arg_idx = _get_ctx_arg_idx(btf, func_proto, off, &arg_off);
return arg_off == off ? arg_idx : -1;
+}
+/* This function is similar to btf_check_func_type_match(), except that it
+ * only compare some function args of the function prototype t1 and t2.
+ */
+int btf_check_func_part_match(struct btf *btf1, const struct btf_type *func1,
struct btf *btf2, const struct btf_type *func2,
u64 func_args)
+{
const struct btf_param *args1, *args2;
u32 nargs1, i, offset = 0;
const char *s1, *s2;
if (!btf_type_is_func_proto(func1) || !btf_type_is_func_proto(func2))
return -EINVAL;
args1 = (const struct btf_param *)(func1 + 1);
args2 = (const struct btf_param *)(func2 + 1);
nargs1 = btf_type_vlen(func1);
for (i = 0; i <= nargs1; i++) {
const struct btf_type *t1, *t2;
if (!(func_args & (1 << i)))
goto next;
if (i < nargs1) {
int t2_index;
/* get the index of the arg corresponding to args1[i]
* by the offset.
*/
t2_index = get_ctx_arg_idx_aligned(btf2, func2,
offset);
if (t2_index < 0)
return -EINVAL;
t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL);
t2 = btf_type_skip_modifiers(btf2, args2[t2_index].type,
NULL);
} else {
/* i == nargs1, this is the index of return value of t1 */
if (get_ctx_arg_total_size(btf1, func1) !=
get_ctx_arg_total_size(btf2, func2))
return -EINVAL;
/* check the return type of t1 and t2 */
t1 = btf_type_skip_modifiers(btf1, func1->type, NULL);
t2 = btf_type_skip_modifiers(btf2, func2->type, NULL);
}
if (t1->info != t2->info ||
(btf_type_has_size(t1) && t1->size != t2->size))
return -EINVAL;
if (btf_type_is_int(t1) || btf_is_any_enum(t1))
goto next;
if (btf_type_is_struct(t1))
goto on_struct;
if (!btf_type_is_ptr(t1))
return -EINVAL;
t1 = btf_type_skip_modifiers(btf1, t1->type, NULL);
t2 = btf_type_skip_modifiers(btf2, t2->type, NULL);
if (!btf_type_is_struct(t1) || !btf_type_is_struct(t2))
return -EINVAL;
+on_struct:
s1 = btf_name_by_offset(btf1, t1->name_off);
s2 = btf_name_by_offset(btf2, t2->name_off);
if (strcmp(s1, s2))
return -EINVAL;
+next:
if (i < nargs1) {
t1 = btf_type_skip_modifiers(btf1, args1[i].type, NULL);
offset += btf_type_is_ptr(t1) ? 8 : roundup(t1->size, 8);
}
}
return 0;
+}
- static bool btf_is_dynptr_ptr(const struct btf *btf, const struct btf_type *t) { const char *name;
In this commit, we add support for attaching a tracing BPF program to multiple hooks.
In commit 4a1e7c0c63e0 ("bpf: Support attaching freplace programs to multiple attach points"), freplace BPF programs gained support for attaching to multiple attach points. In this commit, we extend that to fentry/fexit/raw_tp/...
The use case is obvious: for now, we have to create one BPF program for each kernel function that we want to trace, even though all the programs have the same (or similar) logic. This consumes extra memory and makes program loading slow when there are plenty of kernel functions to trace. KPROBE_MULTI may be an alternative, but it can't do what TRACING does. For example, a kretprobe can't obtain the function args, but FEXIT can.
Now we need to hold references to the target btf and the kernel module in the bpf link, as a program can have multiple targets. Therefore, we introduce the attach_btf and mod fields to struct bpf_tracing_link. During attach, we check that the target is compatible with the program, which means the function args that the program accesses must be the same in the new target's function prototype as in the original target's.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com --- include/linux/bpf.h | 2 + include/uapi/linux/bpf.h | 1 + kernel/bpf/syscall.c | 117 +++++++++++++++++++++++++++++++-------- 3 files changed, 98 insertions(+), 22 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 0225b8dbdd9d..cf8f2df9afb9 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1606,6 +1606,8 @@ struct bpf_tracing_link { enum bpf_attach_type attach_type; struct bpf_trampoline *trampoline; struct bpf_prog *tgt_prog; + struct btf *attach_btf; + struct module *mod; };
struct bpf_link_primer { diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index d96708380e52..0ded10a85bfe 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1668,6 +1668,7 @@ union bpf_attr { union { __u32 target_fd; /* target object to attach to or ... */ __u32 target_ifindex; /* target ifindex */ + __u32 target_btf_obj_fd; }; __u32 attach_type; /* attach type */ __u32 flags; /* extra flags */ diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index b2750b79ac80..3b432fcd5bdb 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -3178,6 +3178,9 @@ static void bpf_tracing_link_dealloc(struct bpf_link *link) struct bpf_tracing_link *tr_link = container_of(link, struct bpf_tracing_link, link.link);
+ + btf_put(tr_link->attach_btf); + module_put(tr_link->mod); kfree(tr_link); }
@@ -3220,6 +3223,35 @@ static const struct bpf_link_ops bpf_tracing_link_lops = { .fill_link_info = bpf_tracing_link_fill_link_info, };
+static int bpf_tracing_check_multi(struct bpf_prog *prog, + struct bpf_prog *tgt_prog, + struct btf *btf2, + const struct btf_type *t2) +{ + struct btf *btf1 = prog->aux->attach_btf; + const struct btf_type *t1; + + /* this case is already valided in bpf_check_attach_target() */ + if (prog->type == BPF_PROG_TYPE_EXT) + return 0; + + /* For now, noly support multi attach for kernel function attach + * point. + */ + if (!btf1) + return -EOPNOTSUPP; + + btf2 = btf2 ?: tgt_prog->aux->btf; + t1 = prog->aux->attach_func_proto; + + /* the target is the same as the origin one, this is a re-attach */ + if (t1 == t2) + return 0; + + return btf_check_func_part_match(btf1, t1, btf2, t2, + prog->aux->accessed_args); +} + static int bpf_tracing_prog_attach(struct bpf_prog *prog, int tgt_prog_fd, u32 btf_id, @@ -3228,7 +3260,9 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, struct bpf_link_primer link_primer; struct bpf_prog *tgt_prog = NULL; struct bpf_trampoline *tr = NULL; + struct btf *attach_btf = NULL; struct bpf_tracing_link *link; + struct module *mod = NULL; u64 key = 0; int err;
@@ -3258,31 +3292,50 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, goto out_put_prog; }
- if (!!tgt_prog_fd != !!btf_id) { - err = -EINVAL; - goto out_put_prog; - } - if (tgt_prog_fd) { - /* - * For now we only allow new targets for BPF_PROG_TYPE_EXT. If this - * part would be changed to implement the same for - * BPF_PROG_TYPE_TRACING, do not forget to update the way how - * attach_tracing_prog flag is set. - */ - if (prog->type != BPF_PROG_TYPE_EXT) { + if (!btf_id) { err = -EINVAL; goto out_put_prog; } - tgt_prog = bpf_prog_get(tgt_prog_fd); if (IS_ERR(tgt_prog)) { - err = PTR_ERR(tgt_prog); tgt_prog = NULL; - goto out_put_prog; + /* tgt_prog_fd is the fd of the kernel module BTF */ + attach_btf = btf_get_by_fd(tgt_prog_fd); + if (IS_ERR(attach_btf)) { + attach_btf = NULL; + err = -EINVAL; + goto out_put_prog; + } + if (!btf_is_kernel(attach_btf)) { + btf_put(attach_btf); + err = -EOPNOTSUPP; + goto out_put_prog; + } + } else if (prog->type == BPF_PROG_TYPE_TRACING && + tgt_prog->type == BPF_PROG_TYPE_TRACING) { + prog->aux->attach_tracing_prog = true; } - - key = bpf_trampoline_compute_key(tgt_prog, NULL, btf_id); + key = bpf_trampoline_compute_key(tgt_prog, attach_btf, + btf_id); + } else if (btf_id) { + attach_btf = bpf_get_btf_vmlinux(); + if (IS_ERR(attach_btf)) { + attach_btf = NULL; + err = PTR_ERR(attach_btf); + goto out_unlock; + } + if (!attach_btf) { + err = -EINVAL; + goto out_unlock; + } + btf_get(attach_btf); + key = bpf_trampoline_compute_key(NULL, attach_btf, btf_id); + } else { + attach_btf = prog->aux->attach_btf; + /* get the reference of the btf for bpf link */ + if (attach_btf) + btf_get(attach_btf); }
link = kzalloc(sizeof(*link), GFP_USER); @@ -3319,7 +3372,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, * are NULL, then program was already attached and user did not provide * tgt_prog_fd so we have no way to find out or create trampoline */ - if (!prog->aux->dst_trampoline && !tgt_prog) { + if (!prog->aux->dst_trampoline && !tgt_prog && !btf_id) { /* * Allow re-attach for TRACING and LSM programs. If it's * currently linked, bpf_trampoline_link_prog will fail. @@ -3346,17 +3399,27 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, * different from the destination specified at load time, we * need a new trampoline and a check for compatibility */ + struct btf *origin_btf = prog->aux->attach_btf; struct bpf_attach_target_info tgt_info = {};
+ /* use the new attach_btf to check the target */ + prog->aux->attach_btf = attach_btf; err = bpf_check_attach_target(NULL, prog, tgt_prog, btf_id, &tgt_info); + prog->aux->attach_btf = origin_btf; if (err) goto out_unlock;
- if (tgt_info.tgt_mod) { - module_put(prog->aux->mod); - prog->aux->mod = tgt_info.tgt_mod; - } + mod = tgt_info.tgt_mod; + /* the new target and the previous target are in the same + * module, release the reference once. + */ + if (mod && mod == prog->aux->mod) + module_put(mod); + err = bpf_tracing_check_multi(prog, tgt_prog, attach_btf, + tgt_info.tgt_type); + if (err) + goto out_unlock;
tr = bpf_trampoline_get(key, &tgt_info); if (!tr) { @@ -3373,6 +3436,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, */ tr = prog->aux->dst_trampoline; tgt_prog = prog->aux->dst_prog; + mod = prog->aux->mod; }
err = bpf_link_prime(&link->link.link, &link_primer); @@ -3388,6 +3452,8 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
link->tgt_prog = tgt_prog; link->trampoline = tr; + link->attach_btf = attach_btf; + link->mod = mod;
/* Always clear the trampoline and target prog from prog->aux to make * sure the original attach destination is not kept alive after a @@ -3400,20 +3466,27 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, if (prog->aux->dst_trampoline && tr != prog->aux->dst_trampoline) /* we allocated a new trampoline, so free the old one */ bpf_trampoline_put(prog->aux->dst_trampoline); + if (prog->aux->mod && mod != prog->aux->mod) + /* the mod in prog is not used anywhere, move it to link */ + module_put(prog->aux->mod);
prog->aux->dst_prog = NULL; prog->aux->dst_trampoline = NULL; + prog->aux->mod = NULL; mutex_unlock(&prog->aux->dst_mutex);
return bpf_link_settle(&link_primer); out_unlock: if (tr && tr != prog->aux->dst_trampoline) bpf_trampoline_put(tr); + if (mod && mod != prog->aux->mod) + module_put(mod); mutex_unlock(&prog->aux->dst_mutex); kfree(link); out_put_prog: if (tgt_prog_fd && tgt_prog) bpf_prog_put(tgt_prog); + btf_put(attach_btf); return err; }
On Tue, Feb 20, 2024 at 11:51:02AM +0800, Menglong Dong wrote:
SNIP
@@ -3228,7 +3260,9 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, struct bpf_link_primer link_primer; struct bpf_prog *tgt_prog = NULL; struct bpf_trampoline *tr = NULL;
- struct btf *attach_btf = NULL; struct bpf_tracing_link *link;
- struct module *mod = NULL; u64 key = 0; int err;
@@ -3258,31 +3292,50 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, goto out_put_prog; }
- if (!!tgt_prog_fd != !!btf_id) {
err = -EINVAL;
goto out_put_prog;
- }
- if (tgt_prog_fd) {
/*
* For now we only allow new targets for BPF_PROG_TYPE_EXT. If this
* part would be changed to implement the same for
* BPF_PROG_TYPE_TRACING, do not forget to update the way how
* attach_tracing_prog flag is set.
*/
if (prog->type != BPF_PROG_TYPE_EXT) {
}if (!btf_id) { err = -EINVAL; goto out_put_prog;
- tgt_prog = bpf_prog_get(tgt_prog_fd); if (IS_ERR(tgt_prog)) {
err = PTR_ERR(tgt_prog); tgt_prog = NULL;
goto out_put_prog;
/* tgt_prog_fd is the fd of the kernel module BTF */
attach_btf = btf_get_by_fd(tgt_prog_fd);
I think we should pass the btf_fd through attr instead, e.g. add link_create.tracing_btf_fd; overloading target_fd like this seems confusing
if (IS_ERR(attach_btf)) {
attach_btf = NULL;
err = -EINVAL;
goto out_put_prog;
}
if (!btf_is_kernel(attach_btf)) {
btf_put(attach_btf);
err = -EOPNOTSUPP;
goto out_put_prog;
}
} else if (prog->type == BPF_PROG_TYPE_TRACING &&
tgt_prog->type == BPF_PROG_TYPE_TRACING) {
}prog->aux->attach_tracing_prog = true;
could you please add a comment on why this check is here?
key = bpf_trampoline_compute_key(tgt_prog, NULL, btf_id);
key = bpf_trampoline_compute_key(tgt_prog, attach_btf,
btf_id);
- } else if (btf_id) {
attach_btf = bpf_get_btf_vmlinux();
if (IS_ERR(attach_btf)) {
attach_btf = NULL;
err = PTR_ERR(attach_btf);
goto out_unlock;
}
if (!attach_btf) {
err = -EINVAL;
goto out_unlock;
}
btf_get(attach_btf);
key = bpf_trampoline_compute_key(NULL, attach_btf, btf_id);
- } else {
attach_btf = prog->aux->attach_btf;
/* get the reference of the btf for bpf link */
if (attach_btf)
}btf_get(attach_btf);
link = kzalloc(sizeof(*link), GFP_USER); @@ -3319,7 +3372,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, * are NULL, then program was already attached and user did not provide * tgt_prog_fd so we have no way to find out or create trampoline */
- if (!prog->aux->dst_trampoline && !tgt_prog) {
- if (!prog->aux->dst_trampoline && !tgt_prog && !btf_id) { /*
- Allow re-attach for TRACING and LSM programs. If it's
- currently linked, bpf_trampoline_link_prog will fail.
@@ -3346,17 +3399,27 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, * different from the destination specified at load time, we * need a new trampoline and a check for compatibility */
struct bpf_attach_target_info tgt_info = {};struct btf *origin_btf = prog->aux->attach_btf;
/* use the new attach_btf to check the target */
err = bpf_check_attach_target(NULL, prog, tgt_prog, btf_id, &tgt_info);prog->aux->attach_btf = attach_btf;
prog->aux->attach_btf = origin_btf;
could we pass the attach_btf as an argument then?
jirka
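For reference, one possible shape of that suggestion (hypothetical; the current helper has no such parameter):

/* hypothetical: pass the BTF to validate against explicitly instead of
 * temporarily swapping prog->aux->attach_btf around the call
 */
int bpf_check_attach_target(struct bpf_verifier_log *log,
			    const struct bpf_prog *prog,
			    const struct bpf_prog *tgt_prog,
			    struct btf *attach_btf,
			    u32 btf_id,
			    struct bpf_attach_target_info *tgt_info);

/* ... and the caller in bpf_tracing_prog_attach() would then do:
 *	err = bpf_check_attach_target(NULL, prog, tgt_prog, attach_btf,
 *				      btf_id, &tgt_info);
 */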
if (err) goto out_unlock;
if (tgt_info.tgt_mod) {
module_put(prog->aux->mod);
prog->aux->mod = tgt_info.tgt_mod;
}
mod = tgt_info.tgt_mod;
/* the new target and the previous target are in the same
* module, release the reference once.
*/
if (mod && mod == prog->aux->mod)
module_put(mod);
err = bpf_tracing_check_multi(prog, tgt_prog, attach_btf,
tgt_info.tgt_type);
if (err)
goto out_unlock;
tr = bpf_trampoline_get(key, &tgt_info); if (!tr) { @@ -3373,6 +3436,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, */ tr = prog->aux->dst_trampoline; tgt_prog = prog->aux->dst_prog;
}mod = prog->aux->mod;
err = bpf_link_prime(&link->link.link, &link_primer); @@ -3388,6 +3452,8 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, link->tgt_prog = tgt_prog; link->trampoline = tr;
- link->attach_btf = attach_btf;
- link->mod = mod;
/* Always clear the trampoline and target prog from prog->aux to make * sure the original attach destination is not kept alive after a @@ -3400,20 +3466,27 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, if (prog->aux->dst_trampoline && tr != prog->aux->dst_trampoline) /* we allocated a new trampoline, so free the old one */ bpf_trampoline_put(prog->aux->dst_trampoline);
- if (prog->aux->mod && mod != prog->aux->mod)
/* the mod in prog is not used anywhere, move it to link */
module_put(prog->aux->mod);
prog->aux->dst_prog = NULL; prog->aux->dst_trampoline = NULL;
- prog->aux->mod = NULL; mutex_unlock(&prog->aux->dst_mutex);
return bpf_link_settle(&link_primer); out_unlock: if (tr && tr != prog->aux->dst_trampoline) bpf_trampoline_put(tr);
- if (mod && mod != prog->aux->mod)
mutex_unlock(&prog->aux->dst_mutex); kfree(link);module_put(mod);
out_put_prog: if (tgt_prog_fd && tgt_prog) bpf_prog_put(tgt_prog);
- btf_put(attach_btf); return err;
} -- 2.39.2
Hi Menglong,
kernel test robot noticed the following build warnings:
url: https://github.com/intel-lab-lkp/linux/commits/Menglong-Dong/bpf-tracing-add... base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master patch link: https://lore.kernel.org/r/20240220035105.34626-3-dongmenglong.8%40bytedance.... patch subject: [PATCH bpf-next 2/5] bpf: tracing: support to attach program to multi hooks config: m68k-randconfig-r071-20240220 (https://download.01.org/0day-ci/archive/20240221/202402210534.siGKEfus-lkp@i...) compiler: m68k-linux-gcc (GCC) 13.2.0
If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot lkp@intel.com | Reported-by: Dan Carpenter dan.carpenter@linaro.org | Closes: https://lore.kernel.org/r/202402210534.siGKEfus-lkp@intel.com/
smatch warnings: kernel/bpf/syscall.c:3325 bpf_tracing_prog_attach() warn: passing zero to 'PTR_ERR' kernel/bpf/syscall.c:3485 bpf_tracing_prog_attach() error: uninitialized symbol 'link'.
vim +/PTR_ERR +3325 kernel/bpf/syscall.c
4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3255 static int bpf_tracing_prog_attach(struct bpf_prog *prog, 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3256 int tgt_prog_fd, 2fcc82411e74e5 Kui-Feng Lee 2022-05-10 3257 u32 btf_id, 2fcc82411e74e5 Kui-Feng Lee 2022-05-10 3258 u64 bpf_cookie) fec56f5890d93f Alexei Starovoitov 2019-11-14 3259 { a3b80e1078943d Andrii Nakryiko 2020-04-28 3260 struct bpf_link_primer link_primer; 3aac1ead5eb6b7 Toke Høiland-Jørgensen 2020-09-29 3261 struct bpf_prog *tgt_prog = NULL; 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3262 struct bpf_trampoline *tr = NULL; 5f80eb32851d7a Menglong Dong 2024-02-20 3263 struct btf *attach_btf = NULL; 70ed506c3bbcfa Andrii Nakryiko 2020-03-02 3264 struct bpf_tracing_link *link; 5f80eb32851d7a Menglong Dong 2024-02-20 3265 struct module *mod = NULL; 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3266 u64 key = 0; a3b80e1078943d Andrii Nakryiko 2020-04-28 3267 int err; fec56f5890d93f Alexei Starovoitov 2019-11-14 3268 9e4e01dfd3254c KP Singh 2020-03-29 3269 switch (prog->type) { 9e4e01dfd3254c KP Singh 2020-03-29 3270 case BPF_PROG_TYPE_TRACING: fec56f5890d93f Alexei Starovoitov 2019-11-14 3271 if (prog->expected_attach_type != BPF_TRACE_FENTRY && be8704ff07d237 Alexei Starovoitov 2020-01-20 3272 prog->expected_attach_type != BPF_TRACE_FEXIT && 9e4e01dfd3254c KP Singh 2020-03-29 3273 prog->expected_attach_type != BPF_MODIFY_RETURN) { 9e4e01dfd3254c KP Singh 2020-03-29 3274 err = -EINVAL; 9e4e01dfd3254c KP Singh 2020-03-29 3275 goto out_put_prog; 9e4e01dfd3254c KP Singh 2020-03-29 3276 } 9e4e01dfd3254c KP Singh 2020-03-29 3277 break; 9e4e01dfd3254c KP Singh 2020-03-29 3278 case BPF_PROG_TYPE_EXT: 9e4e01dfd3254c KP Singh 2020-03-29 3279 if (prog->expected_attach_type != 0) { 9e4e01dfd3254c KP Singh 2020-03-29 3280 err = -EINVAL; 9e4e01dfd3254c KP Singh 2020-03-29 3281 goto out_put_prog; 9e4e01dfd3254c KP Singh 2020-03-29 3282 } 9e4e01dfd3254c KP Singh 2020-03-29 3283 break; 9e4e01dfd3254c KP Singh 2020-03-29 3284 case BPF_PROG_TYPE_LSM: 9e4e01dfd3254c KP Singh 2020-03-29 3285 if (prog->expected_attach_type != BPF_LSM_MAC) { 9e4e01dfd3254c KP Singh 2020-03-29 3286 err = -EINVAL; 9e4e01dfd3254c KP Singh 2020-03-29 3287 goto out_put_prog; 9e4e01dfd3254c KP Singh 2020-03-29 3288 } 9e4e01dfd3254c KP Singh 2020-03-29 3289 break; 9e4e01dfd3254c KP Singh 2020-03-29 3290 default: fec56f5890d93f Alexei Starovoitov 2019-11-14 3291 err = -EINVAL; fec56f5890d93f Alexei Starovoitov 2019-11-14 3292 goto out_put_prog; fec56f5890d93f Alexei Starovoitov 2019-11-14 3293 } fec56f5890d93f Alexei Starovoitov 2019-11-14 3294 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3295 if (tgt_prog_fd) { 5f80eb32851d7a Menglong Dong 2024-02-20 3296 if (!btf_id) { 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3297 err = -EINVAL; 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3298 goto out_put_prog; 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3299 } 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3300 tgt_prog = bpf_prog_get(tgt_prog_fd); 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3301 if (IS_ERR(tgt_prog)) { 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3302 tgt_prog = NULL; 5f80eb32851d7a Menglong Dong 2024-02-20 3303 /* tgt_prog_fd is the fd of the kernel module BTF */ 5f80eb32851d7a Menglong Dong 2024-02-20 3304 attach_btf = btf_get_by_fd(tgt_prog_fd); 5f80eb32851d7a Menglong Dong 2024-02-20 3305 if (IS_ERR(attach_btf)) { 5f80eb32851d7a Menglong Dong 2024-02-20 3306 attach_btf = NULL; 5f80eb32851d7a Menglong Dong 2024-02-20 
3307 err = -EINVAL; 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3308 goto out_put_prog; 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3309 } 5f80eb32851d7a Menglong Dong 2024-02-20 3310 if (!btf_is_kernel(attach_btf)) { 5f80eb32851d7a Menglong Dong 2024-02-20 3311 btf_put(attach_btf); 5f80eb32851d7a Menglong Dong 2024-02-20 3312 err = -EOPNOTSUPP; 5f80eb32851d7a Menglong Dong 2024-02-20 3313 goto out_put_prog; 5f80eb32851d7a Menglong Dong 2024-02-20 3314 } 5f80eb32851d7a Menglong Dong 2024-02-20 3315 } else if (prog->type == BPF_PROG_TYPE_TRACING && 5f80eb32851d7a Menglong Dong 2024-02-20 3316 tgt_prog->type == BPF_PROG_TYPE_TRACING) { 5f80eb32851d7a Menglong Dong 2024-02-20 3317 prog->aux->attach_tracing_prog = true; 5f80eb32851d7a Menglong Dong 2024-02-20 3318 } 5f80eb32851d7a Menglong Dong 2024-02-20 3319 key = bpf_trampoline_compute_key(tgt_prog, attach_btf, 5f80eb32851d7a Menglong Dong 2024-02-20 3320 btf_id); 5f80eb32851d7a Menglong Dong 2024-02-20 3321 } else if (btf_id) { 5f80eb32851d7a Menglong Dong 2024-02-20 3322 attach_btf = bpf_get_btf_vmlinux(); 5f80eb32851d7a Menglong Dong 2024-02-20 3323 if (IS_ERR(attach_btf)) { 5f80eb32851d7a Menglong Dong 2024-02-20 3324 attach_btf = NULL; ^^^^^^^^^^^^^^^^^^ This needs to be done after the "err = " assignment on the next line.
5f80eb32851d7a Menglong Dong 2024-02-20 @3325 err = PTR_ERR(attach_btf); ^^^^^^^^^^^^^^^^^^^^^^^^^^ Here.
5f80eb32851d7a Menglong Dong 2024-02-20 3326 goto out_unlock; 5f80eb32851d7a Menglong Dong 2024-02-20 3327 } 5f80eb32851d7a Menglong Dong 2024-02-20 3328 if (!attach_btf) { 5f80eb32851d7a Menglong Dong 2024-02-20 3329 err = -EINVAL; 5f80eb32851d7a Menglong Dong 2024-02-20 3330 goto out_unlock;
"link" is not initialized on this goto path so it leads to an uninitialized variable warning.
5f80eb32851d7a Menglong Dong 2024-02-20 3331 } 5f80eb32851d7a Menglong Dong 2024-02-20 3332 btf_get(attach_btf); 5f80eb32851d7a Menglong Dong 2024-02-20 3333 key = bpf_trampoline_compute_key(NULL, attach_btf, btf_id); 5f80eb32851d7a Menglong Dong 2024-02-20 3334 } else { 5f80eb32851d7a Menglong Dong 2024-02-20 3335 attach_btf = prog->aux->attach_btf; 5f80eb32851d7a Menglong Dong 2024-02-20 3336 /* get the reference of the btf for bpf link */ 5f80eb32851d7a Menglong Dong 2024-02-20 3337 if (attach_btf) 5f80eb32851d7a Menglong Dong 2024-02-20 3338 btf_get(attach_btf); 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3339 } 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3340 70ed506c3bbcfa Andrii Nakryiko 2020-03-02 3341 link = kzalloc(sizeof(*link), GFP_USER); 70ed506c3bbcfa Andrii Nakryiko 2020-03-02 3342 if (!link) { 70ed506c3bbcfa Andrii Nakryiko 2020-03-02 3343 err = -ENOMEM; 70ed506c3bbcfa Andrii Nakryiko 2020-03-02 3344 goto out_put_prog; 70ed506c3bbcfa Andrii Nakryiko 2020-03-02 3345 } f7e0beaf39d386 Kui-Feng Lee 2022-05-10 3346 bpf_link_init(&link->link.link, BPF_LINK_TYPE_TRACING, f2e10bff16a0fd Andrii Nakryiko 2020-04-28 3347 &bpf_tracing_link_lops, prog); f2e10bff16a0fd Andrii Nakryiko 2020-04-28 3348 link->attach_type = prog->expected_attach_type; 2fcc82411e74e5 Kui-Feng Lee 2022-05-10 3349 link->link.cookie = bpf_cookie; 70ed506c3bbcfa Andrii Nakryiko 2020-03-02 3350
[ snip ]
3aac1ead5eb6b7 Toke Høiland-Jørgensen 2020-09-29 3474 prog->aux->dst_trampoline = NULL; 5f80eb32851d7a Menglong Dong 2024-02-20 3475 prog->aux->mod = NULL; 3aac1ead5eb6b7 Toke Høiland-Jørgensen 2020-09-29 3476 mutex_unlock(&prog->aux->dst_mutex); 3aac1ead5eb6b7 Toke Høiland-Jørgensen 2020-09-29 3477 a3b80e1078943d Andrii Nakryiko 2020-04-28 3478 return bpf_link_settle(&link_primer); 3aac1ead5eb6b7 Toke Høiland-Jørgensen 2020-09-29 3479 out_unlock: 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3480 if (tr && tr != prog->aux->dst_trampoline) 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3481 bpf_trampoline_put(tr); 5f80eb32851d7a Menglong Dong 2024-02-20 3482 if (mod && mod != prog->aux->mod) 5f80eb32851d7a Menglong Dong 2024-02-20 3483 module_put(mod); 3aac1ead5eb6b7 Toke Høiland-Jørgensen 2020-09-29 3484 mutex_unlock(&prog->aux->dst_mutex); 3aac1ead5eb6b7 Toke Høiland-Jørgensen 2020-09-29 @3485 kfree(link); ^^^^
fec56f5890d93f Alexei Starovoitov 2019-11-14 3486 out_put_prog: 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3487 if (tgt_prog_fd && tgt_prog) 4a1e7c0c63e02d Toke Høiland-Jørgensen 2020-09-29 3488 bpf_prog_put(tgt_prog); 5f80eb32851d7a Menglong Dong 2024-02-20 3489 btf_put(attach_btf); fec56f5890d93f Alexei Starovoitov 2019-11-14 3490 return err; fec56f5890d93f Alexei Starovoitov 2019-11-14 3491 }
Now that we support attaching a tracing program to multiple targets, we can set the bpf cookie even if the target btf id is provided in bpf_link_create().
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com --- tools/lib/bpf/bpf.c | 17 ++++------------- 1 file changed, 4 insertions(+), 13 deletions(-)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index 97ec005c3c47..0ca7c8375b40 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -737,23 +737,15 @@ int bpf_link_create(int prog_fd, int target_fd, target_btf_id = OPTS_GET(opts, target_btf_id, 0);
/* validate we don't have unexpected combinations of non-zero fields */ - if (iter_info_len || target_btf_id) { - if (iter_info_len && target_btf_id) - return libbpf_err(-EINVAL); - if (!OPTS_ZEROED(opts, target_btf_id)) - return libbpf_err(-EINVAL); - } + if (iter_info_len && target_btf_id) + return libbpf_err(-EINVAL);
memset(&attr, 0, attr_sz); attr.link_create.prog_fd = prog_fd; attr.link_create.target_fd = target_fd; attr.link_create.attach_type = attach_type; attr.link_create.flags = OPTS_GET(opts, flags, 0); - - if (target_btf_id) { - attr.link_create.target_btf_id = target_btf_id; - goto proceed; - } + attr.link_create.target_btf_id = target_btf_id;
switch (attach_type) { case BPF_TRACE_ITER: @@ -834,11 +826,10 @@ int bpf_link_create(int prog_fd, int target_fd, return libbpf_err(-EINVAL); break; default: - if (!OPTS_ZEROED(opts, flags)) + if (!target_btf_id && !OPTS_ZEROED(opts, flags)) return libbpf_err(-EINVAL); break; } -proceed: fd = sys_bpf_fd(BPF_LINK_CREATE, &attr, attr_sz); if (fd >= 0) return fd;
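A minimal caller-side sketch of what this change permits (assumed usage):

#include <bpf/bpf.h>

/* a cookie and an explicit target btf id in the same bpf_link_create()
 * call, which libbpf rejected before this patch
 */
static int attach_with_cookie(int prog_fd, int btf_obj_fd, int btf_type_id,
			      __u64 cookie)
{
	LIBBPF_OPTS(bpf_link_create_opts, opts,
		    .target_btf_id = btf_type_id,
		    .tracing = { .cookie = cookie });

	return bpf_link_create(prog_fd, btf_obj_fd, BPF_TRACE_FENTRY, &opts);
}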
Add the new function libbpf_find_kernel_btf_id() to find the btf type id of a kernel function in kernel BTF, including vmlinux and modules.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com --- tools/lib/bpf/libbpf.c | 83 ++++++++++++++++++++++++++++++++++++++++ tools/lib/bpf/libbpf.h | 3 ++ tools/lib/bpf/libbpf.map | 1 + 3 files changed, 87 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 01f407591a92..44e34007de8c 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -9500,6 +9500,89 @@ int libbpf_find_vmlinux_btf_id(const char *name, return libbpf_err(err); }
+int libbpf_find_kernel_btf_id(const char *name, + enum bpf_attach_type attach_type, + int *btf_obj_fd, int *btf_type_id) +{ + struct btf *btf, *vmlinux_btf; + struct bpf_btf_info info; + __u32 btf_id = 0, len; + char btf_name[64]; + int err, fd; + + vmlinux_btf = btf__load_vmlinux_btf(); + err = libbpf_get_error(vmlinux_btf); + if (err) + return libbpf_err(err); + + err = find_attach_btf_id(vmlinux_btf, name, attach_type); + if (err > 0) { + *btf_type_id = err; + *btf_obj_fd = 0; + err = 0; + goto out; + } + + /* kernel too old to support module BTFs */ + if (!feat_supported(NULL, FEAT_MODULE_BTF)) { + err = -EOPNOTSUPP; + goto out; + } + + while (true) { + err = bpf_btf_get_next_id(btf_id, &btf_id); + if (err) { + err = -errno; + goto out; + } + + fd = bpf_btf_get_fd_by_id(btf_id); + if (fd < 0) { + if (errno == ENOENT) + continue; + err = -errno; + goto out; + } + + len = sizeof(info); + memset(&info, 0, sizeof(info)); + info.name = ptr_to_u64(btf_name); + info.name_len = sizeof(btf_name); + + err = bpf_btf_get_info_by_fd(fd, &info, &len); + if (err) { + err = -errno; + goto fd_out; + } + + if (!info.kernel_btf || strcmp(btf_name, "vmlinux") == 0) { + close(fd); + continue; + } + + btf = btf_get_from_fd(fd, vmlinux_btf); + err = libbpf_get_error(btf); + if (err) + goto fd_out; + + err = find_attach_btf_id(btf, name, attach_type); + if (err > 0) { + *btf_type_id = err; + *btf_obj_fd = fd; + err = 0; + break; + } + close(fd); + continue; +fd_out: + close(fd); + break; + } +out: + btf__free(vmlinux_btf); + return err; +} + static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd) { struct bpf_prog_info info; diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index 5723cbbfcc41..ca151bbec833 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -306,6 +306,9 @@ LIBBPF_API int libbpf_attach_type_by_name(const char *name, enum bpf_attach_type *attach_type); LIBBPF_API int libbpf_find_vmlinux_btf_id(const char *name, enum bpf_attach_type attach_type); +LIBBPF_API int libbpf_find_kernel_btf_id(const char *name, + enum bpf_attach_type attach_type, + int *btf_obj_fd, int *btf_type_id);
/* Accessors of bpf_program */ struct bpf_program; diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map index 86804fd90dd1..73c60f47b4bb 100644 --- a/tools/lib/bpf/libbpf.map +++ b/tools/lib/bpf/libbpf.map @@ -413,4 +413,5 @@ LIBBPF_1.4.0 { bpf_token_create; btf__new_split; btf_ext__raw_data; + libbpf_find_kernel_btf_id; } LIBBPF_1.3.0;
In this commit, we add the test cases for multiple attachment of tracing programs, including FENTRY, FEXIT and MODIFY_RETURN.
Signed-off-by: Menglong Dong dongmenglong.8@bytedance.com --- .../selftests/bpf/bpf_testmod/bpf_testmod.c | 49 ++++++ .../bpf/prog_tests/tracing_multi_attach.c | 155 ++++++++++++++++++ .../selftests/bpf/progs/tracing_multi_test.c | 72 ++++++++ 3 files changed, 276 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/tracing_multi_attach.c create mode 100644 tools/testing/selftests/bpf/progs/tracing_multi_test.c
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c index 66787e99ba1b..237eeb7daa07 100644 --- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c +++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c @@ -98,12 +98,61 @@ bpf_testmod_test_struct_arg_8(u64 a, void *b, short c, int d, void *e, return bpf_testmod_test_struct_arg_result; }
+noinline int +bpf_testmod_test_struct_arg_9(struct bpf_testmod_struct_arg_2 a, + struct bpf_testmod_struct_arg_1 b) { + bpf_testmod_test_struct_arg_result = a.a + a.b + b.a; + return bpf_testmod_test_struct_arg_result; +} + +noinline int +bpf_testmod_test_struct_arg_10(int a, struct bpf_testmod_struct_arg_2 b) { + bpf_testmod_test_struct_arg_result = a + b.a + b.b; + return bpf_testmod_test_struct_arg_result; +} + +noinline struct bpf_testmod_struct_arg_2 * +bpf_testmod_test_struct_arg_11(int a, struct bpf_testmod_struct_arg_2 b, int c) { + bpf_testmod_test_struct_arg_result = a + b.a + b.b + c; + return (void *)bpf_testmod_test_struct_arg_result; +} + +noinline int +bpf_testmod_test_struct_arg_12(int a, struct bpf_testmod_struct_arg_2 b, int *c) { + bpf_testmod_test_struct_arg_result = a + b.a + b.b + *c; + return bpf_testmod_test_struct_arg_result; +} + noinline int bpf_testmod_test_arg_ptr_to_struct(struct bpf_testmod_struct_arg_1 *a) { bpf_testmod_test_struct_arg_result = a->a; return bpf_testmod_test_struct_arg_result; }
+noinline int +bpf_testmod_test_arg_ptr_1(struct bpf_testmod_struct_arg_1 *a) { + bpf_testmod_test_struct_arg_result = a->a; + return bpf_testmod_test_struct_arg_result; +} + +noinline int +bpf_testmod_test_arg_ptr_2(struct bpf_testmod_struct_arg_2 *a) { + bpf_testmod_test_struct_arg_result = a->a + a->b; + return bpf_testmod_test_struct_arg_result; +} + +noinline int +bpf_testmod_test_arg_ptr_3(int a, struct bpf_testmod_struct_arg_2 *b) { + bpf_testmod_test_struct_arg_result = a + b->a + b->b; + return bpf_testmod_test_struct_arg_result; +} + +noinline int +bpf_testmod_test_arg_ptr_4(struct bpf_testmod_struct_arg_2 *a, int b) { + bpf_testmod_test_struct_arg_result = a->a + a->b + b; + return bpf_testmod_test_struct_arg_result; +} + __bpf_kfunc void bpf_testmod_test_mod_kfunc(int i) { diff --git a/tools/testing/selftests/bpf/prog_tests/tracing_multi_attach.c b/tools/testing/selftests/bpf/prog_tests/tracing_multi_attach.c new file mode 100644 index 000000000000..6162d41cca9e --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/tracing_multi_attach.c @@ -0,0 +1,155 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2024 Bytedance. */ + +#include <test_progs.h> +#include "tracing_multi_test.skel.h" + +struct test_item { + char *prog; + char *target; + int attach_type; + bool success; + int link_fd; +}; + +static struct test_item test_items[] = { + { + .prog = "fentry_test1", .target = "bpf_testmod_test_struct_arg_9", + .attach_type = BPF_TRACE_FENTRY, .success = true, + }, + { + .prog = "fentry_test1", .target = "bpf_testmod_test_struct_arg_1", + .attach_type = BPF_TRACE_FENTRY, .success = false, + }, + { + .prog = "fentry_test1", .target = "bpf_testmod_test_struct_arg_2", + .attach_type = BPF_TRACE_FENTRY, .success = false, + }, + { + .prog = "fentry_test1", .target = "bpf_testmod_test_arg_ptr_2", + .attach_type = BPF_TRACE_FENTRY, .success = false, + }, + { + .prog = "fentry_test2", .target = "bpf_testmod_test_struct_arg_2", + .attach_type = BPF_TRACE_FENTRY, .success = false, + }, + { + .prog = "fentry_test2", .target = "bpf_testmod_test_struct_arg_10", + .attach_type = BPF_TRACE_FENTRY, .success = true, + }, + { + .prog = "fentry_test2", .target = "bpf_testmod_test_struct_arg_9", + .attach_type = BPF_TRACE_FENTRY, .success = false, + }, + { + .prog = "fentry_test2", .target = "bpf_testmod_test_arg_ptr_3", + .attach_type = BPF_TRACE_FENTRY, .success = false, + }, + { + .prog = "fentry_test3", .target = "bpf_testmod_test_arg_ptr_3", + .attach_type = BPF_TRACE_FENTRY, .success = false, + }, + { + .prog = "fentry_test3", .target = "bpf_testmod_test_arg_ptr_4", + .attach_type = BPF_TRACE_FENTRY, .success = true, + }, + { + .prog = "fentry_test4", .target = "bpf_testmod_test_struct_arg_4", + .attach_type = BPF_TRACE_FENTRY, .success = true, + }, + { + .prog = "fentry_test4", .target = "bpf_testmod_test_struct_arg_2", + .attach_type = BPF_TRACE_FENTRY, .success = true, + }, + { + .prog = "fentry_test4", .target = "bpf_testmod_test_struct_arg_12", + .attach_type = BPF_TRACE_FENTRY, .success = false, + }, + { + .prog = "fexit_test1", .target = "bpf_testmod_test_struct_arg_2", + .attach_type = BPF_TRACE_FEXIT, .success = true, + }, + { + .prog = "fexit_test1", .target = "bpf_testmod_test_struct_arg_3", + .attach_type = BPF_TRACE_FEXIT, .success = true, + }, + { + .prog = "fexit_test1", .target = "bpf_testmod_test_struct_arg_4", + .attach_type = BPF_TRACE_FEXIT, .success = false, + }, + { + .prog = "fexit_test2", .target = "bpf_testmod_test_struct_arg_10", + .attach_type = 
+	{
+		.prog = "fexit_test2", .target = "bpf_testmod_test_struct_arg_11",
+		.attach_type = BPF_TRACE_FEXIT, .success = false,
+	},
+	{
+		.prog = "fexit_test2", .target = "bpf_testmod_test_struct_arg_12",
+		.attach_type = BPF_TRACE_FEXIT, .success = true,
+	},
+	{
+		.prog = "fmod_ret_test1", .target = "bpf_modify_return_test2",
+		.attach_type = BPF_MODIFY_RETURN, .success = true,
+	},
+};
+
+static int do_test_item(struct tracing_multi_test *skel, struct test_item *item)
+{
+	LIBBPF_OPTS(bpf_link_create_opts, link_opts);
+	struct bpf_program *prog;
+	int err, btf_fd = 0, btf_type_id;
+
+	err = libbpf_find_kernel_btf_id(item->target, item->attach_type,
+					&btf_fd, &btf_type_id);
+	if (!ASSERT_OK(err, "find_vmlinux_btf_id"))
+		return -1;
+
+	link_opts.target_btf_id = btf_type_id;
+	prog = bpf_object__find_program_by_name(skel->obj, item->prog);
+	if (!ASSERT_OK_PTR(prog, "find_program_by_name"))
+		return -1;
+
+	err = bpf_link_create(bpf_program__fd(prog), btf_fd, item->attach_type,
+			      &link_opts);
+	item->link_fd = err;
+	if (item->success) {
+		if (!ASSERT_GE(err, 0, "link_create"))
+			return -1;
+	} else {
+		if (!ASSERT_LT(err, 0, "link_create"))
+			return -1;
+	}
+
+	return 0;
+}
+
+void test_tracing_multi_attach(void)
+{
+	struct tracing_multi_test *skel;
+	int i = 0, err, fd;
+
+	skel = tracing_multi_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "tracing_multi_test__open_and_load"))
+		return;
+
+	err = tracing_multi_test__attach(skel);
+	if (!ASSERT_OK(err, "tracing_multi_test__attach"))
+		goto destroy_skel;
+
+	for (; i < ARRAY_SIZE(test_items); i++) {
+		if (do_test_item(skel, &test_items[i]))
+			break;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(test_items); i++) {
+		fd = test_items[i].link_fd;
+		if (fd >= 0)
+			close(fd);
+	}
+
+	tracing_multi_test__detach(skel);
+destroy_skel:
+	tracing_multi_test__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/tracing_multi_test.c b/tools/testing/selftests/bpf/progs/tracing_multi_test.c
new file mode 100644
index 000000000000..f1ca8b64ed16
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tracing_multi_test.c
@@ -0,0 +1,72 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 ByteDance */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct bpf_testmod_struct_arg_1 {
+	int a;
+};
+struct bpf_testmod_struct_arg_2 {
+	long a;
+	long b;
+};
+
+__u64 fentry_test1_result = 0;
+SEC("fentry/bpf_testmod_test_struct_arg_1")
+int BPF_PROG2(fentry_test1, struct bpf_testmod_struct_arg_2, a)
+{
+	fentry_test1_result = a.a + a.b;
+	return 0;
+}
+
+__u64 fentry_test2_result = 0;
+SEC("fentry/bpf_testmod_test_struct_arg_2")
+int BPF_PROG2(fentry_test2, int, a, struct bpf_testmod_struct_arg_2, b)
+{
+	fentry_test2_result = a + b.a + b.b;
+	return 0;
+}
+
+__u64 fentry_test3_result = 0;
+SEC("fentry/bpf_testmod_test_arg_ptr_2")
+int BPF_PROG(fentry_test3, struct bpf_testmod_struct_arg_2 *a)
+{
+	fentry_test3_result = a->a + a->b;
+	return 0;
+}
+
+__u64 fentry_test4_result = 0;
+SEC("fentry/bpf_testmod_test_struct_arg_1")
+int BPF_PROG2(fentry_test4, struct bpf_testmod_struct_arg_2, a, int, b,
+	      int, c)
+{
+	fentry_test4_result = c;
+	return 0;
+}
+
+__u64 fexit_test1_result = 0;
+SEC("fexit/bpf_testmod_test_struct_arg_1")
+int BPF_PROG2(fexit_test1, struct bpf_testmod_struct_arg_2, a, int, b,
+	      int, c, int, retval)
+{
+	fexit_test1_result = retval;
+	return 0;
+}
+
+__u64 fexit_test2_result = 0;
+SEC("fexit/bpf_testmod_test_struct_arg_2") +int BPF_PROG2(fexit_test2, int, a, struct bpf_testmod_struct_arg_2, b, + int, c, int, retval) +{ + fexit_test2_result = a + b.a + b.b + retval; + return 0; +} + +SEC("fmod_ret/bpf_modify_return_test") +int BPF_PROG(fmod_ret_test1, int a, int *b) +{ + return 0; +}
On Mon, Feb 19, 2024 at 7:51 PM Menglong Dong dongmenglong.8@bytedance.com wrote:
Should this be combined with multi link? (As was recently done for kprobe_multi and uprobe_multi.) Loading a fentry prog once and attaching it through many bpf_links to multiple places is a nice addition, but we should probably add a multi link right away too.
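For context, kprobe_multi already exposes this model in libbpf: one loaded program attached to many symbols through a single link. A minimal sketch of that existing API is below; the symbol names are arbitrary placeholders and are not part of this series.

/* Sketch of the existing kprobe.multi attach model in libbpf; the symbols
 * listed here are placeholders chosen only for illustration.
 */
#include <bpf/libbpf.h>

static struct bpf_link *attach_many(struct bpf_program *prog)
{
	const char *syms[] = { "tcp_v4_connect", "tcp_v6_connect" };
	LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
		.syms = syms,
		.cnt = 2,
	);

	/* One bpf_link covers every symbol in the list. */
	return bpf_program__attach_kprobe_multi_opts(prog, NULL, &opts);
}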
Hello,
On Wed, Feb 21, 2024 at 9:24 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
I was planning to implement the multi link for tracing in a follow-up series after this one. I can do it together with this series if you prefer.
Thanks!
Menglong Dong
On Wed, Feb 21, 2024 at 10:35 AM 梦龙董 dongmenglong.8@bytedance.com wrote:
Should I introduce the multi link for tracing first, and then this series? (Furthermore, this series might then not be necessary.)
Thanks!
Menglong Dong
On Tue, Feb 20, 2024 at 6:45 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
What do you mean "not necessary"? Don't you still want to check that the bpf prog accesses only N args and that BTF for these args matches across all attach points?
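The shape of that check can be sketched in a few lines. This is a simplified illustration only, not the in-kernel code: the real version walks BTF and compares types structurally, whereas the sketch below stands in type-id equality for that comparison. The idea is to record a bitmask of the argument indexes the program reads, and accept a new attach target only if those arguments exist and match.

/* Simplified illustration of the "compare only the accessed args" idea.
 * args_a/args_b stand in for the BTF type ids of two function prototypes;
 * 'accessed' is a bitmask of argument indexes the tracing prog reads.
 */
#include <stdbool.h>
#include <stdint.h>

static bool proto_part_match(const uint32_t *args_a, int nr_a,
			     const uint32_t *args_b, int nr_b,
			     uint64_t accessed)
{
	for (int i = 0; i < 64; i++) {
		if (!(accessed & (1ULL << i)))
			continue;	/* prog never touches this arg */
		if (i >= nr_a || i >= nr_b)
			return false;	/* accessed arg missing on one side */
		if (args_a[i] != args_b[i])
			return false;	/* accessed arg types differ */
	}
	return true;
}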
On Wed, Feb 21, 2024 at 11:02 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
No, I mean whether we should keep the

"Loading fentry prog once and attaching it through many bpf_links to multiple places"

approach, or only keep the multi link.
Both methods need to check the accessed args of the target.
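From user space, the first method looks roughly like the sketch below: the program is loaded once and then attached to each kernel function through its own bpf_link, using the libbpf_find_kernel_btf_id() helper this series proposes. The target names are arbitrary examples and error handling is trimmed.

/* Sketch of "one loaded fentry prog, one bpf_link per target", built on the
 * libbpf_find_kernel_btf_id() helper proposed in this series.
 */
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static int attach_per_target(struct bpf_program *prog)
{
	const char *targets[] = { "tcp_v4_connect", "tcp_v6_connect" };
	int prog_fd = bpf_program__fd(prog);

	for (int i = 0; i < 2; i++) {
		LIBBPF_OPTS(bpf_link_create_opts, opts);
		int btf_fd = 0, btf_id, err, link_fd;

		err = libbpf_find_kernel_btf_id(targets[i], BPF_TRACE_FENTRY,
						&btf_fd, &btf_id);
		if (err)
			return err;

		opts.target_btf_id = btf_id;
		link_fd = bpf_link_create(prog_fd, btf_fd, BPF_TRACE_FENTRY, &opts);
		if (link_fd < 0)
			return link_fd;
		/* a real caller would keep link_fd and close it when done */
	}
	return 0;
}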
On Tue, Feb 20, 2024 at 7:06 PM 梦龙董 dongmenglong.8@bytedance.com wrote:
I suspect supporting multi link only is better, since the amount of kernel code to maintain will be less.
On Wed, Feb 21, 2024 at 11:18 AM Alexei Starovoitov alexei.starovoitov@gmail.com wrote:
Okay!