The perf subsystem today unifies various tracing and monitoring features, from both software and hardware. One benefit of the perf subsystem is automatically inheriting events to child tasks, which enables process-wide event monitoring with low overhead. By default perf events are non-intrusive, not affecting the behaviour of the tasks being monitored.
For certain use-cases, however, it makes sense to leverage the generality of the perf events subsystem and optionally allow the tasks being monitored to receive signals on events they are interested in. This patch series adds the option to synchronously signal user space on events.
To better support process-wide synchronous self-monitoring, without events propagating to children that do not share the current process's shared environment, two pre-requisite patches are added to optionally restrict inheritance to CLONE_THREAD, and remove events on exec (without affecting the parent).
Examples of how to use these features can be found in the two kselftests at the end of the series. The kselftests verify and stress test the basic functionality.
The discussion at [1] led to the changes proposed in this series. The approach taken in patch "Add support for SIGTRAP on perf events" to use 'event_limit' to trigger the signal was kindly suggested by Peter Zijlstra in [2].
[1] https://lore.kernel.org/lkml/CACT4Y+YPrXGw+AtESxAgPyZ84TYkNZdP0xpocX2jwVAbZD...
[2] https://lore.kernel.org/lkml/YBv3rAT566k+6zjg@hirez.programming.kicks-ass.ne...
Motivation and example uses:
1. Our immediate motivation is low-overhead sampling-based race detection for user space [3]. By using perf_event_open() at process initialization, we can create hardware breakpoint/watchpoint events that are propagated automatically to all threads in a process. As far as we are aware, today no existing kernel facility (such as ptrace) allows us to set up process-wide watchpoints with minimal overheads (that are comparable to mprotect() of whole pages).
[3] https://llvm.org/devmtg/2020-09/slides/Morehouse-GWP-Tsan.pdf
2. Other low-overhead error detectors that rely on detecting accesses to certain memory locations or code, process-wide and also only in a specific set of subtasks or threads.
Other use-case ideas we found interesting; these only illustrate the range of potential and further motivate the utility (we're sure there are more):

3. Code hot patching without full stop-the-world. Specifically, set a code breakpoint on entry to the patched routine, then send signals to threads and check that they are not in the routine, but without stopping them further. If any thread enters the routine, it receives SIGTRAP and pauses.
4. Safepoints without mprotect(). Some Java implementations use "load from a known memory location" as a safepoint. When threads need to be stopped, the page containing the location is mprotect()ed and threads get a signal. This could be replaced with a watchpoint, which does not require a whole page nor DTLB shootdowns.
5. Threads receiving signals on performance events to throttle/unthrottle themselves.
6. Tracking data flow globally.
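Returning to use-case 1, a minimal sketch of such a setup (assumes headers carrying the new attribute bits from this series; watch_address() is a hypothetical helper, and the configuration mirrors the sigtrap_threads kselftest at the end of the series):

#include <linux/hw_breakpoint.h>
#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open a process-wide watchpoint on 'addr' at process init; returns the event fd. */
static int watch_address(volatile void *addr)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_BREAKPOINT;
	attr.size = sizeof(attr);
	attr.sample_period = 1;
	attr.bp_addr = (unsigned long)addr;
	attr.bp_type = HW_BREAKPOINT_RW;
	attr.bp_len = HW_BREAKPOINT_LEN_1;
	attr.inherit = 1;		/* propagate to children ...                */
	attr.inherit_thread = 1;	/* ... but only those cloned with CLONE_THREAD */
	attr.remove_on_exec = 1;	/* required by sigtrap                      */
	attr.sigtrap = 1;		/* deliver a synchronous SIGTRAP on event   */

	/* pid == 0, cpu == -1: monitor this task on all CPUs. */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, PERF_FLAG_FD_CLOEXEC);
}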
---
v2:
* Patch "Support only inheriting events if cloned with CLONE_THREAD" added to series.
* Patch "Add support for event removal on exec" added to series.
* Patch "Add kselftest for process-wide sigtrap handling" added to series.
* Patch "Add kselftest for remove_on_exec" added to series.
* Implicitly restrict inheriting events if sigtrap, but the child was cloned with CLONE_CLEAR_SIGHAND, because it is not generally safe if the child cleared all signal handlers to continue sending SIGTRAP.
* Various minor fixes (see details in patches).
v1: https://lkml.kernel.org/r/20210223143426.2412737-1-elver@google.com
Marco Elver (8):
  perf/core: Apply PERF_EVENT_IOC_MODIFY_ATTRIBUTES to children
  perf/core: Support only inheriting events if cloned with CLONE_THREAD
  perf/core: Add support for event removal on exec
  signal: Introduce TRAP_PERF si_code and si_perf to siginfo
  perf/core: Add support for SIGTRAP on perf events
  perf/core: Add breakpoint information to siginfo on SIGTRAP
  selftests/perf: Add kselftest for process-wide sigtrap handling
  selftests/perf: Add kselftest for remove_on_exec
 arch/m68k/kernel/signal.c                     |   3 +
 arch/x86/kernel/signal_compat.c               |   5 +-
 fs/signalfd.c                                 |   4 +
 include/linux/compat.h                        |   2 +
 include/linux/perf_event.h                    |   5 +-
 include/linux/signal.h                        |   1 +
 include/uapi/asm-generic/siginfo.h            |   6 +-
 include/uapi/linux/perf_event.h               |   5 +-
 include/uapi/linux/signalfd.h                 |   4 +-
 kernel/events/core.c                          | 130 ++++++++-
 kernel/fork.c                                 |   2 +-
 kernel/signal.c                               |  11 +
 .../testing/selftests/perf_events/.gitignore  |   3 +
 tools/testing/selftests/perf_events/Makefile  |   6 +
 tools/testing/selftests/perf_events/config    |   1 +
 .../selftests/perf_events/remove_on_exec.c    | 256 ++++++++++++++++++
 tools/testing/selftests/perf_events/settings  |   1 +
 .../selftests/perf_events/sigtrap_threads.c   | 202 ++++++++++++++
 18 files changed, 632 insertions(+), 15 deletions(-)
 create mode 100644 tools/testing/selftests/perf_events/.gitignore
 create mode 100644 tools/testing/selftests/perf_events/Makefile
 create mode 100644 tools/testing/selftests/perf_events/config
 create mode 100644 tools/testing/selftests/perf_events/remove_on_exec.c
 create mode 100644 tools/testing/selftests/perf_events/settings
 create mode 100644 tools/testing/selftests/perf_events/sigtrap_threads.c
As with other ioctls (such as PERF_EVENT_IOC_{ENABLE,DISABLE}), fix up handling of PERF_EVENT_IOC_MODIFY_ATTRIBUTES to also apply to children.
Link: https://lkml.kernel.org/r/YBqVaY8aTMYtoUnX@hirez.programming.kicks-ass.net
Suggested-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Marco Elver <elver@google.com>
---
 kernel/events/core.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 0aeca5f3c0ac..bff498766065 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3179,16 +3179,36 @@ static int perf_event_modify_breakpoint(struct perf_event *bp,
 static int perf_event_modify_attr(struct perf_event *event,
 				  struct perf_event_attr *attr)
 {
+	int (*func)(struct perf_event *, struct perf_event_attr *);
+	struct perf_event *child;
+	int err;
+
 	if (event->attr.type != attr->type)
 		return -EINVAL;
 
 	switch (event->attr.type) {
 	case PERF_TYPE_BREAKPOINT:
-		return perf_event_modify_breakpoint(event, attr);
+		func = perf_event_modify_breakpoint;
+		break;
 	default:
 		/* Place holder for future additions. */
 		return -EOPNOTSUPP;
 	}
+
+	WARN_ON_ONCE(event->ctx->parent_ctx);
+
+	mutex_lock(&event->child_mutex);
+	err = func(event, attr);
+	if (err)
+		goto out;
+	list_for_each_entry(child, &event->child_list, child_list) {
+		err = func(child, attr);
+		if (err)
+			goto out;
+	}
+out:
+	mutex_unlock(&event->child_mutex);
+	return err;
 }
 
 static void ctx_sched_out(struct perf_event_context *ctx,
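For example, with this change a single ioctl on the original event fd also updates all inherited child events (sketch; error handling elided, 'fd' and 'new_attr' as in the modify_and_enable_event test of the sigtrap_threads kselftest later in the series):

	/* Retargets the breakpoint for the parent and all inherited child events. */
	ioctl(fd, PERF_EVENT_IOC_MODIFY_ATTRIBUTES, &new_attr);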
Adds bit perf_event_attr::inherit_thread, to restrict inheriting events to only those children cloned with CLONE_THREAD.
This option supports the case where an event is supposed to be process-wide only (including subthreads), but should not propagate beyond the current process's shared environment.
Link: https://lore.kernel.org/lkml/YBvj6eJR%2FDY2TsEB@hirez.programming.kicks-ass....
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Marco Elver <elver@google.com>
---
v2:
* Add patch to series.
---
 include/linux/perf_event.h      |  5 +++--
 include/uapi/linux/perf_event.h |  3 ++-
 kernel/events/core.c            | 21 ++++++++++++++-------
 kernel/fork.c                   |  2 +-
 4 files changed, 20 insertions(+), 11 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index fab42cfbd350..982ad61c653a 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -955,7 +955,7 @@ extern void __perf_event_task_sched_in(struct task_struct *prev,
 				       struct task_struct *task);
 extern void __perf_event_task_sched_out(struct task_struct *prev,
 					struct task_struct *next);
-extern int perf_event_init_task(struct task_struct *child);
+extern int perf_event_init_task(struct task_struct *child, u64 clone_flags);
 extern void perf_event_exit_task(struct task_struct *child);
 extern void perf_event_free_task(struct task_struct *task);
 extern void perf_event_delayed_put(struct task_struct *task);
@@ -1446,7 +1446,8 @@ perf_event_task_sched_in(struct task_struct *prev,
 static inline void
 perf_event_task_sched_out(struct task_struct *prev,
 			  struct task_struct *next) { }
-static inline int perf_event_init_task(struct task_struct *child) { return 0; }
+static inline int perf_event_init_task(struct task_struct *child,
+				       u64 clone_flags) { return 0; }
 static inline void perf_event_exit_task(struct task_struct *child) { }
 static inline void perf_event_free_task(struct task_struct *task) { }
 static inline void perf_event_delayed_put(struct task_struct *task) { }
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index ad15e40d7f5d..813efb65fea8 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -389,7 +389,8 @@ struct perf_event_attr {
 				cgroup         :  1, /* include cgroup events */
 				text_poke      :  1, /* include text poke events */
 				build_id       :  1, /* use build id in mmap2 events */
-				__reserved_1   : 29;
+				inherit_thread :  1, /* children only inherit if cloned with CLONE_THREAD */
+				__reserved_1   : 28;
 
 	union {
 		__u32		wakeup_events;	  /* wakeup every n events */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index bff498766065..a8382e6c907c 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -11597,6 +11597,9 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
 	    (attr->sample_type & PERF_SAMPLE_WEIGHT_STRUCT))
 		return -EINVAL;
 
+	if (!attr->inherit && attr->inherit_thread)
+		return -EINVAL;
+
 out:
 	return ret;
 
@@ -12820,12 +12823,13 @@ static int
 inherit_task_group(struct perf_event *event, struct task_struct *parent,
 		   struct perf_event_context *parent_ctx,
 		   struct task_struct *child, int ctxn,
-		   int *inherited_all)
+		   u64 clone_flags, int *inherited_all)
 {
 	int ret;
 	struct perf_event_context *child_ctx;
 
-	if (!event->attr.inherit) {
+	if (!event->attr.inherit ||
+	    (event->attr.inherit_thread && !(clone_flags & CLONE_THREAD))) {
 		*inherited_all = 0;
 		return 0;
 	}
@@ -12857,7 +12861,8 @@ inherit_task_group(struct perf_event *event, struct task_struct *parent,
 /*
  * Initialize the perf_event context in task_struct
  */
-static int perf_event_init_context(struct task_struct *child, int ctxn)
+static int perf_event_init_context(struct task_struct *child, int ctxn,
+				   u64 clone_flags)
 {
 	struct perf_event_context *child_ctx, *parent_ctx;
 	struct perf_event_context *cloned_ctx;
@@ -12897,7 +12902,8 @@ static int perf_event_init_context(struct task_struct *child, int ctxn)
 	 */
 	perf_event_groups_for_each(event, &parent_ctx->pinned_groups) {
 		ret = inherit_task_group(event, parent, parent_ctx,
-					 child, ctxn, &inherited_all);
+					 child, ctxn, clone_flags,
+					 &inherited_all);
 		if (ret)
 			goto out_unlock;
 	}
@@ -12913,7 +12919,8 @@ static int perf_event_init_context(struct task_struct *child, int ctxn)
 
 	perf_event_groups_for_each(event, &parent_ctx->flexible_groups) {
 		ret = inherit_task_group(event, parent, parent_ctx,
-					 child, ctxn, &inherited_all);
+					 child, ctxn, clone_flags,
+					 &inherited_all);
 		if (ret)
 			goto out_unlock;
 	}
@@ -12955,7 +12962,7 @@ static int perf_event_init_context(struct task_struct *child, int ctxn)
 /*
  * Initialize the perf_event context in task_struct
  */
-int perf_event_init_task(struct task_struct *child)
+int perf_event_init_task(struct task_struct *child, u64 clone_flags)
 {
 	int ctxn, ret;
 
@@ -12964,7 +12971,7 @@ int perf_event_init_task(struct task_struct *child)
 	INIT_LIST_HEAD(&child->perf_event_list);
 
 	for_each_task_context_nr(ctxn) {
-		ret = perf_event_init_context(child, ctxn);
+		ret = perf_event_init_context(child, ctxn, clone_flags);
 		if (ret) {
 			perf_event_free_task(child);
 			return ret;
diff --git a/kernel/fork.c b/kernel/fork.c
index d3171e8e88e5..d090366d1206 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2070,7 +2070,7 @@ static __latent_entropy struct task_struct *copy_process(
 	if (retval)
 		goto bad_fork_cleanup_policy;
 
-	retval = perf_event_init_task(p);
+	retval = perf_event_init_task(p, clone_flags);
 	if (retval)
 		goto bad_fork_cleanup_policy;
 	retval = audit_alloc(p);
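To make the accepted combinations concrete, a sketch of the rule enforced by the perf_copy_attr() hunk above (attribute names as in the patch):

	attr.inherit        = 1;
	attr.inherit_thread = 1;	/* OK: inherit, but only across CLONE_THREAD */

	attr.inherit        = 0;
	attr.inherit_thread = 1;	/* rejected: perf_event_open() fails with EINVAL */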
Adds bit perf_event_attr::remove_on_exec, to support removing an event from a task on exec.
This option supports the case where an event is supposed to be process-wide only, and should not propagate beyond exec, to limit monitoring to the original process image only.
Signed-off-by: Marco Elver <elver@google.com>
---
v2:
* Add patch to series.
---
 include/uapi/linux/perf_event.h |  3 ++-
 kernel/events/core.c            | 45 +++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+), 1 deletion(-)
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 813efb65fea8..8c5b9f5ad63f 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -390,7 +390,8 @@ struct perf_event_attr {
 				text_poke      :  1, /* include text poke events */
 				build_id       :  1, /* use build id in mmap2 events */
 				inherit_thread :  1, /* children only inherit if cloned with CLONE_THREAD */
-				__reserved_1   : 28;
+				remove_on_exec :  1, /* event is removed from task on exec */
+				__reserved_1   : 27;
 
 	union {
 		__u32		wakeup_events;	  /* wakeup every n events */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index a8382e6c907c..bc9e6e35e414 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4195,6 +4195,46 @@ static void perf_event_enable_on_exec(int ctxn)
 	put_ctx(clone_ctx);
 }
 
+static void perf_remove_from_owner(struct perf_event *event);
+static void perf_event_exit_event(struct perf_event *child_event,
+				  struct perf_event_context *child_ctx,
+				  struct task_struct *child);
+
+/*
+ * Removes all events from the current task that have been marked
+ * remove-on-exec, and feeds their values back to parent events.
+ */
+static void perf_event_remove_on_exec(void)
+{
+	int ctxn;
+
+	for_each_task_context_nr(ctxn) {
+		struct perf_event_context *ctx;
+		struct perf_event *event, *next;
+
+		ctx = perf_pin_task_context(current, ctxn);
+		if (!ctx)
+			continue;
+		mutex_lock(&ctx->mutex);
+
+		list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) {
+			if (!event->attr.remove_on_exec)
+				continue;
+
+			if (!is_kernel_event(event))
+				perf_remove_from_owner(event);
+			perf_remove_from_context(event, DETACH_GROUP);
+			/*
+			 * Remove the event and feed back its values to the
+			 * parent event.
+			 */
+			perf_event_exit_event(event, ctx, current);
+		}
+		mutex_unlock(&ctx->mutex);
+		perf_unpin_context(ctx);
+		put_ctx(ctx);
+	}
+}
+
 struct perf_read_data {
 	struct perf_event *event;
 	bool group;
@@ -7519,6 +7559,8 @@ void perf_event_exec(void)
 					 true);
 	}
 	rcu_read_unlock();
+
+	perf_event_remove_on_exec();
 }
 
 struct remote_output {
@@ -11600,6 +11642,9 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
 	if (!attr->inherit && attr->inherit_thread)
 		return -EINVAL;
 
+	if (attr->remove_on_exec && attr->enable_on_exec)
+		return -EINVAL;
+
 out:
 	return ret;
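Correspondingly, a sketch of the constraint added to perf_copy_attr() above:

	attr.remove_on_exec = 1;	/* event is torn down on exec */
	attr.enable_on_exec = 1;	/* combined with the above: rejected with EINVAL */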
On Wed, Mar 10, 2021 at 11:41AM +0100, Marco Elver wrote:
> Adds bit perf_event_attr::remove_on_exec, to support removing an event
> from a task on exec.
>
> This option supports the case where an event is supposed to be
> process-wide only, and should not propagate beyond exec, to limit
> monitoring to the original process image only.
>
[...]
> +static void perf_remove_from_owner(struct perf_event *event);
> +static void perf_event_exit_event(struct perf_event *child_event,
> +				  struct perf_event_context *child_ctx,
> +				  struct task_struct *child);
> +
> +/*
> + * Removes all events from the current task that have been marked
> + * remove-on-exec, and feeds their values back to parent events.
> + */
> +static void perf_event_remove_on_exec(void)
> +{
> +	int ctxn;
> +
> +	for_each_task_context_nr(ctxn) {
> +		struct perf_event_context *ctx;
> +		struct perf_event *event, *next;
> +
> +		ctx = perf_pin_task_context(current, ctxn);
> +		if (!ctx)
> +			continue;
> +		mutex_lock(&ctx->mutex);
> +
> +		list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) {
> +			if (!event->attr.remove_on_exec)
> +				continue;
> +
> +			if (!is_kernel_event(event))
> +				perf_remove_from_owner(event);
> +			perf_remove_from_context(event, DETACH_GROUP);
> +			/*
> +			 * Remove the event and feed back its values to the
> +			 * parent event.
> +			 */
> +			perf_event_exit_event(event, ctx, current);
> +		}
> +		mutex_unlock(&ctx->mutex);
> +		perf_unpin_context(ctx);
> +		put_ctx(ctx);
> +	}
> +}
Yikes; it seems this is somehow broken. I just decided to run the remove_on_exec kselftest in a loop like so:
for x in {1..10}; do ( tools/testing/selftests/perf_events/remove_on_exec & ) ; done
While the kselftest runs pass, I see a number of kernel warnings (below).
Any suggestions?
I'll go and try to debug this...
Thanks,
-- Marco
------ >8 ------
hardirqs last disabled at (4150): [<ffffffffa633219b>] sysvec_call_function_single+0xb/0xc0 arch/x86/kernel/smp.c:243
softirqs last enabled at (3846): [<ffffffffa566f621>] invoke_softirq kernel/softirq.c:221 [inline]
softirqs last enabled at (3846): [<ffffffffa566f621>] __irq_exit_rcu kernel/softirq.c:422 [inline]
softirqs last enabled at (3846): [<ffffffffa566f621>] irq_exit_rcu+0xe1/0x120 kernel/softirq.c:434
softirqs last disabled at (3839): [<ffffffffa566f621>] invoke_softirq kernel/softirq.c:221 [inline]
softirqs last disabled at (3839): [<ffffffffa566f621>] __irq_exit_rcu kernel/softirq.c:422 [inline]
softirqs last disabled at (3839): [<ffffffffa566f621>] irq_exit_rcu+0xe1/0x120 kernel/softirq.c:434
---[ end trace 74c79be9940ec2d1 ]---
------------[ cut here ]------------
WARNING: CPU: 3 PID: 1369 at kernel/events/core.c:247 event_function+0xef/0x100 kernel/events/core.c:249
Modules linked in:
CPU: 3 PID: 1369 Comm: exe Tainted: G W 5.12.0-rc2+ #19
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:event_function+0xef/0x100 kernel/events/core.c:247
Code: 5b 5d 41 5c 41 5d 41 5e 41 5f c3 65 8b 05 a5 79 88 5a 85 c0 0f 84 6e ff ff ff 0f 0b e9 67 ff ff ff 4c 39 f5 74 a7 0f 0b eb a3 <0f> 0b eb 9f 0f 0b eb 96 41 bd fd ff ff ff eb ac 90 48 8b 47 10 48
RSP: 0000:ffff980880158f70 EFLAGS: 00010086
RAX: 0000000000000000 RBX: ffff98088111fde0 RCX: 944f9e9405e234a1
RDX: ffff8a5d4d2ac340 RSI: ffffffffa6b4ccef RDI: ffff8a606fcf0c08
RBP: ffff8a606fcf0c00 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: ffff8a5d4d2accb8 R12: 0000000000000000
R13: ffff8a5d4e6db800 R14: ffff8a5d46534a00 R15: ffff8a606fcf0c08
FS:  0000000000000000(0000) GS:ffff8a606fcc0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fd2b331e225 CR3: 00000001e0e22006 CR4: 0000000000770ee0
DR0: 0000564596006388 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 55555554
Call Trace:
 <IRQ>
 remote_function kernel/events/core.c:91 [inline]
 remote_function+0x44/0x50 kernel/events/core.c:71
 flush_smp_call_function_queue+0x13a/0x1d0 kernel/smp.c:395
 __sysvec_call_function_single+0x3e/0x1c0 arch/x86/kernel/smp.c:248
 sysvec_call_function_single+0x89/0xc0 arch/x86/kernel/smp.c:243
 </IRQ>
 asm_sysvec_call_function_single+0x12/0x20 arch/x86/include/asm/idtentry.h:640
RIP: 0010:lock_page_memcg+0xc7/0x170 mm/memcontrol.c:2157
Code: 00 00 e8 6c ae e9 ff 48 c7 c6 d3 07 83 a5 58 4c 89 f7 e8 6c ab e9 ff 48 85 db 74 06 e8 22 e1 f3 ff fb 41 8b 84 24 00 0b 00 00 <85> c0 7e a7 4d 8d b4 24 70 06 00 00 4c 89 f7 e8 85 b2 b0 00 48 89
RSP: 0000:ffff980881bc7b38 EFLAGS: 00000206
RAX: 0000000000000000 RBX: 0000000000000200 RCX: 0000000000000006
RDX: 0000000000000000 RSI: ffffffffa6c1a6ed RDI: ffffffffa6b9ab37
RBP: ffffccff47891b80 R08: 0000000000000001 R09: 0000000000000001
R10: 0000000000000000 R11: ffff8a5d4d2accb8 R12: ffff8a5d403e9000
R13: ffffffffa58307d3 R14: ffff8a5d403e9688 R15: ffff8a5d47067128
 page_remove_rmap+0xc/0xb0 mm/rmap.c:1348
 zap_pte_range mm/memory.c:1276 [inline]
 zap_pmd_range mm/memory.c:1380 [inline]
 zap_pud_range mm/memory.c:1409 [inline]
 zap_p4d_range mm/memory.c:1430 [inline]
 unmap_page_range+0x612/0xb00 mm/memory.c:1451
 unmap_vmas+0xbe/0x150 mm/memory.c:1528
 exit_mmap+0x8f/0x1d0 mm/mmap.c:3218
 __mmput kernel/fork.c:1082 [inline]
 mmput+0x3c/0xe0 kernel/fork.c:1103
 exit_mm kernel/exit.c:501 [inline]
 do_exit+0x369/0xb60 kernel/exit.c:812
 do_group_exit+0x34/0xb0 kernel/exit.c:922
 get_signal+0x170/0xc80 kernel/signal.c:2775
 arch_do_signal_or_restart+0xea/0x740 arch/x86/kernel/signal.c:811
 handle_signal_work kernel/entry/common.c:147 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
 exit_to_user_mode_prepare+0x10f/0x190 kernel/entry/common.c:208
 irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:314
 asm_sysvec_reschedule_ipi+0x12/0x20 arch/x86/include/asm/idtentry.h:637
RIP: 0033:0x5598fc00409b
Code: Unable to access opcode bytes at RIP 0x5598fc004071.
RSP: 002b:00007ffe94151cf0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f6db39331b0
RDX: 0000000000000004 RSI: 00007ffe94151cfc RDI: 0000000000000001
RBP: 00007ffe94151da0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000059 R11: 0000000000000246 R12: 00005598fc0010d0
R13: 00007ffe94151ea0 R14: 0000000000000000 R15: 0000000000000000
irq event stamp: 4150
hardirqs last enabled at (4149): [<ffffffffa583080e>] lock_page_memcg+0xbe/0x170 mm/memcontrol.c:2154
hardirqs last disabled at (4150): [<ffffffffa633219b>] sysvec_call_function_single+0xb/0xc0 arch/x86/kernel/smp.c:243
softirqs last enabled at (3846): [<ffffffffa566f621>] invoke_softirq kernel/softirq.c:221 [inline]
softirqs last enabled at (3846): [<ffffffffa566f621>] __irq_exit_rcu kernel/softirq.c:422 [inline]
softirqs last enabled at (3846): [<ffffffffa566f621>] irq_exit_rcu+0xe1/0x120 kernel/softirq.c:434
softirqs last disabled at (3839): [<ffffffffa566f621>] invoke_softirq kernel/softirq.c:221 [inline]
softirqs last disabled at (3839): [<ffffffffa566f621>] __irq_exit_rcu kernel/softirq.c:422 [inline]
softirqs last disabled at (3839): [<ffffffffa566f621>] irq_exit_rcu+0xe1/0x120 kernel/softirq.c:434
---[ end trace 74c79be9940ec2d2 ]---
------------[ cut here ]------------
WARNING: CPU: 3 PID: 1369 at kernel/events/core.c:2253 event_sched_out+0x4c/0x200 kernel/events/core.c:2253
Modules linked in:
CPU: 3 PID: 1369 Comm: exe Tainted: G W 5.12.0-rc2+ #19
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:event_sched_out+0x4c/0x200 kernel/events/core.c:2253
Code: 92 01 85 c9 75 12 83 bb a8 00 00 00 01 74 26 5b 5d 41 5c 41 5d 41 5e c3 48 8d 7d 20 be ff ff ff ff e8 18 cd b9 00 85 c0 75 dc <0f> 0b 83 bb a8 00 00 00 01 75 da 48 8b 53 28 48 8b 4b 20 48 8d 43
RSP: 0000:ffff980880158f18 EFLAGS: 00010046
RAX: 0000000000000000 RBX: ffff8a5d4e6db800 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffffa6b4ccef RDI: ffffffffa6b9ab37
RBP: ffff8a5d46534a00 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: ffff8a5d4d2accb8 R12: ffff8a606fcf0c00
R13: ffff8a606fcf0c00 R14: ffff8a5d46534a00 R15: ffff8a606fcf0c08
FS:  0000000000000000(0000) GS:ffff8a606fcc0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fd2b331e225 CR3: 00000001e0e22006 CR4: 0000000000770ee0
DR0: 0000564596006388 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 55555554
Call Trace:
 <IRQ>
 __perf_remove_from_context+0x29/0xd0 kernel/events/core.c:2333
 event_function+0xab/0x100 kernel/events/core.c:252
 remote_function kernel/events/core.c:91 [inline]
 remote_function+0x44/0x50 kernel/events/core.c:71
 flush_smp_call_function_queue+0x13a/0x1d0 kernel/smp.c:395
 __sysvec_call_function_single+0x3e/0x1c0 arch/x86/kernel/smp.c:248
 sysvec_call_function_single+0x89/0xc0 arch/x86/kernel/smp.c:243
 </IRQ>
 asm_sysvec_call_function_single+0x12/0x20 arch/x86/include/asm/idtentry.h:640
RIP: 0010:lock_page_memcg+0xc7/0x170 mm/memcontrol.c:2157
Code: 00 00 e8 6c ae e9 ff 48 c7 c6 d3 07 83 a5 58 4c 89 f7 e8 6c ab e9 ff 48 85 db 74 06 e8 22 e1 f3 ff fb 41 8b 84 24 00 0b 00 00 <85> c0 7e a7 4d 8d b4 24 70 06 00 00 4c 89 f7 e8 85 b2 b0 00 48 89
RSP: 0000:ffff980881bc7b38 EFLAGS: 00000206
RAX: 0000000000000000 RBX: 0000000000000200 RCX: 0000000000000006
RDX: 0000000000000000 RSI: ffffffffa6c1a6ed RDI: ffffffffa6b9ab37
RBP: ffffccff47891b80 R08: 0000000000000001 R09: 0000000000000001
R10: 0000000000000000 R11: ffff8a5d4d2accb8 R12: ffff8a5d403e9000
R13: ffffffffa58307d3 R14: ffff8a5d403e9688 R15: ffff8a5d47067128
 page_remove_rmap+0xc/0xb0 mm/rmap.c:1348
 zap_pte_range mm/memory.c:1276 [inline]
 zap_pmd_range mm/memory.c:1380 [inline]
 zap_pud_range mm/memory.c:1409 [inline]
 zap_p4d_range mm/memory.c:1430 [inline]
 unmap_page_range+0x612/0xb00 mm/memory.c:1451
 unmap_vmas+0xbe/0x150 mm/memory.c:1528
 exit_mmap+0x8f/0x1d0 mm/mmap.c:3218
 __mmput kernel/fork.c:1082 [inline]
 mmput+0x3c/0xe0 kernel/fork.c:1103
 exit_mm kernel/exit.c:501 [inline]
 do_exit+0x369/0xb60 kernel/exit.c:812
 do_group_exit+0x34/0xb0 kernel/exit.c:922
 get_signal+0x170/0xc80 kernel/signal.c:2775
 arch_do_signal_or_restart+0xea/0x740 arch/x86/kernel/signal.c:811
 handle_signal_work kernel/entry/common.c:147 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
 exit_to_user_mode_prepare+0x10f/0x190 kernel/entry/common.c:208
 irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:314
 asm_sysvec_reschedule_ipi+0x12/0x20 arch/x86/include/asm/idtentry.h:637
RIP: 0033:0x5598fc00409b
Code: Unable to access opcode bytes at RIP 0x5598fc004071.
RSP: 002b:00007ffe94151cf0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f6db39331b0
RDX: 0000000000000004 RSI: 00007ffe94151cfc RDI: 0000000000000001
RBP: 00007ffe94151da0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000059 R11: 0000000000000246 R12: 00005598fc0010d0
R13: 00007ffe94151ea0 R14: 0000000000000000 R15: 0000000000000000
irq event stamp: 4150
hardirqs last enabled at (4149): [<ffffffffa583080e>] lock_page_memcg+0xbe/0x170 mm/memcontrol.c:2154
hardirqs last disabled at (4150): [<ffffffffa633219b>] sysvec_call_function_single+0xb/0xc0 arch/x86/kernel/smp.c:243
softirqs last enabled at (3846): [<ffffffffa566f621>] invoke_softirq kernel/softirq.c:221 [inline]
softirqs last enabled at (3846): [<ffffffffa566f621>] __irq_exit_rcu kernel/softirq.c:422 [inline]
softirqs last enabled at (3846): [<ffffffffa566f621>] irq_exit_rcu+0xe1/0x120 kernel/softirq.c:434
softirqs last disabled at (3839): [<ffffffffa566f621>] invoke_softirq kernel/softirq.c:221 [inline]
softirqs last disabled at (3839): [<ffffffffa566f621>] __irq_exit_rcu kernel/softirq.c:422 [inline]
softirqs last disabled at (3839): [<ffffffffa566f621>] irq_exit_rcu+0xe1/0x120 kernel/softirq.c:434
---[ end trace 74c79be9940ec2d3 ]---
------------[ cut here ]------------
WARNING: CPU: 3 PID: 1369 at kernel/events/core.c:2152 perf_group_detach+0xe1/0x300 kernel/events/core.c:2152
Modules linked in:
CPU: 3 PID: 1369 Comm: exe Tainted: G W 5.12.0-rc2+ #19
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:perf_group_detach+0xe1/0x300 kernel/events/core.c:2152
Code: 41 5c 41 5d 41 5e 41 5f e9 bc 54 ff ff 48 8b 87 20 02 00 00 be ff ff ff ff 48 8d 78 20 e8 27 88 b9 00 85 c0 0f 85 41 ff ff ff <0f> 0b e9 3a ff ff ff 48 8b 45 10 4c 8b 28 48 8d 58 f0 49 83 ed 10
RSP: 0000:ffff980880158f10 EFLAGS: 00010046
RAX: 0000000000000000 RBX: ffff8a5d4e6db800 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffffa6b4ccef RDI: ffffffffa6b9ab37
RBP: ffff8a5d4e6db800 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: ffff8a5d4d2accb8 R12: ffff8a606fcf0c00
R13: 0000000000000001 R14: ffff8a5d46534a00 R15: ffff8a606fcf0c08
FS:  0000000000000000(0000) GS:ffff8a606fcc0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fd2b331e225 CR3: 00000001e0e22006 CR4: 0000000000770ee0
DR0: 0000564596006388 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 55555554
Call Trace:
 <IRQ>
 __perf_remove_from_context+0x91/0xd0 kernel/events/core.c:2335
 event_function+0xab/0x100 kernel/events/core.c:252
 remote_function kernel/events/core.c:91 [inline]
 remote_function+0x44/0x50 kernel/events/core.c:71
 flush_smp_call_function_queue+0x13a/0x1d0 kernel/smp.c:395
 __sysvec_call_function_single+0x3e/0x1c0 arch/x86/kernel/smp.c:248
 sysvec_call_function_single+0x89/0xc0 arch/x86/kernel/smp.c:243
 </IRQ>
 asm_sysvec_call_function_single+0x12/0x20 arch/x86/include/asm/idtentry.h:640
RIP: 0010:lock_page_memcg+0xc7/0x170 mm/memcontrol.c:2157
Code: 00 00 e8 6c ae e9 ff 48 c7 c6 d3 07 83 a5 58 4c 89 f7 e8 6c ab e9 ff 48 85 db 74 06 e8 22 e1 f3 ff fb 41 8b 84 24 00 0b 00 00 <85> c0 7e a7 4d 8d b4 24 70 06 00 00 4c 89 f7 e8 85 b2 b0 00 48 89
RSP: 0000:ffff980881bc7b38 EFLAGS: 00000206
RAX: 0000000000000000 RBX: 0000000000000200 RCX: 0000000000000006
RDX: 0000000000000000 RSI: ffffffffa6c1a6ed RDI: ffffffffa6b9ab37
RBP: ffffccff47891b80 R08: 0000000000000001 R09: 0000000000000001
R10: 0000000000000000 R11: ffff8a5d4d2accb8 R12: ffff8a5d403e9000
R13: ffffffffa58307d3 R14: ffff8a5d403e9688 R15: ffff8a5d47067128
 page_remove_rmap+0xc/0xb0 mm/rmap.c:1348
 zap_pte_range mm/memory.c:1276 [inline]
 zap_pmd_range mm/memory.c:1380 [inline]
 zap_pud_range mm/memory.c:1409 [inline]
 zap_p4d_range mm/memory.c:1430 [inline]
 unmap_page_range+0x612/0xb00 mm/memory.c:1451
 unmap_vmas+0xbe/0x150 mm/memory.c:1528
 exit_mmap+0x8f/0x1d0 mm/mmap.c:3218
 __mmput kernel/fork.c:1082 [inline]
 mmput+0x3c/0xe0 kernel/fork.c:1103
 exit_mm kernel/exit.c:501 [inline]
 do_exit+0x369/0xb60 kernel/exit.c:812
 do_group_exit+0x34/0xb0 kernel/exit.c:922
 get_signal+0x170/0xc80 kernel/signal.c:2775
 arch_do_signal_or_restart+0xea/0x740 arch/x86/kernel/signal.c:811
 handle_signal_work kernel/entry/common.c:147 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
 exit_to_user_mode_prepare+0x10f/0x190 kernel/entry/common.c:208
 irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:314
 asm_sysvec_reschedule_ipi+0x12/0x20 arch/x86/include/asm/idtentry.h:637
RIP: 0033:0x5598fc00409b
Code: Unable to access opcode bytes at RIP 0x5598fc004071.
RSP: 002b:00007ffe94151cf0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f6db39331b0
RDX: 0000000000000004 RSI: 00007ffe94151cfc RDI: 0000000000000001
RBP: 00007ffe94151da0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000059 R11: 0000000000000246 R12: 00005598fc0010d0
R13: 00007ffe94151ea0 R14: 0000000000000000 R15: 0000000000000000
irq event stamp: 4150
hardirqs last enabled at (4149): [<ffffffffa583080e>] lock_page_memcg+0xbe/0x170 mm/memcontrol.c:2154
hardirqs last disabled at (4150): [<ffffffffa633219b>] sysvec_call_function_single+0xb/0xc0 arch/x86/kernel/smp.c:243
softirqs last enabled at (3846): [<ffffffffa566f621>] invoke_softirq kernel/softirq.c:221 [inline]
softirqs last enabled at (3846): [<ffffffffa566f621>] __irq_exit_rcu kernel/softirq.c:422 [inline]
softirqs last enabled at (3846): [<ffffffffa566f621>] irq_exit_rcu+0xe1/0x120 kernel/softirq.c:434
softirqs last disabled at (3839): [<ffffffffa566f621>] invoke_softirq kernel/softirq.c:221 [inline]
softirqs last disabled at (3839): [<ffffffffa566f621>] __irq_exit_rcu kernel/softirq.c:422 [inline]
softirqs last disabled at (3839): [<ffffffffa566f621>] irq_exit_rcu+0xe1/0x120 kernel/softirq.c:434
---[ end trace 74c79be9940ec2d4 ]---
------------[ cut here ]------------
WARNING: CPU: 3 PID: 1369 at kernel/events/core.c:1993 list_del_event+0xaf/0x110 kernel/events/core.c:1993
Modules linked in:
CPU: 3 PID: 1369 Comm: exe Tainted: G W 5.12.0-rc2+ #19
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:list_del_event+0xaf/0x110 kernel/events/core.c:1993
Code: 00 00 01 eb ba be ff ff ff ff 48 89 ef e8 b9 fe ff ff eb db 48 8d 7b 20 be ff ff ff ff e8 39 1d ba 00 85 c0 0f 85 72 ff ff ff <0f> 0b e9 6b ff ff ff 48 8d 83 e8 00 00 00 f6 85 08 01 00 00 04 48
RSP: 0000:ffff980880158f28 EFLAGS: 00010046
RAX: 0000000000000000 RBX: ffff8a5d46534a00 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffffa6b4ccef RDI: ffffffffa6b9ab37
RBP: ffff8a5d4e6db800 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: ffff8a5d4d2accb8 R12: ffff8a606fcf0c00
R13: 0000000000000001 R14: ffff8a5d46534a00 R15: ffff8a606fcf0c08
FS:  0000000000000000(0000) GS:ffff8a606fcc0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fd2b331e225 CR3: 00000001e0e22006 CR4: 0000000000770ee0
DR0: 0000564596006388 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 55555554
Call Trace:
 <IRQ>
 __perf_remove_from_context+0x3a/0xd0 kernel/events/core.c:2336
 event_function+0xab/0x100 kernel/events/core.c:252
 remote_function kernel/events/core.c:91 [inline]
 remote_function+0x44/0x50 kernel/events/core.c:71
 flush_smp_call_function_queue+0x13a/0x1d0 kernel/smp.c:395
 __sysvec_call_function_single+0x3e/0x1c0 arch/x86/kernel/smp.c:248
 sysvec_call_function_single+0x89/0xc0 arch/x86/kernel/smp.c:243
 </IRQ>
 asm_sysvec_call_function_single+0x12/0x20 arch/x86/include/asm/idtentry.h:640
RIP: 0010:lock_page_memcg+0xc7/0x170 mm/memcontrol.c:2157
Code: 00 00 e8 6c ae e9 ff 48 c7 c6 d3 07 83 a5 58 4c 89 f7 e8 6c ab e9 ff 48 85 db 74 06 e8 22 e1 f3 ff fb 41 8b 84 24 00 0b 00 00 <85> c0 7e a7 4d 8d b4 24 70 06 00 00 4c 89 f7 e8 85 b2 b0 00 48 89
RSP: 0000:ffff980881bc7b38 EFLAGS: 00000206
RAX: 0000000000000000 RBX: 0000000000000200 RCX: 0000000000000006
RDX: 0000000000000000 RSI: ffffffffa6c1a6ed RDI: ffffffffa6b9ab37
RBP: ffffccff47891b80 R08: 0000000000000001 R09: 0000000000000001
R10: 0000000000000000 R11: ffff8a5d4d2accb8 R12: ffff8a5d403e9000
R13: ffffffffa58307d3 R14: ffff8a5d403e9688 R15: ffff8a5d47067128
 page_remove_rmap+0xc/0xb0 mm/rmap.c:1348
 zap_pte_range mm/memory.c:1276 [inline]
 zap_pmd_range mm/memory.c:1380 [inline]
 zap_pud_range mm/memory.c:1409 [inline]
 zap_p4d_range mm/memory.c:1430 [inline]
 unmap_page_range+0x612/0xb00 mm/memory.c:1451
 unmap_vmas+0xbe/0x150 mm/memory.c:1528
 exit_mmap+0x8f/0x1d0 mm/mmap.c:3218
 __mmput kernel/fork.c:1082 [inline]
 mmput+0x3c/0xe0 kernel/fork.c:1103
 exit_mm kernel/exit.c:501 [inline]
 do_exit+0x369/0xb60 kernel/exit.c:812
 do_group_exit+0x34/0xb0 kernel/exit.c:922
 get_signal+0x170/0xc80 kernel/signal.c:2775
 arch_do_signal_or_restart+0xea/0x740 arch/x86/kernel/signal.c:811
 handle_signal_work kernel/entry/common.c:147 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
 exit_to_user_mode_prepare+0x10f/0x190 kernel/entry/common.c:208
 irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:314
 asm_sysvec_reschedule_ipi+0x12/0x20 arch/x86/include/asm/idtentry.h:637
RIP: 0033:0x5598fc00409b
Code: Unable to access opcode bytes at RIP 0x5598fc004071.
RSP: 002b:00007ffe94151cf0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f6db39331b0
RDX: 0000000000000004 RSI: 00007ffe94151cfc RDI: 0000000000000001
RBP: 00007ffe94151da0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000059 R11: 0000000000000246 R12: 00005598fc0010d0
R13: 00007ffe94151ea0 R14: 0000000000000000 R15: 0000000000000000
irq event stamp: 4150
hardirqs last enabled at (4149): [<ffffffffa583080e>] lock_page_memcg+0xbe/0x170 mm/memcontrol.c:2154
hardirqs last disabled at (4150): [<ffffffffa633219b>] sysvec_call_function_single+0xb/0xc0 arch/x86/kernel/smp.c:243
softirqs last enabled at (3846): [<ffffffffa566f621>] invoke_softirq kernel/softirq.c:221 [inline]
softirqs last enabled at (3846): [<ffffffffa566f621>] __irq_exit_rcu kernel/softirq.c:422 [inline]
softirqs last enabled at (3846): [<ffffffffa566f621>] irq_exit_rcu+0xe1/0x120 kernel/softirq.c:434
softirqs last disabled at (3839): [<ffffffffa566f621>] invoke_softirq kernel/softirq.c:221 [inline]
softirqs last disabled at (3839): [<ffffffffa566f621>] __irq_exit_rcu kernel/softirq.c:422 [inline]
softirqs last disabled at (3839): [<ffffffffa566f621>] irq_exit_rcu+0xe1/0x120 kernel/softirq.c:434
---[ end trace 74c79be9940ec2d5 ]---
On Wed, Mar 10, 2021 at 11:41:34AM +0100, Marco Elver wrote:
> Adds bit perf_event_attr::remove_on_exec, to support removing an event
> from a task on exec.
>
> This option supports the case where an event is supposed to be
> process-wide only, and should not propagate beyond exec, to limit
> monitoring to the original process image only.
>
> Signed-off-by: Marco Elver <elver@google.com>

> +/*
> + * Removes all events from the current task that have been marked
> + * remove-on-exec, and feeds their values back to parent events.
> + */
> +static void perf_event_remove_on_exec(void)
> +{
> +	int ctxn;
> +
> +	for_each_task_context_nr(ctxn) {
> +		struct perf_event_context *ctx;
> +		struct perf_event *event, *next;
> +
> +		ctx = perf_pin_task_context(current, ctxn);
> +		if (!ctx)
> +			continue;
> +		mutex_lock(&ctx->mutex);
> +
> +		list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) {
> +			if (!event->attr.remove_on_exec)
> +				continue;
> +
> +			if (!is_kernel_event(event))
> +				perf_remove_from_owner(event);
> +			perf_remove_from_context(event, DETACH_GROUP);
There's a comment on this in perf_event_exit_event(): if this task happens to have the original event, then DETACH_GROUP will destroy the grouping.

I think this wants to be:

	perf_remove_from_context(event, child_event->parent ? DETACH_GROUP : 0);
or something.
> +			/*
> +			 * Remove the event and feed back its values to the
> +			 * parent event.
> +			 */
> +			perf_event_exit_event(event, ctx, current);
Oooh, and here we call it... but it will do list_del_event() / perf_group_detach() *again*.
So the problem is that perf_event_exit_task_context() doesn't use remove_from_context(), but instead does task_ctx_sched_out() and then relies on the events not being active.
Whereas above you *DO* use remove_from_context(), but then perf_event_exit_event() will try and remove it again.
> +		}
> +		mutex_unlock(&ctx->mutex);
> +		perf_unpin_context(ctx);
> +		put_ctx(ctx);
> +	}
> +}
On Tue, Mar 16, 2021 at 05:22PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 10, 2021 at 11:41:34AM +0100, Marco Elver wrote:
> > Adds bit perf_event_attr::remove_on_exec, to support removing an event
> > from a task on exec.
> >
> > This option supports the case where an event is supposed to be
> > process-wide only, and should not propagate beyond exec, to limit
> > monitoring to the original process image only.
> >
> > Signed-off-by: Marco Elver <elver@google.com>
>
> > +/*
> > + * Removes all events from the current task that have been marked
> > + * remove-on-exec, and feeds their values back to parent events.
> > + */
> > +static void perf_event_remove_on_exec(void)
> > +{
> > +	int ctxn;
> > +
> > +	for_each_task_context_nr(ctxn) {
> > +		struct perf_event_context *ctx;
> > +		struct perf_event *event, *next;
> > +
> > +		ctx = perf_pin_task_context(current, ctxn);
> > +		if (!ctx)
> > +			continue;
> > +		mutex_lock(&ctx->mutex);
> > +
> > +		list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) {
> > +			if (!event->attr.remove_on_exec)
> > +				continue;
> > +
> > +			if (!is_kernel_event(event))
> > +				perf_remove_from_owner(event);
> > +			perf_remove_from_context(event, DETACH_GROUP);
>
> There's a comment on this in perf_event_exit_event(): if this task
> happens to have the original event, then DETACH_GROUP will destroy
> the grouping.
>
> I think this wants to be:
>
> 	perf_remove_from_context(event, child_event->parent ? DETACH_GROUP : 0);
>
> or something.
>
> > +			/*
> > +			 * Remove the event and feed back its values to the
> > +			 * parent event.
> > +			 */
> > +			perf_event_exit_event(event, ctx, current);
>
> Oooh, and here we call it... but it will do list_del_event() /
> perf_group_detach() *again*.
>
> So the problem is that perf_event_exit_task_context() doesn't use
> remove_from_context(), but instead does task_ctx_sched_out() and then
> relies on the events not being active.
>
> Whereas above you *DO* use remove_from_context(), but then
> perf_event_exit_event() will try and remove it again.
AFAIK, we want to deallocate the events and not just remove them, so doing what perf_event_exit_event() does is the right way forward? Or did you have something else in mind?
I'm still trying to make sense of the zoo of synchronisation mechanisms at play here. No matter what I try, it seems I get stuck on the fact that I can't cleanly "pause" the context to remove the events (warnings in event_function()).
This is what I've been playing with to understand:
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 450ea9415ed7..c585cef284a0 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4195,6 +4195,88 @@ static void perf_event_enable_on_exec(int ctxn)
 	put_ctx(clone_ctx);
 }
 
+static void perf_remove_from_owner(struct perf_event *event);
+static void perf_event_exit_event(struct perf_event *child_event,
+				  struct perf_event_context *child_ctx,
+				  struct task_struct *child);
+
+/*
+ * Removes all events from the current task that have been marked
+ * remove-on-exec, and feeds their values back to parent events.
+ */
+static void perf_event_remove_on_exec(void)
+{
+	struct perf_event *event, *next;
+	int ctxn;
+
+	/***************** BROKEN BROKEN BROKEN *****************/
+
+	for_each_task_context_nr(ctxn) {
+		struct perf_event_context *ctx;
+		bool removed = false;
+
+		ctx = perf_pin_task_context(current, ctxn);
+		if (!ctx)
+			continue;
+		mutex_lock(&ctx->mutex);
+
+		raw_spin_lock_irq(&ctx->lock);
+		/*
+		 * WIP: Ok, we will unschedule the context, _and_ tell everyone
+		 * still trying to use that it's dead... even though it isn't.
+		 *
+		 * This can't be right...
+		 */
+		task_ctx_sched_out(__get_cpu_context(ctx), ctx, EVENT_ALL);
+		RCU_INIT_POINTER(current->perf_event_ctxp[ctxn], NULL);
+		WRITE_ONCE(ctx->task, TASK_TOMBSTONE);
This code here is obviously bogus, because it removes the context from the task: we might still need it since this task is not dead yet.
What's the right way to pause the context to remove the events from it?
+		raw_spin_unlock_irq(&ctx->lock);
+
+		list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) {
+			if (!event->attr.remove_on_exec)
+				continue;
+			removed = true;
+
+			if (!is_kernel_event(event))
+				perf_remove_from_owner(event);
+
+			/*
+			 * WIP: Want to free the event and feed back its values
+			 * to the parent (if any) ...
+			 */
+			perf_event_exit_event(event, ctx, current);
+		}
+
... need to schedule context back in here?
+
+		mutex_unlock(&ctx->mutex);
+		perf_unpin_context(ctx);
+		put_ctx(ctx);
+	}
+}
+
 struct perf_read_data {
 	struct perf_event *event;
 	bool group;
@@ -7553,6 +7635,8 @@ void perf_event_exec(void)
 					 true);
 	}
 	rcu_read_unlock();
+
+	perf_event_remove_on_exec();
 }
Thanks,
-- Marco
Introduces the TRAP_PERF si_code, and associated siginfo_t field si_perf. These will be used by the perf event subsystem to send signals (if requested) to the task where an event occurred.
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Acked-by: Arnd Bergmann <arnd@arndb.de> # asm-generic
Signed-off-by: Marco Elver <elver@google.com>
---
 arch/m68k/kernel/signal.c          |  3 +++
 arch/x86/kernel/signal_compat.c    |  5 ++++-
 fs/signalfd.c                      |  4 ++++
 include/linux/compat.h             |  2 ++
 include/linux/signal.h             |  1 +
 include/uapi/asm-generic/siginfo.h |  6 +++++-
 include/uapi/linux/signalfd.h      |  4 +++-
 kernel/signal.c                    | 11 +++++++++++
 8 files changed, 33 insertions(+), 3 deletions(-)
diff --git a/arch/m68k/kernel/signal.c b/arch/m68k/kernel/signal.c
index 349570f16a78..a4b7ee1df211 100644
--- a/arch/m68k/kernel/signal.c
+++ b/arch/m68k/kernel/signal.c
@@ -622,6 +622,9 @@ static inline void siginfo_build_tests(void)
 	/* _sigfault._addr_pkey */
 	BUILD_BUG_ON(offsetof(siginfo_t, si_pkey) != 0x12);
 
+	/* _sigfault._perf */
+	BUILD_BUG_ON(offsetof(siginfo_t, si_perf) != 0x10);
+
 	/* _sigpoll */
 	BUILD_BUG_ON(offsetof(siginfo_t, si_band) != 0x0c);
 	BUILD_BUG_ON(offsetof(siginfo_t, si_fd) != 0x10);
diff --git a/arch/x86/kernel/signal_compat.c b/arch/x86/kernel/signal_compat.c
index a5330ff498f0..0e5d0a7e203b 100644
--- a/arch/x86/kernel/signal_compat.c
+++ b/arch/x86/kernel/signal_compat.c
@@ -29,7 +29,7 @@ static inline void signal_compat_build_tests(void)
 	BUILD_BUG_ON(NSIGFPE  != 15);
 	BUILD_BUG_ON(NSIGSEGV != 9);
 	BUILD_BUG_ON(NSIGBUS  != 5);
-	BUILD_BUG_ON(NSIGTRAP != 5);
+	BUILD_BUG_ON(NSIGTRAP != 6);
 	BUILD_BUG_ON(NSIGCHLD != 6);
 	BUILD_BUG_ON(NSIGSYS  != 2);
 
@@ -138,6 +138,9 @@ static inline void signal_compat_build_tests(void)
 	BUILD_BUG_ON(offsetof(siginfo_t, si_pkey) != 0x20);
 	BUILD_BUG_ON(offsetof(compat_siginfo_t, si_pkey) != 0x14);
 
+	BUILD_BUG_ON(offsetof(siginfo_t, si_perf) != 0x18);
+	BUILD_BUG_ON(offsetof(compat_siginfo_t, si_perf) != 0x10);
+
 	CHECK_CSI_OFFSET(_sigpoll);
 	CHECK_CSI_SIZE  (_sigpoll, 2*sizeof(int));
 	CHECK_SI_SIZE   (_sigpoll, 4*sizeof(int));
diff --git a/fs/signalfd.c b/fs/signalfd.c
index 456046e15873..040a1142915f 100644
--- a/fs/signalfd.c
+++ b/fs/signalfd.c
@@ -134,6 +134,10 @@ static int signalfd_copyinfo(struct signalfd_siginfo __user *uinfo,
 #endif
 		new.ssi_addr_lsb = (short) kinfo->si_addr_lsb;
 		break;
+	case SIL_PERF_EVENT:
+		new.ssi_addr = (long) kinfo->si_addr;
+		new.ssi_perf = kinfo->si_perf;
+		break;
 	case SIL_CHLD:
 		new.ssi_pid = kinfo->si_pid;
 		new.ssi_uid = kinfo->si_uid;
diff --git a/include/linux/compat.h b/include/linux/compat.h
index 6e65be753603..c8821d966812 100644
--- a/include/linux/compat.h
+++ b/include/linux/compat.h
@@ -236,6 +236,8 @@ typedef struct compat_siginfo {
 					char _dummy_pkey[__COMPAT_ADDR_BND_PKEY_PAD];
 					u32 _pkey;
 				} _addr_pkey;
+				/* used when si_code=TRAP_PERF */
+				compat_u64 _perf;
 			};
 		} _sigfault;
diff --git a/include/linux/signal.h b/include/linux/signal.h
index 205526c4003a..1e98548d7cf6 100644
--- a/include/linux/signal.h
+++ b/include/linux/signal.h
@@ -43,6 +43,7 @@ enum siginfo_layout {
 	SIL_FAULT_MCEERR,
 	SIL_FAULT_BNDERR,
 	SIL_FAULT_PKUERR,
+	SIL_PERF_EVENT,
 	SIL_CHLD,
 	SIL_RT,
 	SIL_SYS,
diff --git a/include/uapi/asm-generic/siginfo.h b/include/uapi/asm-generic/siginfo.h
index d2597000407a..d0bb9125c853 100644
--- a/include/uapi/asm-generic/siginfo.h
+++ b/include/uapi/asm-generic/siginfo.h
@@ -91,6 +91,8 @@ union __sifields {
 				char _dummy_pkey[__ADDR_BND_PKEY_PAD];
 				__u32 _pkey;
 			} _addr_pkey;
+			/* used when si_code=TRAP_PERF */
+			__u64 _perf;
 		};
 	} _sigfault;
 
@@ -155,6 +157,7 @@ typedef struct siginfo {
 #define si_lower	_sifields._sigfault._addr_bnd._lower
 #define si_upper	_sifields._sigfault._addr_bnd._upper
 #define si_pkey		_sifields._sigfault._addr_pkey._pkey
+#define si_perf		_sifields._sigfault._perf
 #define si_band		_sifields._sigpoll._band
 #define si_fd		_sifields._sigpoll._fd
 #define si_call_addr	_sifields._sigsys._call_addr
@@ -253,7 +256,8 @@ typedef struct siginfo {
 #define TRAP_BRANCH	3	/* process taken branch trap */
 #define TRAP_HWBKPT	4	/* hardware breakpoint/watchpoint */
 #define TRAP_UNK	5	/* undiagnosed trap */
-#define NSIGTRAP	5
+#define TRAP_PERF	6	/* perf event with sigtrap=1 */
+#define NSIGTRAP	6
 
 /*
  * There is an additional set of SIGTRAP si_codes used by ptrace
diff --git a/include/uapi/linux/signalfd.h b/include/uapi/linux/signalfd.h
index 83429a05b698..7e333042c7e3 100644
--- a/include/uapi/linux/signalfd.h
+++ b/include/uapi/linux/signalfd.h
@@ -39,6 +39,8 @@ struct signalfd_siginfo {
 	__s32 ssi_syscall;
 	__u64 ssi_call_addr;
 	__u32 ssi_arch;
+	__u32 __pad3;
+	__u64 ssi_perf;
 
 	/*
 	 * Pad strcture to 128 bytes. Remember to update the
@@ -49,7 +51,7 @@ struct signalfd_siginfo {
 	 * comes out of a read(2) and we really don't want to have
 	 * a compat on read(2).
 	 */
-	__u8 __pad[28];
+	__u8 __pad[16];
 };
diff --git a/kernel/signal.c b/kernel/signal.c
index ba4d1ef39a9e..f68351825e5e 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -1199,6 +1199,7 @@ static inline bool has_si_pid_and_uid(struct kernel_siginfo *info)
 	case SIL_FAULT_MCEERR:
 	case SIL_FAULT_BNDERR:
 	case SIL_FAULT_PKUERR:
+	case SIL_PERF_EVENT:
 	case SIL_SYS:
 		ret = false;
 		break;
@@ -2531,6 +2532,7 @@ static void hide_si_addr_tag_bits(struct ksignal *ksig)
 	case SIL_FAULT_MCEERR:
 	case SIL_FAULT_BNDERR:
 	case SIL_FAULT_PKUERR:
+	case SIL_PERF_EVENT:
 		ksig->info.si_addr = arch_untagged_si_addr(
 			ksig->info.si_addr, ksig->sig, ksig->info.si_code);
 		break;
@@ -3333,6 +3335,10 @@ void copy_siginfo_to_external32(struct compat_siginfo *to,
 #endif
 		to->si_pkey = from->si_pkey;
 		break;
+	case SIL_PERF_EVENT:
+		to->si_addr = ptr_to_compat(from->si_addr);
+		to->si_perf = from->si_perf;
+		break;
 	case SIL_CHLD:
 		to->si_pid = from->si_pid;
 		to->si_uid = from->si_uid;
@@ -3413,6 +3419,10 @@ static int post_copy_siginfo_from_user32(kernel_siginfo_t *to,
 #endif
 		to->si_pkey = from->si_pkey;
 		break;
+	case SIL_PERF_EVENT:
+		to->si_addr = compat_ptr(from->si_addr);
+		to->si_perf = from->si_perf;
+		break;
 	case SIL_CHLD:
 		to->si_pid = from->si_pid;
 		to->si_uid = from->si_uid;
@@ -4593,6 +4603,7 @@ static inline void siginfo_buildtime_checks(void)
 	CHECK_OFFSET(si_lower);
 	CHECK_OFFSET(si_upper);
 	CHECK_OFFSET(si_pkey);
+	CHECK_OFFSET(si_perf);
 
 	/* sigpoll */
 	CHECK_OFFSET(si_band);
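User space would consume the new si_code and field from a SA_SIGINFO handler, e.g. (sketch, mirroring the sigtrap_threads kselftest at the end of the series):

static void sigtrap_handler(int sig, siginfo_t *info, void *ucontext)
{
	if (info->si_code != TRAP_PERF)
		return;	/* some other SIGTRAP */
	/* info->si_addr and info->si_perf describe the triggering event. */
}

	...
	struct sigaction action = {};

	action.sa_flags = SA_SIGINFO;
	action.sa_sigaction = sigtrap_handler;
	sigemptyset(&action.sa_mask);
	sigaction(SIGTRAP, &action, NULL);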
Adds bit perf_event_attr::sigtrap, which can be set to cause events to send SIGTRAP (with si_code TRAP_PERF) to the task where the event occurred. To distinguish perf events and allow user space to decode si_perf (if set), the event type is set in si_errno.
The primary motivation is to support synchronous signals on perf events in the task where an event (such as breakpoints) triggered.
Link: https://lore.kernel.org/lkml/YBv3rAT566k+6zjg@hirez.programming.kicks-ass.ne...
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Marco Elver <elver@google.com>
---
v2:
* Use atomic_set(&event_count, 1), since it must always be 0 in perf_pending_event_disable().
* Implicitly restrict inheriting events if sigtrap, but the child was cloned with CLONE_CLEAR_SIGHAND, because it is not generally safe if the child cleared all signal handlers to continue sending SIGTRAP.
---
 include/uapi/linux/perf_event.h |  3 ++-
 kernel/events/core.c            | 28 +++++++++++++++++++++++++++-
 2 files changed, 29 insertions(+), 2 deletions(-)
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 8c5b9f5ad63f..3a4dbb1688f0 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -391,7 +391,8 @@ struct perf_event_attr {
 				build_id       :  1, /* use build id in mmap2 events */
 				inherit_thread :  1, /* children only inherit if cloned with CLONE_THREAD */
 				remove_on_exec :  1, /* event is removed from task on exec */
-				__reserved_1   : 27;
+				sigtrap        :  1, /* send synchronous SIGTRAP on event */
+				__reserved_1   : 26;
 
 	union {
 		__u32		wakeup_events;	  /* wakeup every n events */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index bc9e6e35e414..e70c411b0b16 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6328,6 +6328,17 @@ void perf_event_wakeup(struct perf_event *event)
 	}
 }
 
+static void perf_sigtrap(struct perf_event *event)
+{
+	struct kernel_siginfo info;
+
+	clear_siginfo(&info);
+	info.si_signo = SIGTRAP;
+	info.si_code = TRAP_PERF;
+	info.si_errno = event->attr.type;
+	force_sig_info(&info);
+}
+
 static void perf_pending_event_disable(struct perf_event *event)
 {
 	int cpu = READ_ONCE(event->pending_disable);
@@ -6337,6 +6348,13 @@ static void perf_pending_event_disable(struct perf_event *event)
 
 	if (cpu == smp_processor_id()) {
 		WRITE_ONCE(event->pending_disable, -1);
+
+		if (event->attr.sigtrap) {
+			atomic_set(&event->event_limit, 1); /* rearm event */
+			perf_sigtrap(event);
+			return;
+		}
+
 		perf_event_disable_local(event);
 		return;
 	}
@@ -11367,6 +11385,9 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
 
 	event->state		= PERF_EVENT_STATE_INACTIVE;
 
+	if (event->attr.sigtrap)
+		atomic_set(&event->event_limit, 1);
+
 	if (task) {
 		event->attach_state = PERF_ATTACH_TASK;
 		/*
@@ -11645,6 +11666,9 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr,
 	if (attr->remove_on_exec && attr->enable_on_exec)
 		return -EINVAL;
 
+	if (attr->sigtrap && !attr->remove_on_exec)
+		return -EINVAL;
+
 out:
 	return ret;
 
@@ -12874,7 +12898,9 @@ inherit_task_group(struct perf_event *event, struct task_struct *parent,
 	struct perf_event_context *child_ctx;
 
 	if (!event->attr.inherit ||
-	    (event->attr.inherit_thread && !(clone_flags & CLONE_THREAD))) {
+	    (event->attr.inherit_thread && !(clone_flags & CLONE_THREAD)) ||
+	    /* Do not inherit if sigtrap and signal handlers were cleared. */
+	    (event->attr.sigtrap && (clone_flags & CLONE_CLEAR_SIGHAND))) {
 		*inherited_all = 0;
 		return 0;
 	}
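Since si_errno carries perf_event_attr::type, a handler that multiplexes several event types can dispatch on it (sketch):

	if (info->si_code == TRAP_PERF) {
		switch (info->si_errno) {	/* == perf_event_attr::type */
		case PERF_TYPE_BREAKPOINT:
			/* si_perf is filled in by the next patch */
			break;
		default:
			break;
		}
	}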
Encode information from breakpoint attributes into siginfo_t, which helps disambiguate which breakpoint fired.
Note, providing the event fd may be unreliable, since the event may have been modified (via PERF_EVENT_IOC_MODIFY_ATTRIBUTES) between the event triggering and the signal being delivered to user space.
Signed-off-by: Marco Elver <elver@google.com>
---
v2:
* Add comment about si_perf==0.
---
 kernel/events/core.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index e70c411b0b16..aa47e111435e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6336,6 +6336,22 @@ static void perf_sigtrap(struct perf_event *event)
 	info.si_signo = SIGTRAP;
 	info.si_code = TRAP_PERF;
 	info.si_errno = event->attr.type;
+
+	switch (event->attr.type) {
+	case PERF_TYPE_BREAKPOINT:
+		info.si_addr = (void *)(unsigned long)event->attr.bp_addr;
+		info.si_perf = (event->attr.bp_len << 16) | (u64)event->attr.bp_type;
+		break;
+	default:
+		/*
+		 * No additional info set (si_perf == 0).
+		 *
+		 * Adding new cases for event types to set si_perf to a
+		 * non-constant value must ensure that si_perf != 0.
+		 */
+		break;
+	}
+
 	force_sig_info(&info);
 }
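Given the encoding above ((bp_len << 16) | bp_type), a handler can recover the breakpoint parameters as follows (sketch; assumes bp_type fits in the low 16 bits, which holds for the HW_BREAKPOINT_* values):

	__u64 bp_len  = info->si_perf >> 16;	/* HW_BREAKPOINT_LEN_* */
	__u64 bp_type = info->si_perf & 0xffff;	/* HW_BREAKPOINT_{R,W,RW,X} */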
Add a kselftest for testing process-wide perf events with synchronous SIGTRAP on events (using breakpoints). In particular, we want to test that changes to the event propagate to all children, and the SIGTRAPs are in fact synchronously sent to the thread where the event occurred.
Signed-off-by: Marco Elver <elver@google.com>
---
v2:
* Patch added to series.
---
 .../testing/selftests/perf_events/.gitignore  |   2 +
 tools/testing/selftests/perf_events/Makefile  |   6 +
 tools/testing/selftests/perf_events/config    |   1 +
 tools/testing/selftests/perf_events/settings  |   1 +
 .../selftests/perf_events/sigtrap_threads.c   | 202 ++++++++++++++++++
 5 files changed, 212 insertions(+)
 create mode 100644 tools/testing/selftests/perf_events/.gitignore
 create mode 100644 tools/testing/selftests/perf_events/Makefile
 create mode 100644 tools/testing/selftests/perf_events/config
 create mode 100644 tools/testing/selftests/perf_events/settings
 create mode 100644 tools/testing/selftests/perf_events/sigtrap_threads.c
diff --git a/tools/testing/selftests/perf_events/.gitignore b/tools/testing/selftests/perf_events/.gitignore new file mode 100644 index 000000000000..4dc43e1bd79c --- /dev/null +++ b/tools/testing/selftests/perf_events/.gitignore @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0-only +sigtrap_threads diff --git a/tools/testing/selftests/perf_events/Makefile b/tools/testing/selftests/perf_events/Makefile new file mode 100644 index 000000000000..973a2c39ca83 --- /dev/null +++ b/tools/testing/selftests/perf_events/Makefile @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: GPL-2.0 +CFLAGS += -Wl,-no-as-needed -Wall -I../../../../usr/include +LDFLAGS += -lpthread + +TEST_GEN_PROGS := sigtrap_threads +include ../lib.mk diff --git a/tools/testing/selftests/perf_events/config b/tools/testing/selftests/perf_events/config new file mode 100644 index 000000000000..ba58ff2203e4 --- /dev/null +++ b/tools/testing/selftests/perf_events/config @@ -0,0 +1 @@ +CONFIG_PERF_EVENTS=y diff --git a/tools/testing/selftests/perf_events/settings b/tools/testing/selftests/perf_events/settings new file mode 100644 index 000000000000..6091b45d226b --- /dev/null +++ b/tools/testing/selftests/perf_events/settings @@ -0,0 +1 @@ +timeout=120 diff --git a/tools/testing/selftests/perf_events/sigtrap_threads.c b/tools/testing/selftests/perf_events/sigtrap_threads.c new file mode 100644 index 000000000000..7ebb9bb34c2e --- /dev/null +++ b/tools/testing/selftests/perf_events/sigtrap_threads.c @@ -0,0 +1,202 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test for perf events with SIGTRAP across all threads. + * + * Copyright (C) 2021, Google LLC. + */ + +#define _GNU_SOURCE +#include <sys/types.h> + +/* We need the latest siginfo from the kernel repo. */ +#include <asm/siginfo.h> +#define __have_siginfo_t 1 +#define __have_sigval_t 1 +#define __have_sigevent_t 1 + +#include <linux/hw_breakpoint.h> +#include <linux/perf_event.h> +#include <pthread.h> +#include <signal.h> +#include <stdatomic.h> +#include <stdbool.h> +#include <stddef.h> +#include <stdint.h> +#include <stdio.h> +#include <sys/ioctl.h> +#include <sys/syscall.h> +#include <unistd.h> + +#include "../kselftest_harness.h" + +#define NUM_THREADS 5 + +/* Data shared between test body, threads, and signal handler. */ +static struct { + int tids_want_signal; /* Which threads still want a signal. */ + int signal_count; /* Sanity check number of signals received. */ + volatile int iterate_on; /* Variable to set breakpoint on. */ + siginfo_t first_siginfo; /* First observed siginfo_t. */ +} ctx; + +static struct perf_event_attr make_event_attr(bool enabled, volatile void *addr) +{ + struct perf_event_attr attr = { + .type = PERF_TYPE_BREAKPOINT, + .size = sizeof(attr), + .sample_period = 1, + .disabled = !enabled, + .bp_addr = (long)addr, + .bp_type = HW_BREAKPOINT_RW, + .bp_len = HW_BREAKPOINT_LEN_1, + .inherit = 1, /* Children inherit events ... */ + .inherit_thread = 1, /* ... but only cloned with CLONE_THREAD. */ + .remove_on_exec = 1, /* Required by sigtrap. */ + .sigtrap = 1, /* Request synchronous SIGTRAP on event. */ + }; + return attr; +} + +static void sigtrap_handler(int signum, siginfo_t *info, void *ucontext) +{ + if (info->si_code != TRAP_PERF) { + fprintf(stderr, "%s: unexpected si_code %d\n", __func__, info->si_code); + return; + } + + /* + * The data in siginfo_t we're interested in should all be the same + * across threads. 
+ */ + if (!__atomic_fetch_add(&ctx.signal_count, 1, __ATOMIC_RELAXED)) + ctx.first_siginfo = *info; + __atomic_fetch_sub(&ctx.tids_want_signal, syscall(__NR_gettid), __ATOMIC_RELAXED); +} + +static void *test_thread(void *arg) +{ + pthread_barrier_t *barrier = (pthread_barrier_t *)arg; + pid_t tid = syscall(__NR_gettid); + int iter; + int i; + + pthread_barrier_wait(barrier); + + __atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED); + iter = ctx.iterate_on; /* read */ + for (i = 0; i < iter - 1; i++) { + __atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED); + ctx.iterate_on = iter; /* idempotent write */ + } + + return NULL; +} + +FIXTURE(sigtrap_threads) +{ + struct sigaction oldact; + pthread_t threads[NUM_THREADS]; + pthread_barrier_t barrier; + int fd; +}; + +FIXTURE_SETUP(sigtrap_threads) +{ + struct perf_event_attr attr = make_event_attr(false, &ctx.iterate_on); + struct sigaction action = {}; + int i; + + memset(&ctx, 0, sizeof(ctx)); + + /* Initialize sigtrap handler. */ + action.sa_flags = SA_SIGINFO | SA_NODEFER; + action.sa_sigaction = sigtrap_handler; + sigemptyset(&action.sa_mask); + ASSERT_EQ(sigaction(SIGTRAP, &action, &self->oldact), 0); + + /* Initialize perf event. */ + self->fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, PERF_FLAG_FD_CLOEXEC); + ASSERT_NE(self->fd, -1); + + /* Spawn threads inheriting perf event. */ + pthread_barrier_init(&self->barrier, NULL, NUM_THREADS + 1); + for (i = 0; i < NUM_THREADS; i++) + ASSERT_EQ(pthread_create(&self->threads[i], NULL, test_thread, &self->barrier), 0); +} + +FIXTURE_TEARDOWN(sigtrap_threads) +{ + pthread_barrier_destroy(&self->barrier); + close(self->fd); + sigaction(SIGTRAP, &self->oldact, NULL); +} + +static void run_test_threads(struct __test_metadata *_metadata, + FIXTURE_DATA(sigtrap_threads) *self) +{ + int i; + + pthread_barrier_wait(&self->barrier); + for (i = 0; i < NUM_THREADS; i++) + ASSERT_EQ(pthread_join(self->threads[i], NULL), 0); +} + +TEST_F(sigtrap_threads, remain_disabled) +{ + run_test_threads(_metadata, self); + EXPECT_EQ(ctx.signal_count, 0); + EXPECT_NE(ctx.tids_want_signal, 0); +} + +TEST_F(sigtrap_threads, enable_event) +{ + EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0); + run_test_threads(_metadata, self); + + EXPECT_EQ(ctx.signal_count, NUM_THREADS); + EXPECT_EQ(ctx.tids_want_signal, 0); + EXPECT_EQ(ctx.first_siginfo.si_addr, &ctx.iterate_on); + EXPECT_EQ(ctx.first_siginfo.si_errno, PERF_TYPE_BREAKPOINT); + EXPECT_EQ(ctx.first_siginfo.si_perf, (HW_BREAKPOINT_LEN_1 << 16) | HW_BREAKPOINT_RW); + + /* Check enabled for parent. */ + ctx.iterate_on = 0; + EXPECT_EQ(ctx.signal_count, NUM_THREADS + 1); +} + +/* Test that modification propagates to all inherited events. */ +TEST_F(sigtrap_threads, modify_and_enable_event) +{ + struct perf_event_attr new_attr = make_event_attr(true, &ctx.iterate_on); + + EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_MODIFY_ATTRIBUTES, &new_attr), 0); + run_test_threads(_metadata, self); + + EXPECT_EQ(ctx.signal_count, NUM_THREADS); + EXPECT_EQ(ctx.tids_want_signal, 0); + EXPECT_EQ(ctx.first_siginfo.si_addr, &ctx.iterate_on); + EXPECT_EQ(ctx.first_siginfo.si_errno, PERF_TYPE_BREAKPOINT); + EXPECT_EQ(ctx.first_siginfo.si_perf, (HW_BREAKPOINT_LEN_1 << 16) | HW_BREAKPOINT_RW); + + /* Check enabled for parent. */ + ctx.iterate_on = 0; + EXPECT_EQ(ctx.signal_count, NUM_THREADS + 1); +} + +/* Stress test event + signal handling. 
*/ +TEST_F(sigtrap_threads, signal_stress) +{ + ctx.iterate_on = 3000; + + EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0); + run_test_threads(_metadata, self); + EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_DISABLE, 0), 0); + + EXPECT_EQ(ctx.signal_count, NUM_THREADS * ctx.iterate_on); + EXPECT_EQ(ctx.tids_want_signal, 0); + EXPECT_EQ(ctx.first_siginfo.si_addr, &ctx.iterate_on); + EXPECT_EQ(ctx.first_siginfo.si_errno, PERF_TYPE_BREAKPOINT); + EXPECT_EQ(ctx.first_siginfo.si_perf, (HW_BREAKPOINT_LEN_1 << 16) | HW_BREAKPOINT_RW); +} + +TEST_HARNESS_MAIN
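For a minimal, single-threaded illustration of the ABI the above test exercises, the following sketch distills it (it assumes a kernel with this series applied and the same siginfo header setup as the test; it is not part of the patch):

/* Sketch: watch one variable, get a synchronous SIGTRAP on access. */
#define _GNU_SOURCE
#include <linux/hw_breakpoint.h>
#include <linux/perf_event.h>
#include <signal.h>
#include <sys/syscall.h>
#include <unistd.h>

static volatile int watched;

static void handler(int sig, siginfo_t *info, void *ucontext)
{
	if (info->si_code == TRAP_PERF)
		write(STDERR_FILENO, "SIGTRAP (perf)\n", 15);
}

int main(void)
{
	struct perf_event_attr attr = {
		.type		= PERF_TYPE_BREAKPOINT,
		.size		= sizeof(attr),
		.sample_period	= 1,
		.bp_addr	= (unsigned long)&watched,
		.bp_type	= HW_BREAKPOINT_RW,
		.bp_len		= HW_BREAKPOINT_LEN_1,
		.remove_on_exec	= 1,	/* required by sigtrap */
		.sigtrap	= 1,	/* deliver synchronous SIGTRAP on event */
	};
	struct sigaction action = {};
	int fd;

	action.sa_flags = SA_SIGINFO;
	action.sa_sigaction = handler;
	sigemptyset(&action.sa_mask);
	if (sigaction(SIGTRAP, &action, NULL))
		return 1;

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, PERF_FLAG_FD_CLOEXEC);
	if (fd < 0)
		return 1;

	watched = 1;	/* breakpoint fires here -> handler runs synchronously */

	close(fd);
	return 0;
}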
Add kselftest to test that remove_on_exec removes inherited events from child tasks.
Signed-off-by: Marco Elver elver@google.com
---
v2:
* Add patch to series.
---
 .../testing/selftests/perf_events/.gitignore |   1 +
 tools/testing/selftests/perf_events/Makefile |   2 +-
 .../selftests/perf_events/remove_on_exec.c   | 256 ++++++++++++++++++
 3 files changed, 258 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/perf_events/remove_on_exec.c
diff --git a/tools/testing/selftests/perf_events/.gitignore b/tools/testing/selftests/perf_events/.gitignore index 4dc43e1bd79c..790c47001e77 100644 --- a/tools/testing/selftests/perf_events/.gitignore +++ b/tools/testing/selftests/perf_events/.gitignore @@ -1,2 +1,3 @@ # SPDX-License-Identifier: GPL-2.0-only sigtrap_threads +remove_on_exec diff --git a/tools/testing/selftests/perf_events/Makefile b/tools/testing/selftests/perf_events/Makefile index 973a2c39ca83..fcafa5f0d34c 100644 --- a/tools/testing/selftests/perf_events/Makefile +++ b/tools/testing/selftests/perf_events/Makefile @@ -2,5 +2,5 @@ CFLAGS += -Wl,-no-as-needed -Wall -I../../../../usr/include LDFLAGS += -lpthread
-TEST_GEN_PROGS := sigtrap_threads +TEST_GEN_PROGS := sigtrap_threads remove_on_exec include ../lib.mk diff --git a/tools/testing/selftests/perf_events/remove_on_exec.c b/tools/testing/selftests/perf_events/remove_on_exec.c new file mode 100644 index 000000000000..e176b3a74d55 --- /dev/null +++ b/tools/testing/selftests/perf_events/remove_on_exec.c @@ -0,0 +1,256 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test for remove_on_exec. + * + * Copyright (C) 2021, Google LLC. + */ + +#define _GNU_SOURCE +#include <sys/types.h> + +/* We need the latest siginfo from the kernel repo. */ +#include <asm/siginfo.h> +#define __have_siginfo_t 1 +#define __have_sigval_t 1 +#define __have_sigevent_t 1 + +#include <linux/perf_event.h> +#include <pthread.h> +#include <signal.h> +#include <stdatomic.h> +#include <stdbool.h> +#include <stddef.h> +#include <stdint.h> +#include <stdio.h> +#include <sys/ioctl.h> +#include <sys/syscall.h> +#include <unistd.h> + +#include "../kselftest_harness.h" + +static volatile int signal_count; + +static struct perf_event_attr make_event_attr(void) +{ + struct perf_event_attr attr = { + .type = PERF_TYPE_HARDWARE, + .size = sizeof(attr), + .config = PERF_COUNT_HW_INSTRUCTIONS, + .sample_period = 1000, + .exclude_kernel = 1, + .exclude_hv = 1, + .disabled = 1, + .inherit = 1, + /* + * Children normally retain their inherited event on exec; with + * remove_on_exec, we'll remove their event, but the parent and + * any other non-exec'd children will keep their events. + */ + .remove_on_exec = 1, + .sigtrap = 1, + }; + return attr; +} + +static void sigtrap_handler(int signum, siginfo_t *info, void *ucontext) +{ + if (info->si_code != TRAP_PERF) { + fprintf(stderr, "%s: unexpected si_code %d\n", __func__, info->si_code); + return; + } + + signal_count++; +} + +FIXTURE(remove_on_exec) +{ + struct sigaction oldact; + int fd; +}; + +FIXTURE_SETUP(remove_on_exec) +{ + struct perf_event_attr attr = make_event_attr(); + struct sigaction action = {}; + + signal_count = 0; + + /* Initialize sigtrap handler. */ + action.sa_flags = SA_SIGINFO | SA_NODEFER; + action.sa_sigaction = sigtrap_handler; + sigemptyset(&action.sa_mask); + ASSERT_EQ(sigaction(SIGTRAP, &action, &self->oldact), 0); + + /* Initialize perf event. */ + self->fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, PERF_FLAG_FD_CLOEXEC); + ASSERT_NE(self->fd, -1); +} + +FIXTURE_TEARDOWN(remove_on_exec) +{ + close(self->fd); + sigaction(SIGTRAP, &self->oldact, NULL); +} + +/* Verify event propagates to fork'd child. */ +TEST_F(remove_on_exec, fork_only) +{ + int status; + pid_t pid = fork(); + + if (pid == 0) { + ASSERT_EQ(signal_count, 0); + ASSERT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0); + while (!signal_count); + _exit(42); + } + + while (!signal_count); /* Child enables event. */ + EXPECT_EQ(waitpid(pid, &status, 0), pid); + EXPECT_EQ(WEXITSTATUS(status), 42); +} + +/* + * Verify that event does _not_ propagate to fork+exec'd child; event enabled + * after fork+exec. + */ +TEST_F(remove_on_exec, fork_exec_then_enable) +{ + pid_t pid_exec, pid_only_fork; + int pipefd[2]; + int tmp; + + /* + * Non-exec child, to ensure exec does not affect inherited events of + * other children. + */ + pid_only_fork = fork(); + if (pid_only_fork == 0) { + /* Block until parent enables event. 
*/ + while (!signal_count); + _exit(42); + } + + ASSERT_NE(pipe(pipefd), -1); + pid_exec = fork(); + if (pid_exec == 0) { + ASSERT_NE(dup2(pipefd[1], STDOUT_FILENO), -1); + close(pipefd[0]); + execl("/proc/self/exe", "exec_child", NULL); + _exit((perror("exec failed"), 1)); + } + close(pipefd[1]); + + ASSERT_EQ(waitpid(pid_exec, &tmp, WNOHANG), 0); /* Child is running. */ + /* Wait for exec'd child to start spinning. */ + EXPECT_EQ(read(pipefd[0], &tmp, sizeof(int)), sizeof(int)); + EXPECT_EQ(tmp, 42); + close(pipefd[0]); + /* Now we can enable the event, knowing the child is doing work. */ + EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0); + /* If the event propagated to the exec'd child, it will exit normally... */ + usleep(100000); /* ... give time for event to trigger (in case of bug). */ + EXPECT_EQ(waitpid(pid_exec, &tmp, WNOHANG), 0); /* Should still be running. */ + EXPECT_EQ(kill(pid_exec, SIGKILL), 0); + + /* Verify removal from child did not affect this task's event. */ + tmp = signal_count; + while (signal_count == tmp); /* Should not hang! */ + /* Nor should it have affected the first child. */ + EXPECT_EQ(waitpid(pid_only_fork, &tmp, 0), pid_only_fork); + EXPECT_EQ(WEXITSTATUS(tmp), 42); +} + +/* + * Verify that event does _not_ propagate to fork+exec'd child; event enabled + * before fork+exec. + */ +TEST_F(remove_on_exec, enable_then_fork_exec) +{ + pid_t pid_exec; + int tmp; + + EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0); + + pid_exec = fork(); + if (pid_exec == 0) { + execl("/proc/self/exe", "exec_child", NULL); + _exit((perror("exec failed"), 1)); + } + + /* + * The child may exit abnormally at any time if the event propagated and + * a SIGTRAP is sent before the handler was set up. + */ + usleep(100000); /* ... give time for event to trigger (in case of bug). */ + EXPECT_EQ(waitpid(pid_exec, &tmp, WNOHANG), 0); /* Should still be running. */ + EXPECT_EQ(kill(pid_exec, SIGKILL), 0); + + /* Verify removal from child did not affect this task's event. */ + tmp = signal_count; + while (signal_count == tmp); /* Should not hang! */ +} + +TEST_F(remove_on_exec, exec_stress) +{ + pid_t pids[30]; + int i, tmp; + + for (i = 0; i < sizeof(pids) / sizeof(pids[0]); i++) { + pids[i] = fork(); + if (pids[i] == 0) { + execl("/proc/self/exe", "exec_child", NULL); + _exit((perror("exec failed"), 1)); + } + + /* Some forked with event disabled, rest with enabled. */ + if (i > 10) + EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0); + } + + usleep(100000); /* ... give time for event to trigger (in case of bug). */ + + for (i = 0; i < sizeof(pids) / sizeof(pids[0]); i++) { + /* All children should still be running. */ + EXPECT_EQ(waitpid(pids[i], &tmp, WNOHANG), 0); + EXPECT_EQ(kill(pids[i], SIGKILL), 0); + } + + /* Verify event is still alive. */ + tmp = signal_count; + while (signal_count == tmp); +} + +/* For exec'd child. */ +static void exec_child(void) +{ + struct sigaction action = {}; + const int val = 42; + + /* Set up sigtrap handler in case we erroneously receive a trap. */ + action.sa_flags = SA_SIGINFO | SA_NODEFER; + action.sa_sigaction = sigtrap_handler; + sigemptyset(&action.sa_mask); + if (sigaction(SIGTRAP, &action, NULL)) + _exit((perror("sigaction failed"), 1)); + + /* Signal parent that we're starting to spin. */ + if (write(STDOUT_FILENO, &val, sizeof(int)) == -1) + _exit((perror("write failed"), 1)); + + /* Should hang here until killed. 
*/ + while (!signal_count); +} + +#define main test_main +TEST_HARNESS_MAIN +#undef main +int main(int argc, char *argv[]) +{ + if (!strcmp(argv[0], "exec_child")) { + exec_child(); + return 1; + } + + return test_main(argc, argv); +}
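One detail worth calling out: exec_child() is reached via a small self-exec trick; the test re-execs its own binary through /proc/self/exe with a magic argv[0], and main() dispatches on it. Stripped of the harness, the mechanism is just (a standalone sketch):

/* Sketch: self-exec via /proc/self/exe with argv[0] dispatch. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	if (!strcmp(argv[0], "exec_child")) {
		puts("running as re-exec'd child");
		return 0;
	}

	if (fork() == 0) {
		execl("/proc/self/exe", "exec_child", (char *)NULL);
		_exit(1);	/* only reached if exec failed */
	}
	wait(NULL);
	return 0;
}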
On Wed, Mar 10, 2021 at 11:41AM +0100, Marco Elver wrote:
Add kselftest to test that remove_on_exec removes inherited events from child tasks.
Signed-off-by: Marco Elver elver@google.com
To make the tests compatible with more recent libc, we'll need to fix them up with the below.
Also, I've seen that tools/perf/tests exists; however, it seems to be primarily about perf-tool-related tests. Is this correct?
I'd propose to keep these purely kernel-ABI-related tests separate; that way we can also make use of the kselftest framework, which also integrates into various CI systems such as kernelci.org.
Thanks, -- Marco
------ >8 ------
diff --git a/tools/testing/selftests/perf_events/remove_on_exec.c b/tools/testing/selftests/perf_events/remove_on_exec.c
index e176b3a74d55..f89d0cfdb81e 100644
--- a/tools/testing/selftests/perf_events/remove_on_exec.c
+++ b/tools/testing/selftests/perf_events/remove_on_exec.c
@@ -13,6 +13,11 @@
 #define __have_siginfo_t 1
 #define __have_sigval_t 1
 #define __have_sigevent_t 1
+#define __siginfo_t_defined
+#define __sigval_t_defined
+#define __sigevent_t_defined
+#define _BITS_SIGINFO_CONSTS_H 1
+#define _BITS_SIGEVENT_CONSTS_H 1
 
 #include <linux/perf_event.h>
 #include <pthread.h>
diff --git a/tools/testing/selftests/perf_events/sigtrap_threads.c b/tools/testing/selftests/perf_events/sigtrap_threads.c
index 7ebb9bb34c2e..b9a7d4b64b3c 100644
--- a/tools/testing/selftests/perf_events/sigtrap_threads.c
+++ b/tools/testing/selftests/perf_events/sigtrap_threads.c
@@ -13,6 +13,11 @@
 #define __have_siginfo_t 1
 #define __have_sigval_t 1
 #define __have_sigevent_t 1
+#define __siginfo_t_defined
+#define __sigval_t_defined
+#define __sigevent_t_defined
+#define _BITS_SIGINFO_CONSTS_H 1
+#define _BITS_SIGEVENT_CONSTS_H 1
 
 #include <linux/hw_breakpoint.h>
 #include <linux/perf_event.h>
On Mon, Mar 22, 2021 at 02:24:40PM +0100, Marco Elver wrote:
To make the tests compatible with more recent libc, we'll need to fix them up with the below.
OK, that reproduced things here, thanks!
The below seems to not explode instantly.... it still has the alternative version in as well (and I think it might even work too, but the one I left in seems simpler).
---
 kernel/events/core.c | 154 +++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 111 insertions(+), 43 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c index a7220e8c447e..8c0f905cc017 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -2167,8 +2172,9 @@ static void perf_group_detach(struct perf_event *event) * If this is a sibling, remove it from its group. */ if (leader != event) { + leader->nr_siblings--; list_del_init(&event->sibling_list); - event->group_leader->nr_siblings--; + event->group_leader = event; goto out; }
@@ -2182,8 +2188,9 @@ static void perf_group_detach(struct perf_event *event) if (sibling->event_caps & PERF_EV_CAP_SIBLING) perf_remove_sibling_event(sibling);
- sibling->group_leader = sibling; + leader->nr_siblings--; list_del_init(&sibling->sibling_list); + sibling->group_leader = sibling;
/* Inherit group flags from the previous leader */ sibling->group_caps = event->group_caps; @@ -2360,10 +2367,19 @@ __perf_remove_from_context(struct perf_event *event, static void perf_remove_from_context(struct perf_event *event, unsigned long flags) { struct perf_event_context *ctx = event->ctx; + bool remove;
lockdep_assert_held(&ctx->mutex);
- event_function_call(event, __perf_remove_from_context, (void *)flags); + /* + * There is concurrency vs remove_on_exec(). + */ + raw_spin_lock_irq(&ctx->lock); + remove = (event->attach_state & PERF_ATTACH_CONTEXT); + raw_spin_unlock_irq(&ctx->lock); + + if (remove) + event_function_call(event, __perf_remove_from_context, (void *)flags);
/* * The above event_function_call() can NO-OP when it hits @@ -4232,41 +4248,92 @@ static void perf_event_enable_on_exec(int ctxn) static void perf_remove_from_owner(struct perf_event *event); static void perf_event_exit_event(struct perf_event *child_event, struct perf_event_context *child_ctx, - struct task_struct *child); + struct task_struct *child, + bool removed);
/* * Removes all events from the current task that have been marked * remove-on-exec, and feeds their values back to parent events. */ -static void perf_event_remove_on_exec(void) +static void perf_event_remove_on_exec(int ctxn) { - int ctxn; + struct perf_event_context *ctx, *clone_ctx = NULL; + struct perf_event *event, *next; + LIST_HEAD(free_list); + unsigned long flags; + bool modified = false;
- for_each_task_context_nr(ctxn) { - struct perf_event_context *ctx; - struct perf_event *event, *next; + ctx = perf_pin_task_context(current, ctxn); + if (!ctx) + return;
- ctx = perf_pin_task_context(current, ctxn); - if (!ctx) + mutex_lock(&ctx->mutex); + + if (WARN_ON_ONCE(ctx->task != current)) + goto unlock; + + list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) { + if (!event->attr.remove_on_exec) continue; - mutex_lock(&ctx->mutex);
- list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) { - if (!event->attr.remove_on_exec) - continue; + if (!is_kernel_event(event)) + perf_remove_from_owner(event);
- if (!is_kernel_event(event)) - perf_remove_from_owner(event); - perf_remove_from_context(event, DETACH_GROUP); - /* - * Remove the event and feed back its values to the - * parent event. - */ - perf_event_exit_event(event, ctx, current); - } - mutex_unlock(&ctx->mutex); - put_ctx(ctx); + modified = true; + + perf_remove_from_context(event, !!event->parent * DETACH_GROUP); + perf_event_exit_event(event, ctx, current, true); + } + + raw_spin_lock_irqsave(&ctx->lock, flags); + if (modified) + clone_ctx = unclone_ctx(ctx); + --ctx->pin_count; + raw_spin_unlock_irqrestore(&ctx->lock, flags); + +#if 0 + struct perf_cpu_context *cpuctx; + + if (!modified) { + perf_unpin_context(ctx); + goto unlock; + } + + local_irq_save(flags); + cpuctx = __get_cpu_context(ctx); + perf_ctx_lock(cpuctx, ctx); + task_ctx_sched_out(cpuctx, ctx, EVENT_ALL); + + list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) { + if (!event->attr.remove_on_exec) + continue; + + if (event->parent) + perf_group_detach(event); + list_del_event(event, ctx); + + list_add(&event->active_list, &free_list); + } + + ctx_resched(cpuctx, ctx, EVENT_ALL); + + clone_ctx = unclone_ctx(ctx); + --ctx->pin_count; + perf_ctx_unlock(cpuctx, ctx); + local_irq_restore(flags); + + list_for_each_entry_safe(event, next, &free_list, active_entry) { + list_del(&event->active_entry); + perf_event_exit_event(event, ctx, current, true); } +#endif + +unlock: + mutex_unlock(&ctx->mutex); + + put_ctx(ctx); + if (clone_ctx) + put_ctx(clone_ctx); }
struct perf_read_data { @@ -7615,20 +7682,18 @@ void perf_event_exec(void) struct perf_event_context *ctx; int ctxn;
- rcu_read_lock(); for_each_task_context_nr(ctxn) { - ctx = current->perf_event_ctxp[ctxn]; - if (!ctx) - continue; - perf_event_enable_on_exec(ctxn); + perf_event_remove_on_exec(ctxn);
- perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL, - true); + rcu_read_lock(); + ctx = rcu_dereference(current->perf_event_ctxp[ctxn]); + if (ctx) { + perf_iterate_ctx(ctx, perf_event_addr_filters_exec, + NULL, true); + } + rcu_read_unlock(); } - rcu_read_unlock(); - - perf_event_remove_on_exec(); }
struct remote_output { @@ -12509,7 +12574,7 @@ static void sync_child_event(struct perf_event *child_event, static void perf_event_exit_event(struct perf_event *child_event, struct perf_event_context *child_ctx, - struct task_struct *child) + struct task_struct *child, bool removed) { struct perf_event *parent_event = child_event->parent;
@@ -12526,12 +12591,15 @@ perf_event_exit_event(struct perf_event *child_event, * and being thorough is better. */ raw_spin_lock_irq(&child_ctx->lock); - WARN_ON_ONCE(child_ctx->is_active); + if (!removed) { + WARN_ON_ONCE(child_ctx->is_active);
- if (parent_event) - perf_group_detach(child_event); - list_del_event(child_event, child_ctx); - perf_event_set_state(child_event, PERF_EVENT_STATE_EXIT); /* is_event_hup() */ + if (parent_event) + perf_group_detach(child_event); + list_del_event(child_event, child_ctx); + } + if (child_event->state >= PERF_EVENT_STATE_EXIT) + perf_event_set_state(child_event, PERF_EVENT_STATE_EXIT); /* is_event_hup() */ raw_spin_unlock_irq(&child_ctx->lock);
/* @@ -12617,7 +12685,7 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn) perf_event_task(child, child_ctx, 0);
list_for_each_entry_safe(child_event, next, &child_ctx->event_list, event_entry) - perf_event_exit_event(child_event, child_ctx, child); + perf_event_exit_event(child_event, child_ctx, child, false);
mutex_unlock(&child_ctx->mutex);
On Mon, Mar 22, 2021 at 05:42PM +0100, Peter Zijlstra wrote:
On Mon, Mar 22, 2021 at 02:24:40PM +0100, Marco Elver wrote:
To make the tests compatible with more recent libc, we'll need to fix them up with the below.
OK, that reproduced things here, thanks!
The below seems to not explode instantly.... it still has the alternative version in as well (and I think it might even work too, but the one I left in seems simpler).
Thanks! Unfortunately neither version worked if I tortured it a little with this:
for x in {1..1000}; do ( tools/testing/selftests/perf_events/remove_on_exec & ); done
Which resulted in the 2 warnings:
WARNING: CPU: 1 PID: 795 at kernel/events/core.c:242 event_function+0xf3/0x100
WARNING: CPU: 1 PID: 795 at kernel/events/core.c:247 event_function+0xef/0x100
with efs->func==__perf_event_enable. I believe it's sufficient to add
	mutex_lock(&parent_event->child_mutex);
	list_del_init(&event->child_list);
	mutex_unlock(&parent_event->child_mutex);
right before removing from context. With the version I have now (below for completeness), extended torture with the above test results in no more warnings and the test also passes.
I'd be happy to send a non-RFC v3 with all that squashed in. To proceed, I'd need your Signed-off-by for the diff you sent (and I'd add your Co-developed-by).
Thanks, -- Marco
------ >8 ------
diff --git a/kernel/events/core.c b/kernel/events/core.c index aa47e111435e..cea7c88fe131 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -2165,8 +2165,9 @@ static void perf_group_detach(struct perf_event *event) * If this is a sibling, remove it from its group. */ if (leader != event) { + leader->nr_siblings--; list_del_init(&event->sibling_list); - event->group_leader->nr_siblings--; + event->group_leader = event; goto out; }
@@ -2180,8 +2181,9 @@ static void perf_group_detach(struct perf_event *event) if (sibling->event_caps & PERF_EV_CAP_SIBLING) perf_remove_sibling_event(sibling);
- sibling->group_leader = sibling; + leader->nr_siblings--; list_del_init(&sibling->sibling_list); + sibling->group_leader = sibling;
/* Inherit group flags from the previous leader */ sibling->group_caps = event->group_caps; @@ -2358,10 +2360,19 @@ __perf_remove_from_context(struct perf_event *event, static void perf_remove_from_context(struct perf_event *event, unsigned long flags) { struct perf_event_context *ctx = event->ctx; + bool remove;
lockdep_assert_held(&ctx->mutex);
- event_function_call(event, __perf_remove_from_context, (void *)flags); + /* + * There is concurrency vs remove_on_exec(). + */ + raw_spin_lock_irq(&ctx->lock); + remove = (event->attach_state & PERF_ATTACH_CONTEXT); + raw_spin_unlock_irq(&ctx->lock); + + if (remove) + event_function_call(event, __perf_remove_from_context, (void *)flags);
/* * The above event_function_call() can NO-OP when it hits @@ -4198,41 +4209,68 @@ static void perf_event_enable_on_exec(int ctxn) static void perf_remove_from_owner(struct perf_event *event); static void perf_event_exit_event(struct perf_event *child_event, struct perf_event_context *child_ctx, - struct task_struct *child); + struct task_struct *child, + bool removed);
/* * Removes all events from the current task that have been marked * remove-on-exec, and feeds their values back to parent events. */ -static void perf_event_remove_on_exec(void) +static void perf_event_remove_on_exec(int ctxn) { - int ctxn; + struct perf_event_context *ctx, *clone_ctx = NULL; + struct perf_event *event, *next; + LIST_HEAD(free_list); + unsigned long flags; + bool modified = false;
- for_each_task_context_nr(ctxn) { - struct perf_event_context *ctx; - struct perf_event *event, *next; + ctx = perf_pin_task_context(current, ctxn); + if (!ctx) + return;
- ctx = perf_pin_task_context(current, ctxn); - if (!ctx) + mutex_lock(&ctx->mutex); + + if (WARN_ON_ONCE(ctx->task != current)) + goto unlock; + + list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) { + struct perf_event *parent_event = event->parent; + + if (!event->attr.remove_on_exec) continue; - mutex_lock(&ctx->mutex);
- list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) { - if (!event->attr.remove_on_exec) - continue; + if (!is_kernel_event(event)) + perf_remove_from_owner(event);
- if (!is_kernel_event(event)) - perf_remove_from_owner(event); - perf_remove_from_context(event, DETACH_GROUP); + modified = true; + + if (parent_event) { /* - * Remove the event and feed back its values to the - * parent event. + * Remove event from parent, to avoid race where the + * parent concurrently iterates through its children to + * enable, disable, or otherwise modify an event. */ - perf_event_exit_event(event, ctx, current); + mutex_lock(&parent_event->child_mutex); + list_del_init(&event->child_list); + mutex_unlock(&parent_event->child_mutex); } - mutex_unlock(&ctx->mutex); - put_ctx(ctx); + + perf_remove_from_context(event, !!event->parent * DETACH_GROUP); + perf_event_exit_event(event, ctx, current, true); } + + raw_spin_lock_irqsave(&ctx->lock, flags); + if (modified) + clone_ctx = unclone_ctx(ctx); + --ctx->pin_count; + raw_spin_unlock_irqrestore(&ctx->lock, flags); + +unlock: + mutex_unlock(&ctx->mutex); + + put_ctx(ctx); + if (clone_ctx) + put_ctx(clone_ctx); }
struct perf_read_data { @@ -7581,20 +7619,18 @@ void perf_event_exec(void) struct perf_event_context *ctx; int ctxn;
- rcu_read_lock(); for_each_task_context_nr(ctxn) { - ctx = current->perf_event_ctxp[ctxn]; - if (!ctx) - continue; - perf_event_enable_on_exec(ctxn); + perf_event_remove_on_exec(ctxn);
- perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL, - true); + rcu_read_lock(); + ctx = rcu_dereference(current->perf_event_ctxp[ctxn]); + if (ctx) { + perf_iterate_ctx(ctx, perf_event_addr_filters_exec, + NULL, true); + } + rcu_read_unlock(); } - rcu_read_unlock(); - - perf_event_remove_on_exec(); }
struct remote_output { @@ -12472,7 +12508,7 @@ static void sync_child_event(struct perf_event *child_event, static void perf_event_exit_event(struct perf_event *child_event, struct perf_event_context *child_ctx, - struct task_struct *child) + struct task_struct *child, bool removed) { struct perf_event *parent_event = child_event->parent;
@@ -12489,12 +12525,15 @@ perf_event_exit_event(struct perf_event *child_event, * and being thorough is better. */ raw_spin_lock_irq(&child_ctx->lock); - WARN_ON_ONCE(child_ctx->is_active); + if (!removed) { + WARN_ON_ONCE(child_ctx->is_active);
- if (parent_event) - perf_group_detach(child_event); - list_del_event(child_event, child_ctx); - perf_event_set_state(child_event, PERF_EVENT_STATE_EXIT); /* is_event_hup() */ + if (parent_event) + perf_group_detach(child_event); + list_del_event(child_event, child_ctx); + } + if (child_event->state >= PERF_EVENT_STATE_EXIT) + perf_event_set_state(child_event, PERF_EVENT_STATE_EXIT); /* is_event_hup() */ raw_spin_unlock_irq(&child_ctx->lock);
/* @@ -12580,7 +12619,7 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn) perf_event_task(child, child_ctx, 0);
list_for_each_entry_safe(child_event, next, &child_ctx->event_list, event_entry) - perf_event_exit_event(child_event, child_ctx, child); + perf_event_exit_event(child_event, child_ctx, child, false);
mutex_unlock(&child_ctx->mutex);
On Tue, Mar 23, 2021 at 10:52:41AM +0100, Marco Elver wrote:
with efs->func==__perf_event_enable. I believe it's sufficient to add
	mutex_lock(&parent_event->child_mutex);
	list_del_init(&event->child_list);
	mutex_unlock(&parent_event->child_mutex);
right before removing from context. With the version I have now (below for completeness), extended torture with the above test results in no more warnings and the test also passes.
list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) {
struct perf_event *parent_event = event->parent;
if (!event->attr.remove_on_exec)
	continue;
if (!is_kernel_event(event))
perf_remove_from_owner(event);
modified = true;
if (parent_event) {
	/*
	 * Remove event from parent, to avoid race where the
	 * parent concurrently iterates through its children to
	 * enable, disable, or otherwise modify an event.
	 */
	mutex_lock(&parent_event->child_mutex);
	list_del_init(&event->child_list);
	mutex_unlock(&parent_event->child_mutex);
}
^^^ this, right?
But that's something perf_event_exit_event() already does. So then you're worried about the order of things.
perf_remove_from_context(event, !!event->parent * DETACH_GROUP);
perf_event_exit_event(event, ctx, current, true);
}
perf_event_release_kernel() first does perf_remove_from_context() and then clears the child_list, and that makes sense because if we're there, there's no external access anymore, the filedesc is gone and nobody will be iterating child_list anymore.
perf_event_exit_task_context() and perf_event_exit_event() OTOH seem to rely on ctx->task == TOMBSTONE to sabotage event_function_call() such that if anybody is iterating the child_list, it'll NOP out.
But here we have neither, and thus need to worry about the ordering vs. child_list iteration.
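(For reference, the sabotage mentioned above is the early return in event_function_call(); roughly paraphrased from kernel/events/core.c at the time, not part of any patch here:)

	struct task_struct *task = READ_ONCE(ctx->task);

	if (task == TASK_TOMBSTONE)
		return;	/* ctx belongs to an exiting task: NOP out */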
I suppose we should stick sync_child_event() in there as well.
And at that point there's very little value in still using perf_event_exit_event()... let me see if there's something to be done about that.
On Tue, 23 Mar 2021 at 11:32, Peter Zijlstra peterz@infradead.org wrote:
On Tue, Mar 23, 2021 at 10:52:41AM +0100, Marco Elver wrote:
with efs->func==__perf_event_enable. I believe it's sufficient to add
	mutex_lock(&parent_event->child_mutex);
	list_del_init(&event->child_list);
	mutex_unlock(&parent_event->child_mutex);
right before removing from context. With the version I have now (below for completeness), extended torture with the above test results in no more warnings and the test also passes.
list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) {
struct perf_event *parent_event = event->parent;
if (!event->attr.remove_on_exec)
	continue;
if (!is_kernel_event(event))
perf_remove_from_owner(event);
modified = true;
if (parent_event) {
	/*
	 * Remove event from parent, to avoid race where the
	 * parent concurrently iterates through its children to
	 * enable, disable, or otherwise modify an event.
	 */
	mutex_lock(&parent_event->child_mutex);
	list_del_init(&event->child_list);
	mutex_unlock(&parent_event->child_mutex);
}
^^^ this, right?
But that's something perf_event_exit_event() already does. So then you're worried about the order of things.
Correct. We somehow need to prohibit the parent from doing an event_function_call() while we potentially deactivate the context with perf_remove_from_context().
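Schematically, the suspected interleaving is (a sketch, not taken from any patch):

/*
 * exec'ing child task                     parent task (owns the fd)
 * -------------------                     --------------------------
 * perf_event_remove_on_exec()
 *   perf_remove_from_context(event)       ioctl(fd, PERF_EVENT_IOC_ENABLE)
 *     (deactivates the context)             perf_event_for_each_child()
 *                                             event_function_call(child event)
 *                                               -> WARN in event_function()
 *
 * Unlinking event->child_list under parent_event->child_mutex first means
 * the parent's child iteration can no longer observe the half-removed event.
 */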
perf_remove_from_context(event, !!event->parent * DETACH_GROUP);
perf_event_exit_event(event, ctx, current, true);
}
perf_event_release_kernel() first does perf_remove_from_context() and then clears the child_list, and that makes sense because if we're there, there's no external access anymore, the filedesc is gone and nobody will be iterating child_list anymore.
perf_event_exit_task_context() and perf_event_exit_event() OTOH seem to rely on ctx->task == TOMBSTONE to sabotage event_function_call() such that if anybody is iterating the child_list, it'll NOP out.
But here we have neither, and thus need to worry about the ordering vs. child_list iteration.
I suppose we should stick sync_child_event() in there as well.
And at that point there's very little value in still using perf_event_exit_event()... let me see if there's something to be done about that.
I don't mind dropping use of perf_event_exit_event() and open coding all of this. That would also avoid modifying perf_event_exit_event().
But I leave it to you what you think is nicest.
Thanks, -- Marco
On Tue, Mar 23, 2021 at 11:41AM +0100, Marco Elver wrote:
On Tue, 23 Mar 2021 at 11:32, Peter Zijlstra peterz@infradead.org wrote:
[...]
if (parent_event) {
	/*
	 * Remove event from parent, to avoid race where the
	 * parent concurrently iterates through its children to
	 * enable, disable, or otherwise modify an event.
	 */
	mutex_lock(&parent_event->child_mutex);
	list_del_init(&event->child_list);
	mutex_unlock(&parent_event->child_mutex);
}
^^^ this, right?
But that's something perf_event_exit_event() already does. So then you're worried about the order of things.
Correct. We somehow need to prohibit the parent from doing an event_function_call() while we potentially deactivate the context with perf_remove_from_context().
perf_remove_from_context(event, !!event->parent * DETACH_GROUP);
perf_event_exit_event(event, ctx, current, true);
}
perf_event_release_kernel() first does perf_remove_from_context() and then clears the child_list, and that makes sense because if we're there, there's no external access anymore, the filedesc is gone and nobody will be iterating child_list anymore.
perf_event_exit_task_context() and perf_event_exit_event() OTOH seem to rely on ctx->task == TOMBSTONE to sabotage event_function_call() such that if anybody is iterating the child_list, it'll NOP out.
But here we have neither, and thus need to worry about the ordering vs. child_list iteration.
I suppose we should stick sync_child_event() in there as well.
And at that point there's very little value in still using perf_event_exit_event()... let me see if there's something to be done about that.
I don't mind dropping use of perf_event_exit_event() and open coding all of this. That would also avoid modifying perf_event_exit_event().
But I leave it to you what you think is nicest.
I played a bit more with it, and the below would be the version without using perf_event_exit_event(). Perhaps it isn't too bad.
Thanks, -- Marco
------ >8 ------
diff --git a/kernel/events/core.c b/kernel/events/core.c index aa47e111435e..288b61820dab 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -2165,8 +2165,9 @@ static void perf_group_detach(struct perf_event *event) * If this is a sibling, remove it from its group. */ if (leader != event) { + leader->nr_siblings--; list_del_init(&event->sibling_list); - event->group_leader->nr_siblings--; + event->group_leader = event; goto out; }
@@ -2180,8 +2181,9 @@ static void perf_group_detach(struct perf_event *event) if (sibling->event_caps & PERF_EV_CAP_SIBLING) perf_remove_sibling_event(sibling);
- sibling->group_leader = sibling; + leader->nr_siblings--; list_del_init(&sibling->sibling_list); + sibling->group_leader = sibling;
/* Inherit group flags from the previous leader */ sibling->group_caps = event->group_caps; @@ -2358,10 +2360,19 @@ __perf_remove_from_context(struct perf_event *event, static void perf_remove_from_context(struct perf_event *event, unsigned long flags) { struct perf_event_context *ctx = event->ctx; + bool remove;
lockdep_assert_held(&ctx->mutex);
- event_function_call(event, __perf_remove_from_context, (void *)flags); + /* + * There is concurrency vs remove_on_exec(). + */ + raw_spin_lock_irq(&ctx->lock); + remove = (event->attach_state & PERF_ATTACH_CONTEXT); + raw_spin_unlock_irq(&ctx->lock); + + if (remove) + event_function_call(event, __perf_remove_from_context, (void *)flags);
/* * The above event_function_call() can NO-OP when it hits @@ -4196,43 +4207,86 @@ static void perf_event_enable_on_exec(int ctxn) }
static void perf_remove_from_owner(struct perf_event *event); -static void perf_event_exit_event(struct perf_event *child_event, - struct perf_event_context *child_ctx, - struct task_struct *child); +static void sync_child_event(struct perf_event *child_event, + struct task_struct *child); +static void free_event(struct perf_event *event);
/* * Removes all events from the current task that have been marked * remove-on-exec, and feeds their values back to parent events. */ -static void perf_event_remove_on_exec(void) +static void perf_event_remove_on_exec(int ctxn) { - int ctxn; + struct perf_event_context *ctx, *clone_ctx = NULL; + struct perf_event *event, *next; + LIST_HEAD(free_list); + unsigned long flags; + bool modified = false;
- for_each_task_context_nr(ctxn) { - struct perf_event_context *ctx; - struct perf_event *event, *next; + ctx = perf_pin_task_context(current, ctxn); + if (!ctx) + return;
- ctx = perf_pin_task_context(current, ctxn); - if (!ctx) + mutex_lock(&ctx->mutex); + + if (WARN_ON_ONCE(ctx->task != current)) + goto unlock; + + list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) { + struct perf_event *parent_event = event->parent; + + if (!event->attr.remove_on_exec) continue; - mutex_lock(&ctx->mutex);
- list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) { - if (!event->attr.remove_on_exec) - continue; + if (!is_kernel_event(event)) + perf_remove_from_owner(event); + + modified = true;
- if (!is_kernel_event(event)) - perf_remove_from_owner(event); - perf_remove_from_context(event, DETACH_GROUP); + if (parent_event) { /* - * Remove the event and feed back its values to the - * parent event. + * Remove event from parent *before* modifying contexts, + * to avoid race where the parent concurrently iterates + * through its children to enable, disable, or otherwise + * modify an event. */ - perf_event_exit_event(event, ctx, current); + + sync_child_event(event, current); + + WARN_ON_ONCE(parent_event->ctx->parent_ctx); + mutex_lock(&parent_event->child_mutex); + list_del_init(&event->child_list); + mutex_unlock(&parent_event->child_mutex); + + perf_event_wakeup(parent_event); + put_event(parent_event); } - mutex_unlock(&ctx->mutex); - put_ctx(ctx); + + perf_remove_from_context(event, !!event->parent * DETACH_GROUP); + + raw_spin_lock_irq(&ctx->lock); + WARN_ON_ONCE(ctx->is_active); + perf_event_set_state(event, PERF_EVENT_STATE_EXIT); /* is_event_hup() */ + raw_spin_unlock_irq(&ctx->lock); + + if (parent_event) + free_event(event); + else + perf_event_wakeup(event); } + + raw_spin_lock_irqsave(&ctx->lock, flags); + if (modified) + clone_ctx = unclone_ctx(ctx); + --ctx->pin_count; + raw_spin_unlock_irqrestore(&ctx->lock, flags); + +unlock: + mutex_unlock(&ctx->mutex); + + put_ctx(ctx); + if (clone_ctx) + put_ctx(clone_ctx); }
struct perf_read_data { @@ -7581,20 +7635,18 @@ void perf_event_exec(void) struct perf_event_context *ctx; int ctxn;
- rcu_read_lock(); for_each_task_context_nr(ctxn) { - ctx = current->perf_event_ctxp[ctxn]; - if (!ctx) - continue; - perf_event_enable_on_exec(ctxn); + perf_event_remove_on_exec(ctxn);
- perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL, - true); + rcu_read_lock(); + ctx = rcu_dereference(current->perf_event_ctxp[ctxn]); + if (ctx) { + perf_iterate_ctx(ctx, perf_event_addr_filters_exec, + NULL, true); + } + rcu_read_unlock(); } - rcu_read_unlock(); - - perf_event_remove_on_exec(); }
struct remote_output {
On Tue, Mar 23, 2021 at 11:32:03AM +0100, Peter Zijlstra wrote:
And at that point there's very little value in still using perf_event_exit_event()... let me see if there's something to be done about that.
I ended up with something like the below, which then simplifies remove_on_exec() to:
static void perf_event_remove_on_exec(int ctxn)
{
	struct perf_event_context *ctx, *clone_ctx = NULL;
	struct perf_event *event, *next;
	bool modified = false;
	unsigned long flags;

	ctx = perf_pin_task_context(current, ctxn);
	if (!ctx)
		return;

	mutex_lock(&ctx->mutex);

	if (WARN_ON_ONCE(ctx->task != current))
		goto unlock;

	list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) {
		if (!event->attr.remove_on_exec)
			continue;

		if (!is_kernel_event(event))
			perf_remove_from_owner(event);

		modified = true;

		perf_event_exit_event(event, ctx);
	}

	raw_spin_lock_irqsave(&ctx->lock, flags);
	if (modified)
		clone_ctx = unclone_ctx(ctx);
	--ctx->pin_count;
	raw_spin_unlock_irqrestore(&ctx->lock, flags);

unlock:
	mutex_unlock(&ctx->mutex);

	put_ctx(ctx);
	if (clone_ctx)
		put_ctx(clone_ctx);
}
Very lightly tested with that {1..1000} thing.
---
Subject: perf: Rework perf_event_exit_event()
From: Peter Zijlstra peterz@infradead.org
Date: Tue Mar 23 15:16:06 CET 2021
Make perf_event_exit_event() more robust, such that we can use it from other contexts. Specifically, the upcoming remove_on_exec.
For this to work we need to address a few issues. Remove_on_exec will not destroy the entire context, so we cannot rely on TASK_TOMBSTONE to disable event_function_call() and we thus have to use perf_remove_from_context().
When using perf_remove_from_context(), there are two races to consider. The first is against close(), where we can have concurrent tear-down of the event. The second is against child_list iteration, which should not find a half-baked event.
To address this, teach perf_remove_from_context() to special-case !ctx->is_active, and teach it about DETACH_CHILD.
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
---
 include/linux/perf_event.h |   1 
 kernel/events/core.c       | 144 +++++++++++++++++++++++++--------------
 2 files changed, 81 insertions(+), 64 deletions(-)
--- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -607,6 +607,7 @@ struct swevent_hlist { #define PERF_ATTACH_TASK_DATA 0x08 #define PERF_ATTACH_ITRACE 0x10 #define PERF_ATTACH_SCHED_CB 0x20 +#define PERF_ATTACH_CHILD 0x40
struct perf_cgroup; struct perf_buffer; --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -2210,6 +2210,26 @@ static void perf_group_detach(struct per perf_event__header_size(leader); }
+static void sync_child_event(struct perf_event *child_event); + +static void perf_child_detach(struct perf_event *event) +{ + struct perf_event *parent_event = event->parent; + + if (!(event->attach_state & PERF_ATTACH_CHILD)) + return; + + event->attach_state &= ~PERF_ATTACH_CHILD; + + if (WARN_ON_ONCE(!parent_event)) + return; + + lockdep_assert_held(&parent_event->child_mutex); + + sync_child_event(event); + list_del_init(&event->child_list); +} + static bool is_orphaned_event(struct perf_event *event) { return event->state == PERF_EVENT_STATE_DEAD; @@ -2317,6 +2337,7 @@ group_sched_out(struct perf_event *group }
#define DETACH_GROUP 0x01UL +#define DETACH_CHILD 0x02UL
/* * Cross CPU call to remove a performance event @@ -2340,6 +2361,8 @@ __perf_remove_from_context(struct perf_e event_sched_out(event, cpuctx, ctx); if (flags & DETACH_GROUP) perf_group_detach(event); + if (flags & DETACH_CHILD) + perf_child_detach(event); list_del_event(event, ctx);
if (!ctx->nr_events && ctx->is_active) { @@ -2368,25 +2391,21 @@ static void perf_remove_from_context(str
lockdep_assert_held(&ctx->mutex);
- event_function_call(event, __perf_remove_from_context, (void *)flags); - /* - * The above event_function_call() can NO-OP when it hits - * TASK_TOMBSTONE. In that case we must already have been detached - * from the context (by perf_event_exit_event()) but the grouping - * might still be in-tact. - */ - WARN_ON_ONCE(event->attach_state & PERF_ATTACH_CONTEXT); - if ((flags & DETACH_GROUP) && - (event->attach_state & PERF_ATTACH_GROUP)) { - /* - * Since in that case we cannot possibly be scheduled, simply - * detach now. - */ - raw_spin_lock_irq(&ctx->lock); - perf_group_detach(event); + * Because of perf_event_exit_task(), perf_remove_from_context() ought + * to work in the face of TASK_TOMBSTONE, unlike every other + * event_function_call() user. + */ + raw_spin_lock_irq(&ctx->lock); + if (!ctx->is_active) { + __perf_remove_from_context(event, __get_cpu_context(ctx), + ctx, (void *)flags); raw_spin_unlock_irq(&ctx->lock); + return; } + raw_spin_unlock_irq(&ctx->lock); + + event_function_call(event, __perf_remove_from_context, (void *)flags); }
/* @@ -12379,14 +12398,17 @@ void perf_pmu_migrate_context(struct pmu } EXPORT_SYMBOL_GPL(perf_pmu_migrate_context);
-static void sync_child_event(struct perf_event *child_event, - struct task_struct *child) +static void sync_child_event(struct perf_event *child_event) { struct perf_event *parent_event = child_event->parent; u64 child_val;
- if (child_event->attr.inherit_stat) - perf_event_read_event(child_event, child); + if (child_event->attr.inherit_stat) { + struct task_struct *task = child_event->ctx->task; + + if (task) + perf_event_read_event(child_event, task); + }
child_val = perf_event_count(child_event);
@@ -12401,60 +12423,53 @@ static void sync_child_event(struct perf }
static void -perf_event_exit_event(struct perf_event *child_event, - struct perf_event_context *child_ctx, - struct task_struct *child) +perf_event_exit_event(struct perf_event *event, struct perf_event_context *ctx) { - struct perf_event *parent_event = child_event->parent; + struct perf_event *parent_event = event->parent; + unsigned long detach_flags = 0;
- /* - * Do not destroy the 'original' grouping; because of the context - * switch optimization the original events could've ended up in a - * random child task. - * - * If we were to destroy the original group, all group related - * operations would cease to function properly after this random - * child dies. - * - * Do destroy all inherited groups, we don't care about those - * and being thorough is better. - */ - raw_spin_lock_irq(&child_ctx->lock); - WARN_ON_ONCE(child_ctx->is_active); + if (parent_event) { + /* + * Do not destroy the 'original' grouping; because of the + * context switch optimization the original events could've + * ended up in a random child task. + * + * If we were to destroy the original group, all group related + * operations would cease to function properly after this + * random child dies. + * + * Do destroy all inherited groups, we don't care about those + * and being thorough is better. + */ + detach_flags = DETACH_GROUP | DETACH_CHILD; + mutex_lock(&parent_event->child_mutex); + }
- if (parent_event) - perf_group_detach(child_event); - list_del_event(child_event, child_ctx); - perf_event_set_state(child_event, PERF_EVENT_STATE_EXIT); /* is_event_hup() */ - raw_spin_unlock_irq(&child_ctx->lock); + perf_remove_from_context(event, detach_flags); + + raw_spin_lock_irq(&ctx->lock); + if (event->state > PERF_EVENT_STATE_EXIT) + perf_event_set_state(event, PERF_EVENT_STATE_EXIT); + raw_spin_unlock_irq(&ctx->lock);
/* - * Parent events are governed by their filedesc, retain them. + * Child events can be freed. */ - if (!parent_event) { - perf_event_wakeup(child_event); + if (parent_event) { + mutex_unlock(&parent_event->child_mutex); + /* + * Kick perf_poll() for is_event_hup(); + */ + perf_event_wakeup(parent_event); + free_event(event); + put_event(parent_event); return; } - /* - * Child events can be cleaned up. - */ - - sync_child_event(child_event, child);
/* - * Remove this event from the parent's list - */ - WARN_ON_ONCE(parent_event->ctx->parent_ctx); - mutex_lock(&parent_event->child_mutex); - list_del_init(&child_event->child_list); - mutex_unlock(&parent_event->child_mutex); - - /* - * Kick perf_poll() for is_event_hup(). + * Parent events are governed by their filedesc, retain them. */ - perf_event_wakeup(parent_event); - free_event(child_event); - put_event(parent_event); + perf_event_wakeup(event); }
static void perf_event_exit_task_context(struct task_struct *child, int ctxn) @@ -12511,7 +12526,7 @@ static void perf_event_exit_task_context perf_event_task(child, child_ctx, 0);
list_for_each_entry_safe(child_event, next, &child_ctx->event_list, event_entry) - perf_event_exit_event(child_event, child_ctx, child); + perf_event_exit_event(child_event, child_ctx);
mutex_unlock(&child_ctx->mutex);
@@ -12771,6 +12786,7 @@ inherit_event(struct perf_event *parent_ */ raw_spin_lock_irqsave(&child_ctx->lock, flags); add_event_to_ctx(child_event, child_ctx); + child_event->attach_state |= PERF_ATTACH_CHILD; raw_spin_unlock_irqrestore(&child_ctx->lock, flags);
/*
On Tue, Mar 23, 2021 at 03:45PM +0100, Peter Zijlstra wrote:
On Tue, Mar 23, 2021 at 11:32:03AM +0100, Peter Zijlstra wrote:
And at that point there's very little value in still using perf_event_exit_event()... let me see if there's something to be done about that.
I ended up with something like the below, which then simplifies remove_on_exec() to:
[...]
Very lightly tested with that {1..1000} thing.
Subject: perf: Rework perf_event_exit_event()
From: Peter Zijlstra peterz@infradead.org
Date: Tue Mar 23 15:16:06 CET 2021
Make perf_event_exit_event() more robust, such that we can use it from other contexts. Specifically, the upcoming remove_on_exec.
For this to work we need to address a few issues. Remove_on_exec will not destroy the entire context, so we cannot rely on TASK_TOMBSTONE to disable event_function_call() and we thus have to use perf_remove_from_context().
When using perf_remove_from_context(), there are two races to consider. The first is against close(), where we can have concurrent tear-down of the event. The second is against child_list iteration, which should not find a half-baked event.
To address this, teach perf_remove_from_context() to special-case !ctx->is_active, and teach it about DETACH_CHILD.
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Very nice, thanks! It seems to all hold up to testing as well.
Unless you already have this on some branch somewhere, I'll prepend it to the series for now. I'll test some more and try to get v3 out tomorrow.
Thanks, -- Marco
On Tue, Mar 23, 2021 at 04:58:37PM +0100, Marco Elver wrote:
On Tue, Mar 23, 2021 at 03:45PM +0100, Peter Zijlstra wrote:
On Tue, Mar 23, 2021 at 11:32:03AM +0100, Peter Zijlstra wrote:
And at that point there's very little value in still using perf_event_exit_event()... let me see if there's something to be done about that.
I ended up with something like the below, which then simplifies remove_on_exec() to:
[...]
Very lightly tested with that {1..1000} thing.
Subject: perf: Rework perf_event_exit_event()
From: Peter Zijlstra peterz@infradead.org
Date: Tue Mar 23 15:16:06 CET 2021
Make perf_event_exit_event() more robust, such that we can use it from other contexts. Specifically, the upcoming remove_on_exec.
For this to work we need to address a few issues. Remove_on_exec will not destroy the entire context, so we cannot rely on TASK_TOMBSTONE to disable event_function_call() and we thus have to use perf_remove_from_context().
When using perf_remove_from_context(), there are two races to consider. The first is against close(), where we can have concurrent tear-down of the event. The second is against child_list iteration, which should not find a half-baked event.
To address this, teach perf_remove_from_context() to special-case !ctx->is_active, and teach it about DETACH_CHILD.
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org
Very nice, thanks! It seems to all hold up to testing as well.
Unless you already have this on some branch somewhere, I'll prepend it to the series for now. I'll test some more and try to get v3 out tomorrow.
I have not queued it, so please keep it in your series so it stays together (and tested).
Thanks!
On Mon, Mar 22, 2021 at 6:24 AM Marco Elver elver@google.com wrote:
On Wed, Mar 10, 2021 at 11:41AM +0100, Marco Elver wrote:
Add kselftest to test that remove_on_exec removes inherited events from child tasks.
Signed-off-by: Marco Elver elver@google.com
To make the tests compatible with more recent libc, we'll need to fix them up with the below.
Also, I've seen that tools/perf/tests exists, however it seems to be primarily about perf-tool related tests. Is this correct?
I'd propose to keep these purely kernel ABI related tests separate, and that way we can also make use of the kselftests framework which will also integrate into various CI systems such as kernelci.org.
Perhaps there is a way to have both? Having the perf tool spot an errant kernel feels like a feature. There are also tools/lib/perf/tests and Vince Weaver's tests [1]. It is possible to run standalone tests from within perf test by having them be executed by a shell test.
Thanks, Ian
[1] https://github.com/deater/perf_event_tests
Thanks, -- Marco
------ >8 ------
diff --git a/tools/testing/selftests/perf_events/remove_on_exec.c b/tools/testing/selftests/perf_events/remove_on_exec.c
index e176b3a74d55..f89d0cfdb81e 100644
--- a/tools/testing/selftests/perf_events/remove_on_exec.c
+++ b/tools/testing/selftests/perf_events/remove_on_exec.c
@@ -13,6 +13,11 @@
 #define __have_siginfo_t 1
 #define __have_sigval_t 1
 #define __have_sigevent_t 1
+#define __siginfo_t_defined
+#define __sigval_t_defined
+#define __sigevent_t_defined
+#define _BITS_SIGINFO_CONSTS_H 1
+#define _BITS_SIGEVENT_CONSTS_H 1
 
 #include <linux/perf_event.h>
 #include <pthread.h>
diff --git a/tools/testing/selftests/perf_events/sigtrap_threads.c b/tools/testing/selftests/perf_events/sigtrap_threads.c
index 7ebb9bb34c2e..b9a7d4b64b3c 100644
--- a/tools/testing/selftests/perf_events/sigtrap_threads.c
+++ b/tools/testing/selftests/perf_events/sigtrap_threads.c
@@ -13,6 +13,11 @@
 #define __have_siginfo_t 1
 #define __have_sigval_t 1
 #define __have_sigevent_t 1
+#define __siginfo_t_defined
+#define __sigval_t_defined
+#define __sigevent_t_defined
+#define _BITS_SIGINFO_CONSTS_H 1
+#define _BITS_SIGEVENT_CONSTS_H 1
 
 #include <linux/hw_breakpoint.h>
 #include <linux/perf_event.h>
On Tue, 23 Mar 2021 at 04:10, Ian Rogers irogers@google.com wrote:
On Mon, Mar 22, 2021 at 6:24 AM Marco Elver elver@google.com wrote:
On Wed, Mar 10, 2021 at 11:41AM +0100, Marco Elver wrote:
Add kselftest to test that remove_on_exec removes inherited events from child tasks.
Signed-off-by: Marco Elver elver@google.com
To make the tests compatible with more recent libc, we'll need to fix them up with the below.
Also, I've seen that tools/perf/tests exists, however it seems to be primarily about perf-tool related tests. Is this correct?
I'd propose to keep these purely kernel ABI related tests separate, and that way we can also make use of the kselftests framework which will also integrate into various CI systems such as kernelci.org.
Perhaps there is a way to have both? Having the perf tool spot an errant kernel feels like a feature. There are also tools/lib/perf/tests and Vince Weaver's tests [1]. It is possible to run standalone tests from within perf test by having them be executed by a shell test.
Thanks for the pointers. Sure, I'd support more additional tests.
But I had another look and it seems the tests in tools/{perf,lib/perf}/tests do focus on perf-tool or the library respectively, so adding kernel ABI tests there feels wrong. (If perf-tool somehow finds use for sigtrap, or remove_on_exec, then having a perf-tool specific test for those would make sense again.)
The tests at [1] do seem relevant, and their test strategy is more extensive, including testing of older kernels. Unfortunately the suite is out-of-tree, but that's probably because it was started before kselftest came into existence. And there are probably things in [1] that would not be appropriate in-tree.
It's all a bit confusing.
Going forward, if you insist on tests also being added to [1], we can perhaps mirror some of the kselftest tests there. There's also a logistical problem with the tests added here: they require an up-to-date siginfo_t, so they use the kernel's <asm/siginfo.h> with some trickery. Until libc's siginfo_t is updated, it probably doesn't make sense to add these tests to [1].
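(For reference, the shape of that trickery, as already used by the tests above; the exact set of guard macros varies with the libc version, per the fixup earlier in this thread, and si_perf of course requires kernel headers with this series applied:)

#define _GNU_SOURCE
#include <sys/types.h>

#include <asm/siginfo.h>	/* kernel's siginfo_t, which has si_perf */
#define __have_siginfo_t 1	/* keep libc from redefining siginfo_t */
#define __have_sigval_t 1
#define __have_sigevent_t 1

#include <signal.h>		/* libc headers now skip their conflicting types */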
The other question is, would it be possible to also copy some of the tests in [1] and convert to kselftest, so that they live in-tree and are tested regularly (CI, ...)?
Because I'd much prefer in-tree tests with little boilerplate that produce structured, parsable output; in the kernel we have the kselftest framework for tests with a user space component, and KUnit for pure in-kernel tests.
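(To illustrate the low boilerplate: a complete kselftest-harness test can be as small as this sketch.)

#include "../kselftest_harness.h"

TEST(minimal)
{
	ASSERT_EQ(1 + 1, 2);
}

TEST_HARNESS_MAIN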
Thanks, -- Marco
Thanks, Ian
[...]
On Tue, Mar 23, 2021 at 10:47AM +0100, Marco Elver wrote:
On Tue, 23 Mar 2021 at 04:10, Ian Rogers irogers@google.com wrote:
On Mon, Mar 22, 2021 at 6:24 AM Marco Elver elver@google.com wrote:
On Wed, Mar 10, 2021 at 11:41AM +0100, Marco Elver wrote:
Add kselftest to test that remove_on_exec removes inherited events from child tasks.
Signed-off-by: Marco Elver elver@google.com
To make the tests compatible with more recent libc, we'll need to fix them up with the below.
Also, I've seen that tools/perf/tests exists, however it seems to be primarily about perf-tool related tests. Is this correct?
I'd propose to keep these purely kernel ABI related tests separate, and that way we can also make use of the kselftests framework which will also integrate into various CI systems such as kernelci.org.
Perhaps there is a way to have both? Having the perf tool spot an errant kernel feels like a feature. There are also tools/lib/perf/tests and Vince Weaver's tests [1]. It is possible to run standalone tests from within perf test by having them be executed by a shell test.
> Thanks for the pointers. Sure, I'd support additional tests.
>
> But I had another look, and the tests in tools/{perf,lib/perf}/tests do focus on perf-tool and the library respectively, so adding kernel ABI tests there feels wrong. (If perf-tool somehow finds a use for sigtrap or remove_on_exec, then having a perf-tool specific test for those would make sense again.)
Ok, I checked once more, and I did find a few pure kernel ABI tests, e.g. in "wp.c".
[...]
> Because I'd much prefer in-tree tests with little boilerplate that produce structured, parsable output; in the kernel we have the kselftest framework for tests with a user space component, and KUnit for pure in-kernel tests.
So let's try to have both... but from what I could tell, the remove_on_exec test just can't be turned into a perf tool built-in test, at least not easily. In the perf tool I also can't use the new "si_perf" field yet.
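Once libc's <signal.h> gains the new field, a handler could decode it directly; a hypothetical sketch based on this series' encoding for breakpoint events (decode_bp() stands in for whatever a consumer would do with the value):

	static void handler(int sig, siginfo_t *info, void *ucontext)
	{
		/* In this series, si_errno carries the event type; for
		 * breakpoints, si_perf encodes (bp_len << 16) | bp_type. */
		if (info->si_errno == PERF_TYPE_BREAKPOINT)
			decode_bp(info->si_perf);
	}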
I'll add the patch below at the end of the series, so that we can have both. Too many tests probably don't hurt...
Thanks,
-- Marco
------ >8 ------
commit 6a98611ace59c867aa135f780b1879990180548e
Author: Marco Elver <elver@google.com>
Date:   Tue Mar 23 19:51:12 2021 +0100

    perf test: Add basic stress test for sigtrap handling

    Ports the stress test from tools/testing/selftests/perf_events/sigtrap_threads.c,
    and adds it as a perf tool built-in test. This allows checking basic sigtrap
    functionality from within the perf tool.

    Signed-off-by: Marco Elver <elver@google.com>
diff --git a/tools/perf/tests/Build b/tools/perf/tests/Build
index 650aec19d490..a429c7a02b37 100644
--- a/tools/perf/tests/Build
+++ b/tools/perf/tests/Build
@@ -64,6 +64,7 @@ perf-y += parse-metric.o
 perf-y += pe-file-parsing.o
 perf-y += expand-cgroup.o
 perf-y += perf-time-to-tsc.o
+perf-y += sigtrap.o
 $(OUTPUT)tests/llvm-src-base.c: tests/bpf-script-example.c tests/Build
 	$(call rule_mkdir)
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index c4b888f18e9c..28a1cb5eaa77 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -359,6 +359,11 @@ static struct test generic_tests[] = {
 		.func = test__perf_time_to_tsc,
 		.is_supported = test__tsc_is_supported,
 	},
+	{
+		.desc = "Sigtrap support",
+		.func = test__sigtrap,
+		.is_supported = test__wp_is_supported, /* uses wp for test */
+	},
 	{
 		.func = NULL,
 	},
diff --git a/tools/perf/tests/sigtrap.c b/tools/perf/tests/sigtrap.c
new file mode 100644
index 000000000000..0888a4e02222
--- /dev/null
+++ b/tools/perf/tests/sigtrap.c
@@ -0,0 +1,153 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Basic stress-test for sigtrap support.
+ *
+ * Copyright (C) 2021, Google LLC.
+ */
+
+#include <pthread.h>
+#include <signal.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+#include <sys/syscall.h>
+#include <unistd.h>
+#include <linux/hw_breakpoint.h>
+#include <linux/kernel.h>
+
+#include "tests.h"
+#include "debug.h"
+#include "event.h"
+#include "cloexec.h"
+#include "../perf-sys.h"
+
+#define NUM_THREADS 5
+
+/* Data shared between test body, threads, and signal handler. */
+static struct {
+	int tids_want_signal;		/* Which threads still want a signal. */
+	int signal_count;		/* Sanity check number of signals received. */
+	volatile int iterate_on;	/* Variable to set breakpoint on. */
+	siginfo_t first_siginfo;	/* First observed siginfo_t. */
+} ctx;
+
+static struct perf_event_attr make_event_attr(void)
+{
+	struct perf_event_attr attr = {
+		.type		= PERF_TYPE_BREAKPOINT,
+		.size		= sizeof(attr),
+		.sample_period	= 1,
+		.disabled	= 1,
+		.bp_addr	= (long)&ctx.iterate_on,
+		.bp_type	= HW_BREAKPOINT_RW,
+		.bp_len		= HW_BREAKPOINT_LEN_1,
+		.inherit	= 1,	/* Children inherit events ... */
+		.inherit_thread	= 1,	/* ... but only cloned with CLONE_THREAD. */
+		.remove_on_exec	= 1,	/* Required by sigtrap. */
+		.sigtrap	= 1,	/* Request synchronous SIGTRAP on event. */
+	};
+	return attr;
+}
+
+static void
+sigtrap_handler(int signum __maybe_unused, siginfo_t *info, void *ucontext __maybe_unused)
+{
+	if (!__atomic_fetch_add(&ctx.signal_count, 1, __ATOMIC_RELAXED))
+		ctx.first_siginfo = *info;
+	__atomic_fetch_sub(&ctx.tids_want_signal, syscall(SYS_gettid), __ATOMIC_RELAXED);
+}
+
+static void *test_thread(void *arg)
+{
+	pthread_barrier_t *barrier = (pthread_barrier_t *)arg;
+	pid_t tid = syscall(SYS_gettid);
+	int i;
+
+	pthread_barrier_wait(barrier);
+
+	__atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED);
+	for (i = 0; i < ctx.iterate_on - 1; i++)
+		__atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED);
+
+	return NULL;
+}
+
+static int run_test_threads(pthread_t *threads, pthread_barrier_t *barrier)
+{
+	int i;
+
+	pthread_barrier_wait(barrier);
+	for (i = 0; i < NUM_THREADS; i++)
+		TEST_ASSERT_EQUAL("pthread_join() failed", pthread_join(threads[i], NULL), 0);
+
+	return 0;
+}
+
+static int run_stress_test(int fd, pthread_t *threads, pthread_barrier_t *barrier)
+{
+	ctx.iterate_on = 3000;
+
+	TEST_ASSERT_EQUAL("misfired signal?", ctx.signal_count, 0);
+	TEST_ASSERT_EQUAL("enable failed", ioctl(fd, PERF_EVENT_IOC_ENABLE, 0), 0);
+	if (run_test_threads(threads, barrier))
+		return -1;
+	TEST_ASSERT_EQUAL("disable failed", ioctl(fd, PERF_EVENT_IOC_DISABLE, 0), 0);
+
+	TEST_ASSERT_EQUAL("unexpected sigtraps", ctx.signal_count, NUM_THREADS * ctx.iterate_on);
+	TEST_ASSERT_EQUAL("missing signals or incorrectly delivered", ctx.tids_want_signal, 0);
+	TEST_ASSERT_VAL("unexpected si_addr", ctx.first_siginfo.si_addr == &ctx.iterate_on);
+	TEST_ASSERT_EQUAL("unexpected si_errno", ctx.first_siginfo.si_errno, PERF_TYPE_BREAKPOINT);
+#if 0 /* FIXME: test build and enable when libc's signal.h has si_perf. */
+	TEST_ASSERT_VAL("unexpected si_perf", ctx.first_siginfo.si_perf ==
+		((HW_BREAKPOINT_LEN_1 << 16) | HW_BREAKPOINT_RW));
+#endif
+
+	return 0;
+}
+
+int test__sigtrap(struct test *test __maybe_unused, int subtest __maybe_unused)
+{
+	struct perf_event_attr attr = make_event_attr();
+	struct sigaction action = {};
+	struct sigaction oldact;
+	pthread_t threads[NUM_THREADS];
+	pthread_barrier_t barrier;
+	int i, fd, ret = 0;
+
+	pthread_barrier_init(&barrier, NULL, NUM_THREADS + 1);
+
+	action.sa_flags = SA_SIGINFO | SA_NODEFER;
+	action.sa_sigaction = sigtrap_handler;
+	sigemptyset(&action.sa_mask);
+	if (sigaction(SIGTRAP, &action, &oldact)) {
+		pr_debug("FAILED sigaction()\n");
+		ret = -1;
+		goto out_sigaction;
+	}
+
+	fd = sys_perf_event_open(&attr, 0, -1, -1, perf_event_open_cloexec_flag());
+	if (fd < 0) {
+		pr_debug("FAILED sys_perf_event_open()\n");
+		ret = -1;
+		goto out_sigaction;
+	}
+
+	/* Spawn threads inheriting perf event. */
+	for (i = 0; i < NUM_THREADS; i++) {
+		if (pthread_create(&threads[i], NULL, test_thread, &barrier)) {
+			pr_debug("FAILED pthread_create()\n");
+			ret = -1;
+			goto out_perf_event;
+		}
+	}
+
+	ret |= run_stress_test(fd, threads, &barrier);
+
+out_perf_event:
+	close(fd);
+out_sigaction:
+	sigaction(SIGTRAP, &oldact, NULL);
+	pthread_barrier_destroy(&barrier);
+	return ret;
+}
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index b85f005308a3..c3f2e2ecbfd6 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -127,6 +127,7 @@ int test__parse_metric(struct test *test, int subtest);
 int test__pe_file_parsing(struct test *test, int subtest);
 int test__expand_cgroup_events(struct test *test, int subtest);
 int test__perf_time_to_tsc(struct test *test, int subtest);
+int test__sigtrap(struct test *test, int subtest);
 
 bool test__bp_signal_is_supported(void);
 bool test__bp_account_is_supported(void);
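For the record, the invariants the stress test checks: each of the NUM_THREADS threads performs iterate_on accesses to the watched variable, so on success signal_count must equal NUM_THREADS * iterate_on, and the handler's per-signal subtractions must cancel every thread's additions to tids_want_signal, leaving it at 0. Once applied, the test can be run from within perf via "perf test sigtrap", since perf test selects tests by (case-insensitive) substring or by number.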