This is a rare corner case, but it does happen:
In perf_rotate_context(), when the first cpu flexible event fails to schedule, cpu_rotate is 1 while cpu_event is NULL. Since cpu_event is NULL, perf_rotate_context() does _NOT_ call cpu_ctx_sched_out(), so cpuctx->ctx.is_active keeps EVENT_FLEXIBLE set. The next perf_event_sched_in() then skips all cpu flexible events because of the stale EVENT_FLEXIBLE bit.
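The skip comes from how ctx_sched_in() computes the changed bits of ->is_active and only acts on newly-set bits. A simplified excerpt of ctx_sched_in() at this kernel version (unrelated details trimmed):

  static void
  ctx_sched_in(struct perf_event_context *ctx,
               struct perf_cpu_context *cpuctx,
               enum event_type_t event_type,
               struct task_struct *task)
  {
          int is_active = ctx->is_active;
          ...
          ctx->is_active |= (event_type | EVENT_TIME);
          ...
          is_active ^= ctx->is_active; /* changed bits */
          ...
          /* a stale EVENT_FLEXIBLE in ->is_active clears its changed
           * bit above, so the flexible groups are never walked */
          if (is_active & EVENT_FLEXIBLE)
                  ctx_flexible_sched_in(ctx, cpuctx);
  }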
In the next call of perf_rotate_context(), cpu_rotate is again 1 and cpu_event is again NULL, so the process repeats. The end result is that flexible events on this cpu are never scheduled again (until another event is added to the cpuctx).
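This is possible because cpu_rotate and cpu_event are derived from different state, as the following simplified excerpt of the pre-patch perf_rotate_context() shows:

  /* cpu_rotate is computed from the event counts ... */
  if (cpuctx->ctx.nr_events != cpuctx->ctx.nr_active)
          cpu_rotate = true;
  ...
  /* ... but cpu_event comes from the flexible_active list, which
   * stays empty while the first flexible event fails to schedule */
  if (cpu_rotate)
          cpu_event = ctx_first_active(&cpuctx->ctx);
  ...
  if (cpu_event)  /* NULL, so EVENT_FLEXIBLE is never cleared */
          cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);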
A similar issue may happen with the task_ctx, but it is usually not a problem because the task_ctx migrates across CPUs.
Fix this corner case by using cpu_rotate and task_rotate to gate the calls to (cpu_)ctx_sched_out() and rotate_ctx(). Also enable rotate_ctx() to handle the event == NULL case.
Fixes: 8d5bce0c37fa ("perf/core: Optimize perf_rotate_context() event scheduling")
Cc: stable@vger.kernel.org # v4.17+
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Song Liu <songliubraving@fb.com>
---
 kernel/events/core.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4655adbbae10..50021735f367 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3775,6 +3775,13 @@ static void rotate_ctx(struct perf_event_context *ctx, struct perf_event *event)
 	if (ctx->rotate_disable)
 		return;
 
+	/* if no event specified, try to rotate the first event */
+	if (!event)
+		event = rb_entry_safe(rb_first(&ctx->flexible_groups.tree),
+				      typeof(*event), group_node);
+	if (!event)
+		return;
+
 	perf_event_groups_delete(&ctx->flexible_groups, event);
 	perf_event_groups_insert(&ctx->flexible_groups, event);
 }
@@ -3816,14 +3823,14 @@ static bool perf_rotate_context(struct perf_cpu_context *cpuctx)
 	 * As per the order given at ctx_resched() first 'pop' task flexible
 	 * and then, if needed CPU flexible.
 	 */
-	if (task_event || (task_ctx && cpu_event))
+	if (task_rotate || (task_ctx && cpu_rotate))
 		ctx_sched_out(task_ctx, cpuctx, EVENT_FLEXIBLE);
-	if (cpu_event)
+	if (cpu_rotate)
 		cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
 
-	if (task_event)
+	if (task_rotate)
 		rotate_ctx(task_ctx, task_event);
-	if (cpu_event)
+	if (cpu_rotate)
 		rotate_ctx(&cpuctx->ctx, cpu_event);
 
 	perf_event_sched_in(cpuctx, task_ctx, current);