Hi Greg,
On Mon, Sep 29, 2025 at 01:40:52PM +0200, gregkh@linuxfoundation.org wrote:
> The patch below does not apply to the 6.16-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <stable@vger.kernel.org>.
> 
> To reproduce the conflict and resubmit, you may use the following commands:
> 
> git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.16.y
> git checkout FETCH_HEAD
> git cherry-pick -x 55ed11b181c43d81ce03b50209e4e7c4a14ba099
> # <resolve conflicts, build, test, etc.>
> git commit -s
> git send-email --to '<stable@vger.kernel.org>' --in-reply-to '<2025092952-wooing-result-72e9@gregkh>' --subject-prefix 'PATCH 6.16.y' HEAD^..
> 
> Possible dependencies:
This patch depends on upstream commit 353656eb84fe ("sched_ext: Make scx_idle_cpu() and related helpers static").
To resolve the conflict, I think the best approach would be to also apply commit 353656eb84fef ("sched_ext: idle: Make local functions static in ext_idle.c") to 6.16-stable.
This commit only makes some functions static (no functional changes), so it should be safe for stable and it'd keep the code more aligned with upstream.
Thanks,
-Andrea
> thanks,
> 
> greg k-h
> ------------------ original commit in Linus's tree ------------------
> 
> From 55ed11b181c43d81ce03b50209e4e7c4a14ba099 Mon Sep 17 00:00:00 2001
> From: Andrea Righi <arighi@nvidia.com>
> Date: Sat, 20 Sep 2025 15:26:21 +0200
> Subject: [PATCH] sched_ext: idle: Handle migration-disabled tasks in BPF code
> 
> When scx_bpf_select_cpu_dfl()/and() kfuncs are invoked outside of
> ops.select_cpu() we can't rely on @p->migration_disabled to determine if
> migration is disabled for the task @p.
> 
> In fact, migration is always disabled for the current task while running
> BPF code: __bpf_prog_enter() disables migration and __bpf_prog_exit()
> re-enables it.
> 
> To handle this, when @p->migration_disabled == 1, check whether @p is the
> current task. If so, migration was not disabled before entering the
> callback, otherwise migration was disabled.
> 
> This ensures correct idle CPU selection in all cases. The behavior of
> ops.select_cpu() remains unchanged, because this callback is never
> invoked for the current task and migration-disabled tasks are always
> excluded.
> 
> Example: without this change scx_bpf_select_cpu_and() called from
> ops.enqueue() always returns -EBUSY; with this change applied, it
> correctly returns idle CPUs.
> 
> Fixes: 06efc9fe0b8de ("sched_ext: idle: Handle migration-disabled tasks in idle selection")
> Cc: stable@vger.kernel.org # v6.16+
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
> Acked-by: Changwoo Min <changwoo@igalia.com>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
> index 7174e1c1a392..537c6992bb63 100644
> --- a/kernel/sched/ext_idle.c
> +++ b/kernel/sched/ext_idle.c
> @@ -856,6 +856,32 @@ static bool check_builtin_idle_enabled(void)
>  	return false;
>  }
>  
> +/*
> + * Determine whether @p is a migration-disabled task in the context of BPF
> + * code.
> + *
> + * We can't simply check whether @p->migration_disabled is set in a
> + * sched_ext callback, because migration is always disabled for the current
> + * task while running BPF code.
> + *
> + * The prolog (__bpf_prog_enter) and epilog (__bpf_prog_exit) respectively
> + * disable and re-enable migration. For this reason, the current task
> + * inside a sched_ext callback is always a migration-disabled task.
> + *
> + * Therefore, when @p->migration_disabled == 1, check whether @p is the
> + * current task or not: if it is, then migration was not disabled before
> + * entering the callback, otherwise migration was disabled.
> + *
> + * Returns true if @p is migration-disabled, false otherwise.
> + */
> +static bool is_bpf_migration_disabled(const struct task_struct *p)
> +{
> +	if (p->migration_disabled == 1)
> +		return p != current;
> +	else
> +		return p->migration_disabled;
> +}
> +
>  static s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
>  				 const struct cpumask *allowed, u64 flags)
>  {
> @@ -898,7 +924,7 @@ static s32 select_cpu_from_kfunc(struct task_struct *p, s32 prev_cpu, u64 wake_f
>  	 * selection optimizations and simply check whether the previously
>  	 * used CPU is idle and within the allowed cpumask.
>  	 */
> -	if (p->nr_cpus_allowed == 1 || is_migration_disabled(p)) {
> +	if (p->nr_cpus_allowed == 1 || is_bpf_migration_disabled(p)) {
>  		if (cpumask_test_cpu(prev_cpu, allowed ?: p->cpus_ptr) &&
>  		    scx_idle_test_and_clear_cpu(prev_cpu))
>  			cpu = prev_cpu;