The patch below does not apply to the 5.4-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to stable@vger.kernel.org.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-5.4.y
git checkout FETCH_HEAD
git cherry-pick -x 2f027354122f58ee846468a6f6b48672fff92e9b
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to 'stable@vger.kernel.org' --in-reply-to '2024081221-swinging-unselect-ce80@gregkh' --subject-prefix 'PATCH 5.4.y' HEAD^..
Possible dependencies:
2f027354122f ("sched/core: Introduce sched_set_rq_on/offline() helper") cab3ecaed5cd ("sched/core: Fixed missing rq clock update before calling set_rq_offline()") 5cb9eaa3d274 ("sched: Wrap rq::lock access") 39d371b7c0c2 ("sched: Provide raw_spin_rq_*lock*() helpers") ed3cd45f8ca8 ("Merge tag 'v5.11' into sched/core, to pick up fixes & refresh the branch")
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 2f027354122f58ee846468a6f6b48672fff92e9b Mon Sep 17 00:00:00 2001
From: Yang Yingliang <yangyingliang@huawei.com>
Date: Wed, 3 Jul 2024 11:16:09 +0800
Subject: [PATCH] sched/core: Introduce sched_set_rq_on/offline() helper
Introduce the sched_set_rq_on/offline() helpers so they can be called simply from both the normal and error paths. No functional change.
Cc: stable@kernel.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240703031610.587047-4-yangyingliang@huaweicloud....
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 949473e414f9..4d119e930beb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7845,6 +7845,30 @@ void set_rq_offline(struct rq *rq)
 	}
 }
 
+static inline void sched_set_rq_online(struct rq *rq, int cpu)
+{
+	struct rq_flags rf;
+
+	rq_lock_irqsave(rq, &rf);
+	if (rq->rd) {
+		BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
+		set_rq_online(rq);
+	}
+	rq_unlock_irqrestore(rq, &rf);
+}
+
+static inline void sched_set_rq_offline(struct rq *rq, int cpu)
+{
+	struct rq_flags rf;
+
+	rq_lock_irqsave(rq, &rf);
+	if (rq->rd) {
+		BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
+		set_rq_offline(rq);
+	}
+	rq_unlock_irqrestore(rq, &rf);
+}
+
 /*
  * used to mark begin/end of suspend/resume:
  */
@@ -7914,7 +7938,6 @@ static inline void sched_smt_present_dec(int cpu)
 int sched_cpu_activate(unsigned int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
-	struct rq_flags rf;
 
 	/*
 	 * Clear the balance_push callback and prepare to schedule
@@ -7943,12 +7966,7 @@ int sched_cpu_activate(unsigned int cpu)
 	 * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
 	 *    domains.
 	 */
-	rq_lock_irqsave(rq, &rf);
-	if (rq->rd) {
-		BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
-		set_rq_online(rq);
-	}
-	rq_unlock_irqrestore(rq, &rf);
+	sched_set_rq_online(rq, cpu);
 
 	return 0;
 }
@@ -7956,7 +7974,6 @@ int sched_cpu_activate(unsigned int cpu)
 int sched_cpu_deactivate(unsigned int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
-	struct rq_flags rf;
 	int ret;
 
 	/*
@@ -7987,12 +8004,7 @@ int sched_cpu_deactivate(unsigned int cpu)
 	 */
 	synchronize_rcu();
 
-	rq_lock_irqsave(rq, &rf);
-	if (rq->rd) {
-		BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
-		set_rq_offline(rq);
-	}
-	rq_unlock_irqrestore(rq, &rf);
+	sched_set_rq_offline(rq, cpu);
 
 	/*
 	 * When going down, decrement the number of cores with SMT present.
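
For context on the change itself: both CPU hotplug paths previously open-coded the same sequence (take the rq lock, check rq->rd, run a BUG_ON() sanity check, flip the online state), and the patch moves that sequence into one helper per direction. The sketch below shows the same de-duplication pattern as a self-contained userspace program; all names are hypothetical, with a pthread mutex standing in for the rq lock and assert() standing in for the BUG_ON():

#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for struct rq: a lock plus optional root-domain state. */
struct fake_rq {
	pthread_mutex_t lock;
	bool has_domain;	/* models rq->rd != NULL */
	bool online;
};

/* The factored-out helper: lock, validate, flip state, unlock. */
static void fake_set_rq_state(struct fake_rq *rq, bool online)
{
	pthread_mutex_lock(&rq->lock);
	if (rq->has_domain) {
		/* crude stand-in for the BUG_ON(!cpumask_test_cpu(...)) check */
		assert(rq->online != online);
		rq->online = online;
	}
	pthread_mutex_unlock(&rq->lock);
}

int main(void)
{
	struct fake_rq rq = { PTHREAD_MUTEX_INITIALIZER, true, false };

	fake_set_rq_state(&rq, true);	/* was open-coded in sched_cpu_activate() */
	fake_set_rq_state(&rq, false);	/* was open-coded in sched_cpu_deactivate() */
	printf("online=%d\n", (int)rq.online);
	return 0;
}

(Build with cc -pthread.) The payoff the changelog points at is that any error path needing to re-online/offline the rq can now call a single helper instead of repeating the lock/check sequence, which also keeps future fixes to that sequence in one place.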