Syzbot found a GPF in reweight_entity(). This has been bisected to commit 4ef0c5c6b5ba ("kernel/sched: Fix sched_fork() access an invalid sched_task_group")
There is a race between sched_post_fork() and setpriority(PRIO_PGRP) within a thread group that causes a null-ptr-deref in reweight_entity() in CFS. The scenario is that the main process spawns a number of new threads, which then call setpriority(PRIO_PGRP, 0, prio), wait, and exit. For each new thread copy_process() is invoked, which adds the new task_struct to the thread group and eventually calls sched_post_fork() for it.
In the above scenario it is possible that setpriority(PRIO_PGRP) and set_one_prio() are called for a thread in the group that is still being created by copy_process() and for which sched_post_fork() has not yet been executed. This triggers a null pointer dereference in reweight_entity(), which tries to access the task's run queue pointer before it has been set. The resulting crash is shown below:
KASAN: null-ptr-deref in range [0x00000000000000a0-0x00000000000000a7]
CPU: 0 PID: 2392 Comm: reduced_repro Not tainted 5.16.0-11201-gb42c5a161ea3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1.fc35 04/01/2014
RIP: 0010:reweight_entity+0x15d/0x440
RSP: 0018:ffffc900035dfcf8 EFLAGS: 00010006
Call Trace:
 <TASK>
 reweight_task+0xde/0x1c0
 set_load_weight+0x21c/0x2b0
 set_user_nice.part.0+0x2d1/0x519
 set_user_nice.cold+0x8/0xd
 set_one_prio+0x24f/0x263
 __do_sys_setpriority+0x2d3/0x640
 __x64_sys_setpriority+0x84/0x8b
 do_syscall_64+0x35/0xb0
 entry_SYSCALL_64_after_hwframe+0x44/0xae
 </TASK>
---[ end trace 9dc80a9d378ed00a ]---
Before the mentioned change, the cfs_rq pointer for the task was set in sched_fork(), which is called much earlier in copy_process(), before the new task is added to the thread group. Now it is set in sched_post_fork(), which is called after that.
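For illustration, a minimal userspace reproducer shaped like the scenario above might look as follows. This is only a sketch, not the actual syzkaller reproducer (see the Link tag below); NTHREADS, worker(), and the nice value are made up here:

#include <pthread.h>
#include <stdlib.h>
#include <sys/resource.h>

#define NTHREADS 16

static void *worker(void *arg)
{
	/*
	 * PRIO_PGRP walks every thread in the process group, including
	 * siblings that copy_process() has already made visible but for
	 * which sched_post_fork() has not run yet.
	 */
	setpriority(PRIO_PGRP, 0, 10);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	int i;

	for (;;) {	/* keep racing until the window is hit */
		for (i = 0; i < NTHREADS; i++)
			if (pthread_create(&tid[i], NULL, worker, NULL))
				exit(1);
		for (i = 0; i < NTHREADS; i++)
			pthread_join(tid[i], NULL);
	}
	return 0;
}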
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: Zhang Qiao <zhangqiao22@huawei.com>
Cc: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Link: https://syzkaller.appspot.com/bug?id=9d9c27adc674e3a7932b22b61c79a02da82cbdc...
Fixes: 4ef0c5c6b5ba ("kernel/sched: Fix sched_fork() access an invalid sched_task_group")
Reported-by: syzbot+af7a719bc92395ee41b3@syzkaller.appspotmail.com
Signed-off-by: Tadeusz Struk <tadeusz.struk@linaro.org>
---
Changes in v2:
- Added a check in set_user_nice(), and return from there if the task
  is not fully set up, instead of returning from reweight_entity()
---
 kernel/sched/core.c  |  4 ++++
 kernel/sched/sched.h | 11 +++++++++++
 2 files changed, 15 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2e4ae00e52d1..c3e74b6d595b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6897,6 +6897,10 @@ void set_user_nice(struct task_struct *p, long nice)

 	if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
 		return;
+
+	/* Check if the task's schedule run queue is setup correctly */
+	if (!task_rq_ready(p))
+		return;
 	/*
 	 * We have to be careful, if called from sys_setpriority(),
 	 * the task might be in the middle of scheduling on another CPU.
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index de53be905739..464f629bff5a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1394,6 +1394,12 @@ static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
 	return grp->my_q;
 }

+/* returns true if cfs run queue is set for the task */
+static inline bool task_rq_ready(struct task_struct *p)
+{
+	return !!task_cfs_rq(p);
+}
+
 #else

 static inline struct task_struct *task_of(struct sched_entity *se)
@@ -1419,6 +1425,11 @@ static inline struct cfs_rq *group_cfs_rq(struct sched_entity *grp)
 {
 	return NULL;
 }
+
+static inline bool task_rq_ready(struct task_struct *p)
+{
+	return true;
+}
 #endif

 extern void update_rq_clock(struct rq *rq);
Hi,
On Thu, Jan 20, 2022 at 12:01:39PM -0800, Tadeusz Struk wrote:
> Syzbot found a GPF in reweight_entity(). This has been bisected to commit
> 4ef0c5c6b5ba ("kernel/sched: Fix sched_fork() access an invalid
> sched_task_group")
>
> There is a race between sched_post_fork() and setpriority(PRIO_PGRP)
> within a thread group that causes a null-ptr-deref in reweight_entity()
> in CFS. The scenario is that the main process spawns a number of new
> threads, which then call setpriority(PRIO_PGRP, 0, prio), wait, and exit.
> For each new thread copy_process() is invoked, which adds the new
> task_struct to the thread group and eventually calls sched_post_fork()
> for it.
>
> In the above scenario it is possible that setpriority(PRIO_PGRP) and
> set_one_prio() are called for a thread in the group that is still being
> created by copy_process() and for which sched_post_fork() has not yet
> been executed. This triggers a null pointer dereference in
> reweight_entity(), which tries to access the task's run queue pointer
> before it has been set.
It's kinda strange that p->se.cfs_rq is NULLed in __sched_fork(). AFAICT, that lets set_task_rq_fair() distinguish between fork and other paths per ad936d8658fd, but it's causing this problem now and it's not the only way that set_task_rq_fair() could tell the difference.
We might be able to get rid of the NULL assignment instead of adding code to detect it. Maybe something like this, against today's mainline? set_task_rq_fair() would rely on TASK_NEW instead of NULL.
Haven't thought it all the way through, so could be missing something. Will think more
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 848eaa0efe0ea..9a5b264c5dc10 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4241,10 +4241,6 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	p->se.vruntime			= 0;
 	INIT_LIST_HEAD(&p->se.group_node);

-#ifdef CONFIG_FAIR_GROUP_SCHED
-	p->se.cfs_rq			= NULL;
-#endif
-
 #ifdef CONFIG_SCHEDSTATS
 	/* Even if schedstat is disabled, there should not be garbage */
 	memset(&p->stats, 0, sizeof(p->stats));
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5146163bfabb9..7aff3b603220d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3339,15 +3339,19 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
  * caller only guarantees p->pi_lock is held; no other assumptions,
  * including the state of rq->lock, should be made.
  */
-void set_task_rq_fair(struct sched_entity *se,
-		      struct cfs_rq *prev, struct cfs_rq *next)
+void set_task_rq_fair(struct task_struct *p, struct cfs_rq *next)
 {
+	struct sched_entity *se = &p->se;
+	struct cfs_rq *prev = se->cfs_rq;
 	u64 p_last_update_time;
 	u64 n_last_update_time;

 	if (!sched_feat(ATTACH_AGE_LOAD))
 		return;

+	if (p->__state == TASK_NEW)
+		return;
+
 	/*
 	 * We are supposed to update the task to "current" time, then its up to
 	 * date and ready to go to new CPU/cfs_rq. But we have difficulty in
@@ -3355,7 +3359,7 @@ void set_task_rq_fair(struct sched_entity *se,
 	 * time. This will result in the wakee task is less decayed, but giving
 	 * the wakee more load sounds not bad.
 	 */
-	if (!(se->avg.last_update_time && prev))
+	if (!se->avg.last_update_time)
 		return;

 #ifndef CONFIG_64BIT
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index de53be9057390..a6f749f136ee1 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -514,11 +514,10 @@ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
 extern int sched_group_set_idle(struct task_group *tg, long idle);

 #ifdef CONFIG_SMP
-extern void set_task_rq_fair(struct sched_entity *se,
-			     struct cfs_rq *prev, struct cfs_rq *next);
+extern void set_task_rq_fair(struct task_struct *p, struct cfs_rq *next);
 #else /* !CONFIG_SMP */
-static inline void set_task_rq_fair(struct sched_entity *se,
-			     struct cfs_rq *prev, struct cfs_rq *next) { }
+static inline void set_task_rq_fair(struct task_struct *p,
+				    struct cfs_rq *next) {}
 #endif /* CONFIG_SMP */
 #endif /* CONFIG_FAIR_GROUP_SCHED */

@@ -1910,7 +1909,7 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #endif

 #ifdef CONFIG_FAIR_GROUP_SCHED
-	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
+	set_task_rq_fair(p, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
 	p->se.parent = tg->se[cpu];
 #endif
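For reference, this is the TASK_NEW lifecycle the proposed check relies on, as I read the v5.16 sources (an abridged sketch, not part of the diff):

	/* sched_fork(), called early in copy_process(): the child is
	 * marked TASK_NEW before it becomes visible to the rest of the
	 * system. */
	p->__state = TASK_NEW;

	/* wake_up_new_task(), called once the child is fully set up: */
	WRITE_ONCE(p->__state, TASK_RUNNING);

So any task still observable in TASK_NEW has not finished forking, which is exactly the fork-path case set_task_rq_fair() needs to detect.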
On Tue, 25 Jan 2022 at 02:18, Daniel Jordan <daniel.m.jordan@oracle.com> wrote:
> Hi,
>
> On Thu, Jan 20, 2022 at 12:01:39PM -0800, Tadeusz Struk wrote:
> > Syzbot found a GPF in reweight_entity(). This has been bisected to
> > commit 4ef0c5c6b5ba ("kernel/sched: Fix sched_fork() access an invalid
> > sched_task_group")
> > [...]
> It's kinda strange that p->se.cfs_rq is NULLed in __sched_fork(). AFAICT,
> that lets set_task_rq_fair() distinguish between fork and other paths per
> ad936d8658fd, but it's causing this problem now and it's not the only way
> that set_task_rq_fair() could tell the difference.
>
> We might be able to get rid of the NULL assignment instead of adding code
> to detect it. Maybe something like this, against today's mainline?
> set_task_rq_fair() would rely on TASK_NEW instead of NULL.
>
> Haven't thought it all the way through, so could be missing something.
> Will think more
Could we use set_load_weight(p, !(p->__state & TASK_NEW)) instead of set_load_weight(p, true) in set_user_nice() and __setscheduler_params()?
The current, always-true value forces an update of the weight of the task's cfs_rq, which is not yet set in this case.
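For illustration only, the suggestion would read roughly like this at the two call sites (a sketch against the v5.16 sources, not a tested patch; TASK_NEW is set in sched_fork() and cleared by wake_up_new_task(), so a task that has not finished forking skips the cfs_rq reweight):

	/* In set_user_nice(), kernel/sched/core.c (sketch): */
	p->static_prio = NICE_TO_PRIO(nice);
	set_load_weight(p, !(p->__state & TASK_NEW));

	/* In __setscheduler_params(), kernel/sched/core.c (sketch): */
	p->rt_priority = attr->sched_priority;
	p->normal_prio = normal_prio(p);
	set_load_weight(p, !(p->__state & TASK_NEW));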
> [snip: Daniel's diff, quoted in full above]
On 1/25/22 10:30, Tadeusz Struk wrote:
> On 1/25/22 01:14, Vincent Guittot wrote:
> > Could we use set_load_weight(p, !(p->__state & TASK_NEW)) instead of
> > set_load_weight(p, true) in set_user_nice() and __setscheduler_params()?
> Wouldn't that require READ_ONCE() and rmb() after the read?
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 848eaa0efe0e..3d7ede06b971 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6921,7 +6921,7 @@ void set_user_nice(struct task_struct *p, long nice)
>  		put_prev_task(rq, p);
>
>  	p->static_prio = NICE_TO_PRIO(nice);
> -	set_load_weight(p, true);
> +	set_load_weight(p, !(READ_ONCE(p->__state) & TASK_NEW));
>  	old_prio = p->prio;
>  	p->prio = effective_prio(p);
That works for me. I will send a new version soon.
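As an aside on the READ_ONCE()/rmb() question above, and assuming I am reading the v5.16 code right (a hedged sketch, not authoritative): both sides of the transition are serialized by p->pi_lock, so no extra barrier should be needed, and READ_ONCE() mainly documents the access and prevents load tearing:

	/* Writer side: wake_up_new_task(), kernel/sched/core.c. The
	 * state leaves TASK_NEW under p->pi_lock. */
	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
	WRITE_ONCE(p->__state, TASK_RUNNING);

	/* Reader side: set_user_nice() runs under task_rq_lock(), which
	 * also takes p->pi_lock before p->__state is read, so it cannot
	 * race with the transition itself. */
	rq = task_rq_lock(p, &rf);
	...
	set_load_weight(p, !(READ_ONCE(p->__state) & TASK_NEW));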