From: Xie XiuQi <xiexiuqi@huawei.com>
commit a860fa7b96e1a1c974556327aa1aee852d434c21 upstream.
sched_clock_cpu() may not be consistent between CPUs. If a task migrates to another CPU, then se.exec_start is set to that CPU's rq_clock_task() by update_stats_curr_start(). Specifically, the new value might be before the old value due to clock skew.
So then if in numa_get_avg_runtime() the expression:
'now - p->last_task_numa_placement'
ends up as -1, then the divider '*period + 1' in task_numa_placement() is 0 and things go bang. Similarly to update_curr(), check whether time goes backwards to avoid this.
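To illustrate the failure mode, here is a minimal userspace sketch (made-up values and plain stdint types rather than the kernel's u64; not kernel code) of how the unsigned wraparound turns the '*period + 1' divider into 0:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Assumed example values: the new CPU's clock is 1 ns behind. */
	uint64_t last_task_numa_placement = 1000;
	uint64_t now = 999;

	uint64_t period = now - last_task_numa_placement; /* wraps to 0xffffffffffffffff */
	uint64_t divider = period + 1;                    /* wraps back around to 0 */

	printf("period  = %llu\n", (unsigned long long)period);
	printf("divider = %llu\n", (unsigned long long)divider);

	/* Any subsequent division by 'divider' in task_numa_placement() would now fault. */
	return 0;
}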
[ peterz: Wrote new changelog. ]
[ mingo: Tweaked the code comment. ]
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: cj.chengjian@huawei.com
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20190425080016.GX11158@hirez.programming.kicks-ass....
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/sched/fair.c | 4 ++++
 1 file changed, 4 insertions(+)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2016,6 +2016,10 @@ static u64 numa_get_avg_runtime(struct t
 	if (p->last_task_numa_placement) {
 		delta = runtime - p->last_sum_exec_runtime;
 		*period = now - p->last_task_numa_placement;
+
+		/* Avoid time going backwards, prevent potential divide error: */
+		if (unlikely((s64)*period < 0))
+			*period = 0;
 	} else {
 		delta = p->se.avg.load_sum;
 		*period = LOAD_AVG_MAX;
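For reference, a standalone userspace sketch of the check added above (made-up values, stdint types standing in for the kernel's u64/s64) showing how the signed cast catches the wrapped period before it can reach a division:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Same skewed-clock example as before: 'now' is 1 ns behind 'last'. */
	uint64_t period = (uint64_t)999 - (uint64_t)1000; /* wrapped, huge value */

	/* Mirrors: if (unlikely((s64)*period < 0)) *period = 0; */
	if ((int64_t)period < 0)
		period = 0;

	printf("period = %llu\n", (unsigned long long)period); /* prints 0 */
	return 0;
}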