From: Tao Zhou <ouwen210@hotmail.com>
[ Upstream commit 6c8116c914b65be5e4d6f66d69c8142eb0648c22 ]
In update_sg_wakeup_stats(), the comment says:

    Computing avg_load makes sense only when group is fully busy or
    overloaded.

But the check below the comment does not match it: it computes
avg_load only while the group still has spare capacity
(group_type < group_fully_busy), the opposite of what is intended.

From reading how avg_load is used in other functions, I confirm that
avg_load should be computed only in the fully busy or overloaded case.
The comment is correct and the condition is wrong, so fix the
condition.
Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
Signed-off-by: Tao Zhou <ouwen210@hotmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Link: https://lkml.kernel.org/r/Message-ID:
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0ff2f43ac9cd7..1f5ea23c752be 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8323,7 +8323,8 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 	 * Computing avg_load makes sense only when group is fully busy or
 	 * overloaded
 	 */
-	if (sgs->group_type < group_fully_busy)
+	if (sgs->group_type == group_fully_busy ||
+	    sgs->group_type == group_overloaded)
 		sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
 				sgs->group_capacity;
 }