This is a note to let you know that I've just added the patch titled
sched/fair: Make select_idle_cpu() more aggressive
to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=su...
The filename of the patch is:
     sched-fair-make-select_idle_cpu-more-aggressive.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree, please let stable@vger.kernel.org know about it.
From foo@baz Tue Dec 12 13:26:17 CET 2017
From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 1 Mar 2017 11:24:35 +0100
Subject: sched/fair: Make select_idle_cpu() more aggressive

From: Peter Zijlstra <peterz@infradead.org>
[ Upstream commit 4c77b18cf8b7ab37c7d5737b4609010d2ceec5f0 ]
Kitsunyan reported desktop latency issues on his Celeron 887 because of commit:
1b568f0aabf2 ("sched/core: Optimize SCHED_SMT")
... even though his CPU doesn't do SMT.
The effect of running the SMT code on a !SMT part is basically a more aggressive select_idle_cpu(). Removing the avg condition fixed things for him.
I also know FB likes this test gone, even though other workloads like having it.
For now, take it out by default, until we get a better idea.
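To make the behaviour change concrete, here is a minimal userspace sketch of the gating condition; this is not kernel code, worth_scanning() and the figures are invented for illustration, and the 512 divisor is the "large fuzz factor" from the comment in fair.c:

#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch of the check this patch puts behind the SIS_AVG_CPU feature
 * flag: skip the idle-CPU scan when the runqueue's average idle time
 * looks small relative to the average cost of a previous scan.
 */
static bool worth_scanning(unsigned long long avg_idle,
			   unsigned long long avg_cost,
			   bool sis_avg_cpu)
{
	/* Before the patch this bail-out was unconditional. */
	if (sis_avg_cpu && (avg_idle / 512) < avg_cost)
		return false;	/* scanning looks too expensive, give up */
	return true;		/* go scan the LLC domain for an idle CPU */
}

int main(void)
{
	/* Hypothetical figures in ns: 100us avg idle, 500ns avg scan cost. */
	printf("SIS_AVG_CPU on:  scan=%d\n", worth_scanning(100000, 500, true));
	printf("SIS_AVG_CPU off: scan=%d\n", worth_scanning(100000, 500, false));
	return 0;
}

With the feature off (the new default) the early bail-out never fires, so select_idle_cpu() always goes on to scan the LLC domain; that is the "more aggressive" behaviour in the subject line.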
Reported-by: kitsunyan <kitsunyan@inbox.ru>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Chris Mason <clm@fb.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/sched/fair.c     |    2 +-
 kernel/sched/features.h |    5 +++++
 2 files changed, 6 insertions(+), 1 deletion(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5451,7 +5451,7 @@ static int select_idle_cpu(struct task_s
 	 * Due to large variance we need a large fuzz factor; hackbench in
 	 * particularly is sensitive here.
 	 */
-	if ((avg_idle / 512) < avg_cost)
+	if (sched_feat(SIS_AVG_CPU) && (avg_idle / 512) < avg_cost)
 		return -1;
 
 	time = local_clock();
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -51,6 +51,11 @@ SCHED_FEAT(NONTASK_CAPACITY, true)
  */
 SCHED_FEAT(TTWU_QUEUE, true)
 
+/*
+ * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
+ */
+SCHED_FEAT(SIS_AVG_CPU, false)
+
 #ifdef HAVE_RT_PUSH_IPI
 /*
  * In order to avoid a thundering herd attack of CPUs that are
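For anyone who wants to compare both behaviours at runtime: on a kernel built with CONFIG_SCHED_DEBUG, scheduler feature flags can be flipped through the sched_features debugfs file. A minimal sketch, assuming debugfs is mounted at /sys/kernel/debug and the program is run as root:

#include <stdio.h>

int main(void)
{
	const char *path = "/sys/kernel/debug/sched_features";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);	/* needs root and a mounted debugfs */
		return 1;
	}
	/* Writing the feature name re-enables the avg_idle/avg_cost cutoff. */
	fputs("SIS_AVG_CPU", f);
	fclose(f);
	return 0;
}

Writing "NO_SIS_AVG_CPU" instead restores the new default.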
Patches currently in stable-queue which might be from peterz@infradead.org are
queue-4.9/smp-hotplug-move-step-cpuhp_ap_smpcfd_dying-to-the-correct-place.patch
queue-4.9/efi-esrt-use-memunmap-instead-of-kfree-to-free-the-remapping.patch
queue-4.9/x86-hpet-prevent-might-sleep-splat-on-resume.patch
queue-4.9/efi-move-some-sysfs-files-to-be-read-only-by-root.patch
queue-4.9/x86-platform-uv-bau-fix-hub-errors-by-remove-initial-write-to-sw-ack-register.patch
queue-4.9/x86-mpx-selftests-fix-up-weird-arrays.patch
queue-4.9/blk-mq-initialize-mq-kobjects-in-blk_mq_init_allocated_queue.patch
queue-4.9/x86-selftests-add-clobbers-for-int80-on-x86_64.patch
queue-4.9/jump_label-invoke-jump_label_test-via-early_initcall.patch
queue-4.9/sched-fair-make-select_idle_cpu-more-aggressive.patch