On Tue, Aug 31, 2021 at 06:46:40AM +0200, Patrick Schaaf wrote:
> Looking into this again.
> Unfortunately, I couldn't see how to bisect the issue: it appears with
> that 5.4.118 commit implicated by the call stack, and with that commit
> removed it is obviously gone. I did test that: 5.4.135 with
> b56ad4febe67b8c0647c0a3e427e935a76dedb59 reverted runs smoothly for me,
> while the original 5.4.135, with that 5.4.118 time commit included,
> throws the WARNING, now on a dozen machines.
> I was also contacted off-list by someone (Igor, on Cc) who sees the
> same on DELL servers with a newer 5.10 kernel; he runs IPVS there and
> sees actual operational impact.
> I just had a look at Linus' mainline tree and see there is this
> follow-up / further fix from Peter Zijlstra:
> https://github.com/torvalds/linux/commit/3a7956e25e1d7b3c148569e78895e1f3178...
> Now I'm much too incompetent to try and backport that myself, as it
> looks more involved, but I imagine such a backport would be needed to
> fix the WARNING (or Igor's IPVS breakage) we see.
Below is 3a7956e25e1d ("kthread: Fix PF_KTHREAD vs to_kthread() race") munged into 5.4.135.

It has never even seen a compiler; please test.
---
 kernel/kthread.c    | 43 +++++++++++++++++++++++++++++--------------
 kernel/sched/fair.c |  2 +-
 2 files changed, 30 insertions(+), 15 deletions(-)

diff --git a/kernel/kthread.c b/kernel/kthread.c
index b2bac5d929d2..22750a8af83e 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -76,6 +76,25 @@ static inline struct kthread *to_kthread(struct task_struct *k)
 	return (__force void *)k->set_child_tid;
 }
 
+/*
+ * Variant of to_kthread() that doesn't assume @p is a kthread.
+ *
+ * Per construction; when:
+ *
+ *   (p->flags & PF_KTHREAD) && p->set_child_tid
+ *
+ * the task is both a kthread and struct kthread is persistent. However
+ * PF_KTHREAD on it's own is not, kernel_thread() can exec() (See umh.c and
+ * begin_new_exec()).
+ */
+static inline struct kthread *__to_kthread(struct task_struct *p)
+{
+	void *kthread = (__force void *)p->set_child_tid;
+	if (kthread && !(p->flags & PF_KTHREAD))
+		kthread = NULL;
+	return kthread;
+}
+
 void free_kthread_struct(struct task_struct *k)
 {
 	struct kthread *kthread;
@@ -176,10 +195,11 @@ void *kthread_data(struct task_struct *task)
  */
 void *kthread_probe_data(struct task_struct *task)
 {
-	struct kthread *kthread = to_kthread(task);
+	struct kthread *kthread = __to_kthread(task);
 	void *data = NULL;
 
-	probe_kernel_read(&data, &kthread->data, sizeof(data));
+	if (kthread)
+		probe_kernel_read(&data, &kthread->data, sizeof(data));
 	return data;
 }
 
@@ -490,9 +510,9 @@ void kthread_set_per_cpu(struct task_struct *k, int cpu)
 	set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
 }
 
-bool kthread_is_per_cpu(struct task_struct *k)
+bool kthread_is_per_cpu(struct task_struct *p)
 {
-	struct kthread *kthread = to_kthread(k);
+	struct kthread *kthread = __to_kthread(p);
 
 	if (!kthread)
 		return false;
@@ -1272,11 +1292,9 @@ EXPORT_SYMBOL(kthread_destroy_worker);
  */
 void kthread_associate_blkcg(struct cgroup_subsys_state *css)
 {
-	struct kthread *kthread;
+	struct kthread *kthread = __to_kthread(current);
 
-	if (!(current->flags & PF_KTHREAD))
-		return;
-	kthread = to_kthread(current);
 	if (!kthread)
 		return;
 
@@ -1298,13 +1316,10 @@ EXPORT_SYMBOL(kthread_associate_blkcg);
  */
 struct cgroup_subsys_state *kthread_blkcg(void)
 {
-	struct kthread *kthread;
+	struct kthread *kthread = __to_kthread(current);
 
-	if (current->flags & PF_KTHREAD) {
-		kthread = to_kthread(current);
-		if (kthread)
-			return kthread->blkcg_css;
-	}
+	if (kthread)
+		return kthread->blkcg_css;
 	return NULL;
 }
 EXPORT_SYMBOL(kthread_blkcg);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 74cb20f32f72..87d9fad9d01d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7301,7 +7301,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 		return 0;
 
 	/* Disregard pcpu kthreads; they are where they need to be. */
-	if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
+	if (kthread_is_per_cpu(p))
 		return 0;
 
 	if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {