From: Li RongQing <lirongqing@baidu.com>
Currently, when 'hung_task_panic' is enabled, the kernel panics immediately upon detecting the first hung task. However, some hung tasks are transient and the system can recover, while others are persistent and may accumulate progressively.
This patch extends the 'hung_task_panic' sysctl to allow specifying the number of hung tasks that must be detected before triggering a kernel panic. This provides finer control for environments where transient hangs may occur but persistent hangs should still be fatal.
The sysctl can be set to:
- 0: disabled (never panic)
- 1: original behavior (panic on first hung task)
- N: panic when N hung tasks are detected
This maintains backward compatibility while providing more flexibility for handling different hang scenarios.
Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
Diff with v2: not add new sysctl, extend hung_task_panic
 Documentation/admin-guide/kernel-parameters.txt      | 20 +++++++++++++-------
 Documentation/admin-guide/sysctl/kernel.rst          |  3 ++-
 arch/arm/configs/aspeed_g5_defconfig                 |  2 +-
 kernel/configs/debug.config                          |  2 +-
 kernel/hung_task.c                                   | 16 +++++++++++-----
 lib/Kconfig.debug                                    | 10 ++++++----
 tools/testing/selftests/wireguard/qemu/kernel.config |  2 +-
 7 files changed, 35 insertions(+), 20 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a51ab46..7d9a8ee 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1992,14 +1992,20 @@ the added memory block itself do not be affected.
 	hung_task_panic=
-			[KNL] Should the hung task detector generate panics.
-			Format: 0 | 1
+			[KNL] Number of hung tasks to trigger kernel panic.
+			Format: <int>
+
+			Set this to the number of hung tasks that must be
+			detected before triggering a kernel panic.
+
+			0: don't panic
+			1: panic immediately on first hung task
+			N: panic after N hung tasks are detect

-			A value of 1 instructs the kernel to panic when a
-			hung task is detected. The default value is controlled
-			by the CONFIG_BOOTPARAM_HUNG_TASK_PANIC build-time
-			option. The value selected by this boot parameter can
-			be changed later by the kernel.hung_task_panic sysctl.
+			The default value is controlled by the
+			CONFIG_BOOTPARAM_HUNG_TASK_PANIC build-time option. The value
+			selected by this boot parameter can be changed later by the
+			kernel.hung_task_panic sysctl.

 	hvc_iucv=	[S390] Number of z/VM IUCV hypervisor console (HVC)
 			terminal devices. Valid values: 0..8

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index f3ee807..0a8dfab 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -397,7 +397,8 @@ a hung task is detected.

 hung_task_panic
 ===============

-Controls the kernel's behavior when a hung task is detected.
+When set to a non-zero value, a kernel panic will be triggered if the
+number of detected hung tasks reaches this value

 This file shows up if ``CONFIG_DETECT_HUNG_TASK`` is enabled.

 =  =================================================
diff --git a/arch/arm/configs/aspeed_g5_defconfig b/arch/arm/configs/aspeed_g5_defconfig
index 61cee1e..c3b0d5f 100644
--- a/arch/arm/configs/aspeed_g5_defconfig
+++ b/arch/arm/configs/aspeed_g5_defconfig
@@ -308,7 +308,7 @@ CONFIG_PANIC_ON_OOPS=y
 CONFIG_PANIC_TIMEOUT=-1
 CONFIG_SOFTLOCKUP_DETECTOR=y
 CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
-CONFIG_BOOTPARAM_HUNG_TASK_PANIC=y
+CONFIG_BOOTPARAM_HUNG_TASK_PANIC=1
 CONFIG_WQ_WATCHDOG=y
 # CONFIG_SCHED_DEBUG is not set
 CONFIG_FUNCTION_TRACER=y
diff --git a/kernel/configs/debug.config b/kernel/configs/debug.config
index e81327d..9f6ab7d 100644
--- a/kernel/configs/debug.config
+++ b/kernel/configs/debug.config
@@ -83,7 +83,7 @@ CONFIG_SLUB_DEBUG_ON=y
 #
 # Debug Oops, Lockups and Hangs
 #
-# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
+CONFIG_BOOTPARAM_HUNG_TASK_PANIC=0
 # CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
 CONFIG_DEBUG_ATOMIC_SLEEP=y
 CONFIG_DETECT_HUNG_TASK=y
diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index b2c1f14..3929ed9 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -81,7 +81,7 @@ static unsigned int __read_mostly sysctl_hung_task_all_cpu_backtrace;
  * hung task is detected:
  */
 static unsigned int __read_mostly sysctl_hung_task_panic =
-	IS_ENABLED(CONFIG_BOOTPARAM_HUNG_TASK_PANIC);
+	CONFIG_BOOTPARAM_HUNG_TASK_PANIC;

 static int
 hung_task_panic(struct notifier_block *this, unsigned long event, void *ptr)
@@ -218,8 +218,11 @@ static inline void debug_show_blocker(struct task_struct *task, unsigned long ti
 }
 #endif

-static void check_hung_task(struct task_struct *t, unsigned long timeout)
+static void check_hung_task(struct task_struct *t, unsigned long timeout,
+			    unsigned long prev_detect_count)
 {
+	unsigned long total_hung_task;
+
 	if (!task_is_hung(t, timeout))
 		return;
@@ -229,9 +232,11 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
 	 */
 	sysctl_hung_task_detect_count++;

+	total_hung_task = sysctl_hung_task_detect_count - prev_detect_count;
 	trace_sched_process_hang(t);

-	if (sysctl_hung_task_panic) {
+	if (sysctl_hung_task_panic &&
+	    (total_hung_task >= sysctl_hung_task_panic)) {
 		console_verbose();
 		hung_task_show_lock = true;
 		hung_task_call_panic = true;
@@ -300,6 +305,7 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 	int max_count = sysctl_hung_task_check_count;
 	unsigned long last_break = jiffies;
 	struct task_struct *g, *t;
+	unsigned long prev_detect_count = sysctl_hung_task_detect_count;

 	/*
 	 * If the system crashed already then all bets are off,
@@ -320,7 +326,7 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
 			last_break = jiffies;
 		}

-		check_hung_task(t, timeout);
+		check_hung_task(t, timeout, prev_detect_count);
 	}
 unlock:
 	rcu_read_unlock();
@@ -389,7 +395,7 @@ static const struct ctl_table hung_task_sysctls[] = {
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
 		.extra1 = SYSCTL_ZERO,
-		.extra2 = SYSCTL_ONE,
+		.extra2 = SYSCTL_INT_MAX,
 	},
 	{
 		.procname = "hung_task_check_count",
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 3034e294..077b9e4 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1258,12 +1258,14 @@ config DEFAULT_HUNG_TASK_TIMEOUT
 	  Keeping the default should be fine in most cases.

 config BOOTPARAM_HUNG_TASK_PANIC
-	bool "Panic (Reboot) On Hung Tasks"
+	int "Number of hung tasks to trigger kernel panic"
 	depends on DETECT_HUNG_TASK
+	default 0
 	help
-	  Say Y here to enable the kernel to panic on "hung tasks",
-	  which are bugs that cause the kernel to leave a task stuck
-	  in uninterruptible "D" state.
+	  The number of hung tasks must be detected to trigger kernel panic.
+
+	  - 0: Don't trigger panic
+	  - N: Panic when N hung tasks are detected

 	  The panic can be used in combination with panic_timeout,
 	  to cause the system to reboot automatically after a
diff --git a/tools/testing/selftests/wireguard/qemu/kernel.config b/tools/testing/selftests/wireguard/qemu/kernel.config
index 936b18b..0504c11 100644
--- a/tools/testing/selftests/wireguard/qemu/kernel.config
+++ b/tools/testing/selftests/wireguard/qemu/kernel.config
@@ -81,7 +81,7 @@ CONFIG_WQ_WATCHDOG=y
 CONFIG_DETECT_HUNG_TASK=y
 CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
 CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
-CONFIG_BOOTPARAM_HUNG_TASK_PANIC=y
+CONFIG_BOOTPARAM_HUNG_TASK_PANIC=1
 CONFIG_PANIC_TIMEOUT=-1
 CONFIG_STACKTRACE=y
 CONFIG_EARLY_PRINTK=y
…
This patch extends the …
Would an imperative wording be more helpful for the change description? https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/submitting-patches.rst?h=v6.17#n94
…
+++ b/kernel/hung_task.c
… @@ -229,9 +232,11 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout) …
trace_sched_process_hang(t);
- if (sysctl_hung_task_panic) {
+ if (sysctl_hung_task_panic &&
+     (total_hung_task >= sysctl_hung_task_panic)) {
…
I suggest using the following source code variant instead.
if (sysctl_hung_task_panic && total_hung_task >= sysctl_hung_task_panic) {
Regards, Markus
…
This patch extends the …
Would an imperative wording be more helpful for the change description? https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/submitting-patches.rst?h=v6.17#n94
will fix in next version
…
+++ b/kernel/hung_task.c
… @@ -229,9 +232,11 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout) …
trace_sched_process_hang(t);
- if (sysctl_hung_task_panic) {
+ if (sysctl_hung_task_panic &&
+     (total_hung_task >= sysctl_hung_task_panic)) {
…
I suggest using the following source code variant instead.
if (sysctl_hung_task_panic && total_hung_task >= sysctl_hung_task_panic) {
will fix in next version
thanks
-Li
Regards, Markus
Hi--
On 10/12/25 4:50 AM, lirongqing wrote:
From: Li RongQing lirongqing@baidu.com
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a51ab46..7d9a8ee 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1992,14 +1992,20 @@ the added memory block itself do not be affected.
 	hung_task_panic=
-			[KNL] Should the hung task detector generate panics.
-			Format: 0 | 1
+			[KNL] Number of hung tasks to trigger kernel panic.
+			Format: <int>
+
+			Set this to the number of hung tasks that must be
+			detected before triggering a kernel panic.
+
+			0: don't panic
+			1: panic immediately on first hung task
+			N: panic after N hung tasks are detect
are detected
Thanks for the patch!
I noticed the implementation panics only when N tasks are detected within a single scan, because total_hung_task is reset for each check_hung_uninterruptible_tasks() run.
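For illustration, a minimal standalone sketch of that per-scan accounting (plain C with hypothetical names such as detect_count and scan_would_panic, not the kernel code itself), assuming a threshold of 3:

/*
 * Hypothetical userspace illustration of the v3 logic: the baseline is
 * re-snapshotted before every scan, so the threshold effectively means
 * "N hung tasks found within one scan".
 */
#include <stdio.h>

static unsigned long detect_count;       /* stands in for sysctl_hung_task_detect_count */
static unsigned int panic_threshold = 3; /* stands in for sysctl_hung_task_panic */

/* One watchdog scan over ntasks tasks; hung[i] marks a stuck task. */
static int scan_would_panic(const int *hung, int ntasks)
{
	unsigned long prev = detect_count;   /* snapshot taken once per scan */

	for (int i = 0; i < ntasks; i++) {
		if (!hung[i])
			continue;
		detect_count++;
		if (panic_threshold && detect_count - prev >= panic_threshold)
			return 1;            /* the kernel would set hung_task_call_panic here */
	}
	return 0;
}

int main(void)
{
	int scan1[] = { 1, 0, 1, 0 };        /* 2 hung tasks: below threshold */
	int scan2[] = { 1, 1, 1, 0 };        /* 3 hung tasks in one scan */

	printf("scan1 panics: %d\n", scan_would_panic(scan1, 4));  /* prints 0 */
	printf("scan2 panics: %d\n", scan_would_panic(scan2, 4));  /* prints 1 */
	return 0;
}

With a threshold of 3, two hung tasks spread across different scans never trip the panic, while three found within the same scan do.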
So some suggestions to align the documentation with the code's behavior below :)
On 2025/10/12 19:50, lirongqing wrote:
From: Li RongQing lirongqing@baidu.com
Currently, when 'hung_task_panic' is enabled, the kernel panics immediately upon detecting the first hung task. However, some hung tasks are transient and the system can recover, while others are persistent and may accumulate progressively.
This patch extends the 'hung_task_panic' sysctl to allow specifying the number of hung tasks that must be detected before triggering a kernel panic. This provides finer control for environments where transient hangs may occur but persistent hangs should still be fatal.
The sysctl can be set to:
- 0: disabled (never panic)
- 1: original behavior (panic on first hung task)
- N: panic when N hung tasks are detected
This maintains backward compatibility while providing more flexibility for handling different hang scenarios.
Signed-off-by: Li RongQing lirongqing@baidu.com
Diff with v2: not add new sysctl, extend hung_task_panic
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a51ab46..7d9a8ee 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1992,14 +1992,20 @@ the added memory block itself do not be affected.
 	hung_task_panic=
-			[KNL] Should the hung task detector generate panics.
-			Format: 0 | 1
+			[KNL] Number of hung tasks to trigger kernel panic.
+			Format: <int>
+
+			Set this to the number of hung tasks that must be
+			detected before triggering a kernel panic.
+
+			0: don't panic
+			1: panic immediately on first hung task
+			N: panic after N hung tasks are detect
The description should be more specific :)
N: panic after N hung tasks are detected in a single scan
Would it be better and cleaner?
-			A value of 1 instructs the kernel to panic when a
-			hung task is detected. The default value is controlled
-			by the CONFIG_BOOTPARAM_HUNG_TASK_PANIC build-time
-			option. The value selected by this boot parameter can
-			be changed later by the kernel.hung_task_panic sysctl.
+			The default value is controlled by the
+			CONFIG_BOOTPARAM_HUNG_TASK_PANIC build-time option. The value
+			selected by this boot parameter can be changed later by the
+			kernel.hung_task_panic sysctl.

 	hvc_iucv=	[S390] Number of z/VM IUCV hypervisor console (HVC)
 			terminal devices. Valid values: 0..8

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index f3ee807..0a8dfab 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -397,7 +397,8 @@ a hung task is detected.

 hung_task_panic
 ===============

-Controls the kernel's behavior when a hung task is detected.
+When set to a non-zero value, a kernel panic will be triggered if the
+number of detected hung tasks reaches this value
Hmm... that is also ambiguous ...
+When set to a non-zero value, a kernel panic will be triggered if the
+number of hung tasks found during a single scan reaches this value.
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug

 config BOOTPARAM_HUNG_TASK_PANIC
-	bool "Panic (Reboot) On Hung Tasks"
+	int "Number of hung tasks to trigger kernel panic"
 	depends on DETECT_HUNG_TASK
+	default 0
 	help
-	  Say Y here to enable the kernel to panic on "hung tasks",
-	  which are bugs that cause the kernel to leave a task stuck
-	  in uninterruptible "D" state.
+	  The number of hung tasks must be detected to trigger kernel panic.
+
+	  - 0: Don't trigger panic
+	  - N: Panic when N hung tasks are detected
+ - N: Panic when N hung tasks are detected in a single scan
With these documentation changes, this patch would accurately describe its behavior, IMHO.
On Tue 2025-10-14 13:23:58, Lance Yang wrote:
Thanks for the patch!
I noticed the implementation panics only when N tasks are detected within a single scan, because total_hung_task is reset for each check_hung_uninterruptible_tasks() run.
Great catch!
Does it make sense? Is it the intended behavior, please?
So some suggestions to align the documentation with the code's behavior below :)
On 2025/10/12 19:50, lirongqing wrote:
From: Li RongQing lirongqing@baidu.com
Currently, when 'hung_task_panic' is enabled, the kernel panics immediately upon detecting the first hung task. However, some hung tasks are transient and the system can recover, while others are persistent and may accumulate progressively.
My understanding is that this patch wanted to do:
+ report even temporary stalls
+ panic only when the stall was much longer and likely persistent
Which might make some sense. But the code does something else.
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -229,9 +232,11 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
 	 */
 	sysctl_hung_task_detect_count++;

+	total_hung_task = sysctl_hung_task_detect_count - prev_detect_count;
 	trace_sched_process_hang(t);

-	if (sysctl_hung_task_panic) {
+	if (sysctl_hung_task_panic &&
+	    (total_hung_task >= sysctl_hung_task_panic)) {
 		console_verbose();
 		hung_task_show_lock = true;
 		hung_task_call_panic = true;
I would expect that this patch added another counter, similar to sysctl_hung_task_detect_count. It would be incremented only once per check when a hung task was detected. And it would be cleared (reset) when no hung task was found.
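A rough standalone sketch of that alternative (plain C with hypothetical names, not a kernel patch): a counter bumped once per scan that found at least one hung task, and cleared by any scan that found none.

/*
 * Hypothetical userspace illustration: count consecutive watchdog scans
 * that saw at least one hung task; reset when a scan sees none.
 */
#include <stdio.h>

static unsigned int consecutive_hung_scans;
static unsigned int panic_threshold = 3;    /* scans, not tasks */

/* Called once at the end of each watchdog scan. */
static int end_of_scan_would_panic(int scan_found_hung_task)
{
	if (scan_found_hung_task)
		consecutive_hung_scans++;
	else
		consecutive_hung_scans = 0;  /* the stall went away: start over */

	return panic_threshold && consecutive_hung_scans >= panic_threshold;
}

int main(void)
{
	int scans[] = { 1, 1, 0, 1, 1, 1 };  /* 1 = this scan saw a hung task */

	for (int i = 0; i < 6; i++)
		printf("scan %d -> panic: %d\n", i, end_of_scan_would_panic(scans[i]));
	/* Panics only at scan 5, after three consecutive scans with hung tasks. */
	return 0;
}

Here the threshold counts consecutive scans with hung tasks rather than hung tasks within one scan, so it fires on persistent stalls rather than on mass stalls.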
Best Regards, Petr
On Tue 2025-10-14 13:23:58, Lance Yang wrote:
Thanks for the patch!
I noticed the implementation panics only when N tasks are detected within a single scan, because total_hung_task is reset for each check_hung_uninterruptible_tasks() run.
Great catch!
Does it make sense? Is it the intended behavior, please?
Yes, this is intended behavior
So some suggestions to align the documentation with the code's behavior below :)
On 2025/10/12 19:50, lirongqing wrote:
From: Li RongQing lirongqing@baidu.com
Currently, when 'hung_task_panic' is enabled, the kernel panics immediately upon detecting the first hung task. However, some hung tasks are transient and the system can recover, while others are persistent and may accumulate progressively.
My understanding is that this patch wanted to do:
- report even temporary stalls
- panic only when the stall was much longer and likely persistent
Which might make some sense. But the code does something else.
A single task hanging for an extended period may not be a critical issue, as users might still log into the system to investigate. However, if multiple tasks hang simultaneously, such as in cases of I/O hangs caused by disk failures, it could prevent users from logging in and become a serious problem, and a panic is expected.
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -229,9 +232,11 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
 	 */
 	sysctl_hung_task_detect_count++;

+	total_hung_task = sysctl_hung_task_detect_count - prev_detect_count;
 	trace_sched_process_hang(t);

-	if (sysctl_hung_task_panic) {
+	if (sysctl_hung_task_panic &&
+	    (total_hung_task >= sysctl_hung_task_panic)) {
 		console_verbose();
 		hung_task_show_lock = true;
 		hung_task_call_panic = true;

I would expect that this patch added another counter, similar to sysctl_hung_task_detect_count. It would be incremented only once per check when a hung task was detected. And it would be cleared (reset) when no hung task was found.
Best Regards, Petr
On Tue 2025-10-14 10:49:53, Li,Rongqing wrote:
On Tue 2025-10-14 13:23:58, Lance Yang wrote:
Thanks for the patch!
I noticed the implementation panics only when N tasks are detected within a single scan, because total_hung_task is reset for each check_hung_uninterruptible_tasks() run.
Great catch!
Does it make sense? Is it the intended behavior, please?
Yes, this is intended behavior
So some suggestions to align the documentation with the code's behavior below :)
On 2025/10/12 19:50, lirongqing wrote:
From: Li RongQing lirongqing@baidu.com
Currently, when 'hung_task_panic' is enabled, the kernel panics immediately upon detecting the first hung task. However, some hung tasks are transient and the system can recover, while others are persistent and may accumulate progressively.
My understanding is that this patch wanted to do:
- report even temporary stalls
- panic only when the stall was much longer and likely persistent
Which might make some sense. But the code does something else.
A single task hanging for an extended period may not be a critical issue, as users might still log into the system to investigate. However, if multiple tasks hang simultaneously, such as in cases of I/O hangs caused by disk failures, it could prevent users from logging in and become a serious problem, and a panic is expected.
I see. This is another approach and it makes sense as well. And this is a much clearer description than the original text.
I would also update the subject to something like:
hung_task: Panic when there are more than N hung tasks at the same time
That said, I think that both approaches make sense.
Your approach would trigger the panic when many processes are stuck. Note that it still might be a transient state. But I agree that the more stuck processes exist, the more serious the problem likely is for the health of the system.
My approach would trigger the panic when a single process hangs for a long time. It will more likely trigger only when the problem is persistent. The seriousness depends on which particular process gets stuck.
I am fine with your approach. Just please make it more clear that the number means the number of hung tasks at the same time. And mention the problems with logging in, ...
Best Regards, Petr
I would also update the subject to something like:
hung_task: Panic when there are more than N hung tasks at the same time
Ok, I will update
That said, I think that both approaches make sense.
Your approach would trigger the panic when many processes are stuck. Note that it still might be a transient state. But I agree that the more stuck processes exist, the more serious the problem likely is for the health of the system.
My approach would trigger the panic when a single process hangs for a long time. It will more likely trigger only when the problem is persistent. The seriousness depends on which particular process gets stuck.
Yes, both are reasonable requirements, and I will leave it to you or anyone else interested to implement it
Thanks
-Li.
I am fine with your approach. Just please make it more clear that the number means the number of hung tasks at the same time. And mention the problems with logging in, ...
Best Regards, Petr
On 2025/10/14 17:45, Petr Mladek wrote:
On Tue 2025-10-14 13:23:58, Lance Yang wrote:
Thanks for the patch!
I noticed the implementation panics only when N tasks are detected within a single scan, because total_hung_task is reset for each check_hung_uninterruptible_tasks() run.
Great catch!
Does it make sense? Is it the intended behavior, please?
So some suggestions to align the documentation with the code's behavior below :)
On 2025/10/12 19:50, lirongqing wrote:
From: Li RongQing lirongqing@baidu.com
Currently, when 'hung_task_panic' is enabled, the kernel panics immediately upon detecting the first hung task. However, some hung tasks are transient and the system can recover, while others are persistent and may accumulate progressively.
My understanding is that this patch wanted to do:
+ report even temporary stalls
+ panic only when the stall was much longer and likely persistent

Which might make some sense. But the code does something else.
Cool. Sounds good to me!
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -229,9 +232,11 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
 	 */
 	sysctl_hung_task_detect_count++;

+	total_hung_task = sysctl_hung_task_detect_count - prev_detect_count;
 	trace_sched_process_hang(t);

-	if (sysctl_hung_task_panic) {
+	if (sysctl_hung_task_panic &&
+	    (total_hung_task >= sysctl_hung_task_panic)) {
 		console_verbose();
 		hung_task_show_lock = true;
 		hung_task_call_panic = true;

I would expect that this patch added another counter, similar to sysctl_hung_task_detect_count. It would be incremented only once per check when a hung task was detected. And it would be cleared (reset) when no hung task was found.
Much cleaner. We could add an internal counter for that, yeah. No need to expose it to userspace ;)
Petr's suggestion seems to align better with the goal of panicking on persistent hangs, IMHO. Panic after N consecutive checks with hung tasks.
@RongQing does that work for you?
Currently, when 'hung_task_panic' is enabled, the kernel panics immediately upon detecting the first hung task. However, some hung tasks are transient and the system can recover, while others are persistent and may accumulate progressively.
My understanding is that this patch wanted to do:
+ report even temporary stalls
+ panic only when the stall was much longer and likely persistent

Which might make some sense. But the code does something else.
Cool. Sounds good to me!
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -229,9 +232,11 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
 	 */
 	sysctl_hung_task_detect_count++;

+	total_hung_task = sysctl_hung_task_detect_count - prev_detect_count;
 	trace_sched_process_hang(t);

-	if (sysctl_hung_task_panic) {
+	if (sysctl_hung_task_panic &&
+	    (total_hung_task >= sysctl_hung_task_panic)) {
 		console_verbose();
 		hung_task_show_lock = true;
 		hung_task_call_panic = true;

I would expect that this patch added another counter, similar to sysctl_hung_task_detect_count. It would be incremented only once per check when a hung task was detected. And it would be cleared (reset) when no hung task was found.
Much cleaner. We could add an internal counter for that, yeah. No need to expose it to userspace ;)
Petr's suggestion seems to align better with the goal of panicking on persistent hangs, IMHO. Panic after N consecutive checks with hung tasks.
@RongQing does that work for you?
In my opinion, a single task hang is not a critical issue. Fatal hangs, such as those caused by I/O hangs, network card failures, or hangs while holding locks, will inevitably lead to multiple tasks being hung. In such scenarios, users cannot even log in to the machine, making it extremely difficult to investigate the root cause. Therefore, I believe the current approach is sound. What's your opinion?
-Li
On 2025/10/14 19:18, Li,Rongqing wrote:
Currently, when 'hung_task_panic' is enabled, the kernel panics immediately upon detecting the first hung task. However, some hung tasks are transient and the system can recover, while others are persistent and may accumulate progressively.
My understanding is that this patch wanted to do:
+ report even temporary stalls
+ panic only when the stall was much longer and likely persistent

Which might make some sense. But the code does something else.
Cool. Sounds good to me!
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -229,9 +232,11 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
 	 */
 	sysctl_hung_task_detect_count++;

+	total_hung_task = sysctl_hung_task_detect_count - prev_detect_count;
 	trace_sched_process_hang(t);

-	if (sysctl_hung_task_panic) {
+	if (sysctl_hung_task_panic &&
+	    (total_hung_task >= sysctl_hung_task_panic)) {
 		console_verbose();
 		hung_task_show_lock = true;
 		hung_task_call_panic = true;

I would expect that this patch added another counter, similar to sysctl_hung_task_detect_count. It would be incremented only once per check when a hung task was detected. And it would be cleared (reset) when no hung task was found.
Much cleaner. We could add an internal counter for that, yeah. No need to expose it to userspace ;)
Petr's suggestion seems to align better with the goal of panicking on persistent hangs, IMHO. Panic after N consecutive checks with hung tasks.
@RongQing does that work for you?
In my opinion, a single task hang is not a critical issue. Fatal hangs, such as those caused by I/O hangs, network card failures, or hangs while holding locks, will inevitably lead to multiple tasks being hung. In such scenarios, users cannot even log in to the machine, making it extremely difficult to investigate the root cause. Therefore, I believe the current approach is sound. What's your opinion?
Thanks! I'm fine with either approach. Let's hear what the other folks think ;)