pidfds are file descriptors referring to a process, created with the
CLONE_PIDFD clone(2) flag. The Android low memory killer (LMK) needs pidfd
polling support to replace code that currently checks for the existence of
/proc/<pid> to know whether a process it has signalled to be killed has
died, which is both racy and slow. The pidfd poll approach is race-free,
and also allows the LMK to do other things (such as polling on other fds)
while waiting for the process being killed to die.
It also prevents the situation where a PID is reused between the time the
LMK sends a kill signal and the time it checks for the existence of the
PID: in that case the existence check may well be looking at the wrong
process.
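As a rough illustration (not part of this patch; victim_pid and the
handling of the result are placeholders), an LMK-style daemon that got a
pidfd for the victim via clone(2) with CLONE_PIDFD could wait for it to
die along these lines:

    struct pollfd pfd = { .fd = pidfd, .events = POLLIN };

    kill(victim_pid, SIGKILL);          /* ask the victim to die */

    /* Blocks until the process has exited: no /proc polling, no
     * PID-reuse race, and other fds can be polled alongside.
     */
    if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN))
            ;   /* victim is gone, its memory can be reclaimed */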
In this patch, we hook into the same existing mechanism the kernel uses
when the parent of the task group is to be notified (do_notify_parent):
that is also the point at which tasks waiting on a poll of the pidfd are
awakened.
We have decided to include the waitqueue in struct pid for the following
reasons:
1. The wait queue has to survive for the lifetime of the poll. Including
it in task_struct would not be an option in this case because the task can
be reaped and destroyed before the poll returns.
2. Using struct pid for the waitqueue means that during de_thread(), the
new thread group leader automatically inherits the waitqueue/pid even
though its task_struct is different.
Appropriate test cases are added in the second patch to provide coverage
of all the cases the patch is handling.
Andy had a similar patch [1] in the past, which was a good reference;
however, this patch properly handles various situations related to thread
group existence and to how/where the notification happens, and also fixes
other bugs (waitqueue lifetime). Daniel had a similar patch [2] recently,
which this patch supersedes.
[1] https://lore.kernel.org/patchwork/patch/345098/
[2] https://lore.kernel.org/lkml/20181029175322.189042-1-dancol@google.com/
Cc: luto@amacapital.net
Cc: rostedt@goodmis.org
Cc: dancol@google.com
Cc: sspatil@google.com
Cc: christian@brauner.io
Cc: jannh@google.com
Cc: surenb@google.com
Cc: timmurray@google.com
Cc: Jonathan Kowalski <bl0pbl33p@gmail.com>
Cc: torvalds@linux-foundation.org
Cc: kernel-team@android.com
Co-developed-by: Daniel Colascione <dancol@google.com>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
---
RFC -> v1:
* Based on CLONE_PIDFD patches: https://lwn.net/Articles/786244/
* Updated selftests.
* Renamed poll wake function to do_notify_pidfd.
* Removed the dependency on EXIT flags
* Removed the POLLERR flag since its semantics are controversial and
  we don't have use cases for it right now (we can add it later if
  there's a need for it).
include/linux/pid.h | 3 +++
kernel/fork.c | 33 +++++++++++++++++++++++++++++++++
kernel/pid.c | 2 ++
kernel/signal.c | 14 ++++++++++++++
4 files changed, 52 insertions(+)
diff --git a/include/linux/pid.h b/include/linux/pid.h
index 3c8ef5a199ca..1484db6ca8d1 100644
--- a/include/linux/pid.h
+++ b/include/linux/pid.h
@@ -3,6 +3,7 @@
#define _LINUX_PID_H
#include <linux/rculist.h>
+#include <linux/wait.h>
enum pid_type
{
@@ -60,6 +61,8 @@ struct pid
unsigned int level;
/* lists of tasks that use this pid */
struct hlist_head tasks[PIDTYPE_MAX];
+ /* wait queue for pidfd notifications */
+ wait_queue_head_t wait_pidfd;
struct rcu_head rcu;
struct upid numbers[1];
};
diff --git a/kernel/fork.c b/kernel/fork.c
index 5525837ed80e..fb3b614f6456 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1685,8 +1685,41 @@ static void pidfd_show_fdinfo(struct seq_file *m, struct file *f)
}
#endif
+static unsigned int pidfd_poll(struct file *file, struct poll_table_struct *pts)
+{
+ struct task_struct *task;
+ struct pid *pid;
+ int poll_flags = 0;
+
+ /*
+ * tasklist_lock must be held to avoid racing with
+ * changes in exit_state and the wake up. Basically to avoid:
+ *
+ * P0: read exit_state = 0
+ * P1: write exit_state = EXIT_DEAD
+ * P1: Do a wake up - wq is empty, so do nothing
+ * P0: Queue for polling - wait forever.
+ */
+ read_lock(&tasklist_lock);
+ pid = file->private_data;
+ task = pid_task(pid, PIDTYPE_PID);
+ WARN_ON_ONCE(task && !thread_group_leader(task));
+
+ if (!task || (task->exit_state && thread_group_empty(task)))
+ poll_flags = POLLIN | POLLRDNORM;
+
+ if (!poll_flags)
+ poll_wait(file, &pid->wait_pidfd, pts);
+
+ read_unlock(&tasklist_lock);
+
+ return poll_flags;
+}
+
+
const struct file_operations pidfd_fops = {
.release = pidfd_release,
+ .poll = pidfd_poll,
#ifdef CONFIG_PROC_FS
.show_fdinfo = pidfd_show_fdinfo,
#endif
diff --git a/kernel/pid.c b/kernel/pid.c
index 20881598bdfa..5c90c239242f 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -214,6 +214,8 @@ struct pid *alloc_pid(struct pid_namespace *ns)
for (type = 0; type < PIDTYPE_MAX; ++type)
INIT_HLIST_HEAD(&pid->tasks[type]);
+ init_waitqueue_head(&pid->wait_pidfd);
+
upid = pid->numbers + ns->level;
spin_lock_irq(&pidmap_lock);
if (!(ns->pid_allocated & PIDNS_ADDING))
diff --git a/kernel/signal.c b/kernel/signal.c
index 1581140f2d99..16e7718316e5 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -1800,6 +1800,17 @@ int send_sigqueue(struct sigqueue *q, struct pid *pid, enum pid_type type)
return ret;
}
+static void do_notify_pidfd(struct task_struct *task)
+{
+ struct pid *pid;
+
+ lockdep_assert_held(&tasklist_lock);
+
+ pid = get_task_pid(task, PIDTYPE_PID);
+ wake_up_all(&pid->wait_pidfd);
+ put_pid(pid);
+}
+
/*
* Let a parent know about the death of a child.
* For a stopped/continued status change, use do_notify_parent_cldstop instead.
@@ -1823,6 +1834,9 @@ bool do_notify_parent(struct task_struct *tsk, int sig)
BUG_ON(!tsk->ptrace &&
(tsk->group_leader != tsk || !thread_group_empty(tsk)));
+ /* Wake up all pidfd waiters */
+ do_notify_pidfd(tsk);
+
if (sig != SIGCHLD) {
/*
* This is only possible if parent == real_parent.
--
2.21.0.593.g511ec345e18-goog
Hi,
this series is the result of the discussion to the RFC patch found at [1].
The goal is to make x86's ftrace_int3_handler() not simply skip over
the trapping instruction, as this is problematic in the context of
the live patching consistency model. For details, c.f. the commit message
of [3/4] ("x86/ftrace: make ftrace_int3_handler() not to skip fops
invocation").
Everything is based on v5.1-rc6; please let me know in case you want me to
rebase on something else.
For x86_64, the live patching selftest added in [4/4] succeeds with this
series applied and fails without it. For 32 bit, I have only compile-tested.
checkpatch reports warnings about
- an overlong line in assembly -- I chose to ignore that
- MAINTAINERS perhaps needing updates due to the new files
arch/x86/kernel/ftrace_int3_stubs.S and
tools/testing/selftests/livepatch/test-livepatch-vs-ftrace.sh.
As the existing arch/x86/kernel/ftrace_{32,64}.S haven't got an
explicit entry either, this one is probably Ok? The selftest
definitely is.
Changes to the RFC patch:
- s/trampoline/stub/ to avoid confusion with the ftrace_ops' trampolines,
- use a fixed-size stack kept in struct thread_info for passing the
(adjusted) ->ip values from ftrace_int3_handler() to the stubs (a rough
sketch follows after this list),
- provide one stub for each of the two possible jump targets and hardcode
those,
- add the live patching selftest.
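To illustrate the fixed-size stack mentioned above, a rough sketch of what
it could look like in struct thread_info (the names here are purely
illustrative, the actual layout is introduced in patch [1/4]):

    struct ftrace_int3_stack {
            int             depth;          /* number of occupied slots */
            unsigned long   slots[4];       /* adjusted ->ip values     */
    };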
Thanks,
Nicolai
Nicolai Stange (4):
x86/thread_info: introduce ->ftrace_int3_stack member
ftrace: drop 'static' qualifier from ftrace_ops_list_func()
x86/ftrace: make ftrace_int3_handler() not to skip fops invocation
selftests/livepatch: add "ftrace a live patched function" test
arch/x86/include/asm/thread_info.h | 11 +++
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/asm-offsets.c | 8 +++
arch/x86/kernel/ftrace.c | 79 +++++++++++++++++++---
arch/x86/kernel/ftrace_int3_stubs.S | 61 +++++++++++++++++
kernel/trace/ftrace.c | 8 +--
tools/testing/selftests/livepatch/Makefile | 3 +-
.../livepatch/test-livepatch-vs-ftrace.sh | 44 ++++++++++++
8 files changed, 199 insertions(+), 16 deletions(-)
create mode 100644 arch/x86/kernel/ftrace_int3_stubs.S
create mode 100755 tools/testing/selftests/livepatch/test-livepatch-vs-ftrace.sh
--
2.13.7
On Mon, Apr 29, 2019 at 11:53 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
>
>
> On Mon, Apr 29, 2019, 11:42 Andy Lutomirski <luto@kernel.org> wrote:
>>
>>
>> I'm less than 100% convinced about this argument. Sure, an NMI right
>> there won't cause a problem. But an NMI followed by an interrupt will
>> kill us if preemption is on. I can think of three solutions:
>
>
> No, because either the sti shadow disables nmi too (that's the case on some CPUs at least) or the iret from nmi does.
>
> Otherwise you could never trust the whole sti shadow thing - and it very much is part of the architecture.
>
Is this documented somewhere? And do you actually believe that this
is true under KVM, Hyper-V, etc.? As I recall, Andrew Cooper dug into
the way that VMX dealt with this stuff and concluded that the SDM was
blatantly wrong in many cases, which leads me to believe that Xen
HVM/PVH is the *only* hypervisor that gets it right.
Steven's point about batched updates is quite valid, though. My
personal favorite solution to this whole mess is to rework the whole
thing so that the int3 handler simply returns and retries and to
replace the sync_core() broadcast with an SMI broadcast. I don't know
whether this will actually work on real CPUs and on VMs and whether
it's going to crash various BIOSes out there.
On Mon, Apr 29, 2019 at 12:02 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> If nmi were to break it, it would be a cpu bug. I'm pretty sure I've
> seen the "shadow stops even nmi" documented for some uarch, but as
> mentioned it's not necessarily the only way to guarantee the shadow.
In fact, the documentation is simply the official Intel instruction
docs for "STI":
The IF flag and the STI and CLI instructions do not prohibit the
generation of exceptions and NMI interrupts. NMI interrupts (and
SMIs) may be blocked for one macroinstruction following an STI.
note the "may be blocked". As mentioned, that's just one option for
not having NMI break the STI shadow guarantee, but it's clearly one
that Intel has done at times, and clearly even documents as having
done so.
There is absolutely no question that the sti shadow is real, and that
people have depended on it for _decades_. It would be a horrible
errata if the shadow can just be made to go away by randomly getting
an NMI or SMI.
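As a concrete example of that reliance (a sketch only, though it is along
the lines of what the kernel's native_safe_halt() does), the classic idle
sequence depends on nothing being delivered between the two instructions:

    static inline void safe_halt_example(void)
    {
            /*
             * The sti shadow guarantees the wakeup interrupt can only be
             * taken after the hlt, so a wakeup arriving "in between"
             * cannot be lost.
             */
            asm volatile("sti; hlt" : : : "memory");
    }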
Linus
On Mon, 29 Apr 2019 11:59:04 -0700
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> I really don't care. Just do what I suggested, and if you have numbers to
> show problems, then maybe I'll care.
>
Are you suggesting that I rewrite the code to do it one function at a
time? This has always been batch mode. This is not something new. The
function tracer has been around longer than the text poke code.
> Right now you're just making excuses for this. I described the solution
> months ago, now I've written a patch, if that's not good enough then we can
> just skip this all entirely.
>
> Honestly, if you need to rewrite tens of thousands of calls, maybe you're
> doing something wrong?
>
# cd /sys/kernel/debug/tracing
# cat available_filter_functions | wc -l
45856
# cat enabled_functions | wc -l
0
# echo function > current_tracer
# cat enabled_functions | wc -l
45856
There, I just enabled 45,856 function call sites in one shot!
How else do you want to update them? Every function in the kernel has a
nop that turns into a call to the ftrace_handler; if I add another user
of that code, it will change each one as well.
-- Steve