We observed an issue with the NXP 5.15 LTS kernel where dma_alloc_coherent()
may sometimes fail when multiple processes try to allocate CMA memory
in parallel.
This issue can be easily reproduced on an MX6Q SDB board with the latest
linux-next kernel by writing a test module that creates 16 or 32 threads
allocating randomly sized CMA memory in parallel in the background
(a minimal sketch of such a module follows the log excerpt below).
Alternatively, simply enable CONFIG_CMA_DEBUG and you can see endless
CMA alloc retries during boot:
[ 1.452124] cma: cma_alloc(): memory range at (ptrval) is busy, retrying
....
(thousands of retries)
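For reference, a minimal sketch of such a stress module (this is an
illustration, not the actual cma_alloc.ko used in the logs below; the
platform-device trick to obtain a DMA-capable struct device, the size
mix, and all names here are assumptions):

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/kthread.h>
#include <linux/dma-mapping.h>
#include <linux/prandom.h>
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/smp.h>

static int pnum = 16;
module_param(pnum, int, 0444);

static struct platform_device *pdev;
static struct task_struct **threads;

static int cma_stress_fn(void *data)
{
	while (!kthread_should_stop()) {
		/* random size up to 757 pages, like the mix in the logs */
		size_t pages = (prandom_u32() % 757) + 1;
		dma_addr_t handle;
		void *buf;

		buf = dma_alloc_coherent(&pdev->dev, pages << PAGE_SHIFT,
					 &handle, GFP_KERNEL);
		if (buf) {
			pr_info("cpu: %d, pid: %d, pages %zu\n",
				raw_smp_processor_id(), current->pid, pages);
			dma_free_coherent(&pdev->dev, pages << PAGE_SHIFT,
					  buf, handle);
		} else {
			pr_err("alloc of %zu pages failed\n", pages);
		}
		msleep(10);
	}
	return 0;
}

static int __init cma_stress_init(void)
{
	int i;

	pdev = platform_device_register_simple("cma_alloc_test", -1, NULL, 0);
	if (IS_ERR(pdev))
		return PTR_ERR(pdev);
	/* coherent allocations of these sizes go through the CMA area
	 * when CONFIG_DMA_CMA is enabled */
	dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));

	threads = kcalloc(pnum, sizeof(*threads), GFP_KERNEL);
	if (!threads) {
		platform_device_unregister(pdev);
		return -ENOMEM;
	}
	pr_info("CMA alloc test enter: thread number: %d\n", pnum);
	for (i = 0; i < pnum; i++)
		threads[i] = kthread_run(cma_stress_fn, NULL,
					 "cma_test/%d", i);
	return 0;
}

static void __exit cma_stress_exit(void)
{
	int i;

	for (i = 0; i < pnum; i++)
		if (!IS_ERR_OR_NULL(threads[i]))
			kthread_stop(threads[i]);
	kfree(threads);
	platform_device_unregister(pdev);
}

module_init(cma_stress_init);
module_exit(cma_stress_exit);
MODULE_LICENSE("GPL");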
The root cause of this issue is that since commit a4efc174b382
("mm/cma.c: remove redundant cma_mutex lock"), CMA supports concurrent
memory allocation.
It is possible that the memory range process A tries to allocate has
already been isolated by process B's allocation during memory migration.
The problem is that the memory range isolated during one allocation
by start_isolate_page_range() can be much bigger than the size actually
requested, because the range is aligned to MAX_ORDER_NR_PAGES.
Taking an ARMv7 platform with 1G of memory as an example, when
MAX_ORDER_NR_PAGES is big (e.g. 32M with max_order 14) and the CMA area
is relatively small (e.g. 128M), there are only 4 MAX_ORDER slots, so it
is very likely that all CMA memory has already been isolated by other
processes when one of them tries to allocate via dma_alloc_coherent().
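Spelling the numbers out: with 4K pages and max_order 14,
MAX_ORDER_NR_PAGES = 2^(14 - 1) = 8192 pages = 32M, so a 128M CMA area
contains only 128M / 32M = 4 MAX_ORDER-aligned blocks; four in-flight
allocations are enough to have the entire area isolated at once.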
Since the current CMA code scans the whole available CMA area only once,
dma_alloc_coherent() can easily fail under contention with other
processes.
This patchset introduces a retry mechanism that rescans the CMA bitmap
on an -EBUSY error, since the target pageblock may only have been
temporarily isolated by another allocation and released later.
It also improves CMA allocation performance by trying the next
MAX_ORDER_NR_PAGES range during retries, rather than looping within the
same isolated range in small steps, which wastes CPU cycles.
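To illustrate the second point, here is a toy model of where the scan
resumes after an -EBUSY (plain user-space C; the function name and the
hard-coded constant are invented here, this is not the mm/cma.c code):

#include <stdio.h>

#define MAX_ORDER_NR_PAGES 8192UL	/* 32M in 4K pages, as above */

/*
 * If the contiguous allocation failed with -EBUSY at page offset 'pos',
 * the whole MAX_ORDER-aligned block around it has been isolated by the
 * competing allocation, so probing again before the next boundary is
 * wasted work: jump straight past it.
 */
static unsigned long next_scan_start(unsigned long pos)
{
	return (pos + MAX_ORDER_NR_PAGES) & ~(MAX_ORDER_NR_PAGES - 1);
}

int main(void)
{
	unsigned long pos = 1234;	/* busy range inside block 0 */

	printf("-EBUSY at page offset %lu: retry scanning from page %lu\n",
	       pos, next_scan_start(pos));	/* prints 8192 */
	return 0;
}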
The following test is based on linux-next: next-20211213.
Without the fix, it easily fails.
# insmod cma_alloc.ko pnum=16
[ 274.322369] CMA alloc test enter: thread number: 16
[ 274.329948] cpu: 0, pid: 692, index 4 pages 144
[ 274.330143] cpu: 1, pid: 694, index 2 pages 44
[ 274.330359] cpu: 2, pid: 695, index 7 pages 757
[ 274.330760] cpu: 2, pid: 696, index 4 pages 144
[ 274.330974] cpu: 2, pid: 697, index 6 pages 512
[ 274.331223] cpu: 2, pid: 698, index 6 pages 512
[ 274.331499] cpu: 2, pid: 699, index 2 pages 44
[ 274.332228] cpu: 2, pid: 700, index 0 pages 7
[ 274.337421] cpu: 0, pid: 701, index 1 pages 38
[ 274.337618] cpu: 2, pid: 702, index 0 pages 7
[ 274.344669] cpu: 1, pid: 703, index 0 pages 7
[ 274.344807] cpu: 3, pid: 704, index 6 pages 512
[ 274.348269] cpu: 2, pid: 705, index 5 pages 148
[ 274.349490] cma: cma_alloc: reserved: alloc failed, req-size: 38 pages, ret: -16
[ 274.366292] cpu: 1, pid: 706, index 4 pages 144
[ 274.366562] cpu: 0, pid: 707, index 3 pages 128
[ 274.367356] cma: cma_alloc: reserved: alloc failed, req-size: 128 pages, ret: -16
[ 274.367370] cpu: 0, pid: 707, index 3 pages 128 failed
[ 274.371148] cma: cma_alloc: reserved: alloc failed, req-size: 148 pages, ret: -16
[ 274.375348] cma: cma_alloc: reserved: alloc failed, req-size: 144 pages, ret: -16
[ 274.384256] cpu: 2, pid: 708, index 0 pages 7
....
With the fix, 32 threads allocating in parallel can pass an overnight
stress test.
root@imx6qpdlsolox:~# insmod cma_alloc.ko pnum=32
[ 112.976809] cma_alloc: loading out-of-tree module taints kernel.
[ 112.984128] CMA alloc test enter: thread number: 32
[ 112.989748] cpu: 2, pid: 707, index 6 pages 512
[ 112.994342] cpu: 1, pid: 708, index 6 pages 512
[ 112.995162] cpu: 0, pid: 709, index 3 pages 128
[ 112.995867] cpu: 2, pid: 710, index 0 pages 7
[ 112.995910] cpu: 3, pid: 711, index 2 pages 44
[ 112.996005] cpu: 3, pid: 712, index 7 pages 757
[ 112.996098] cpu: 3, pid: 713, index 7 pages 757
...
[41877.368163] cpu: 1, pid: 737, index 2 pages 44
[41877.369388] cpu: 1, pid: 736, index 3 pages 128
[41878.486516] cpu: 0, pid: 737, index 2 pages 44
[41878.486515] cpu: 2, pid: 739, index 4 pages 144
[41878.486622] cpu: 1, pid: 736, index 3 pages 128
[41878.486948] cpu: 2, pid: 735, index 7 pages 757
[41878.487279] cpu: 2, pid: 738, index 4 pages 144
[41879.526603] cpu: 1, pid: 739, index 3 pages 128
[41879.606491] cpu: 2, pid: 737, index 3 pages 128
[41879.606550] cpu: 0, pid: 736, index 0 pages 7
[41879.612271] cpu: 2, pid: 738, index 4 pages 144
...
v1:
https://patchwork.kernel.org/project/linux-mm/cover/20211215080242.3034856-…
v2:
https://patchwork.kernel.org/project/linux-mm/cover/20220112131552.3329380-…
Dong Aisheng (2):
mm: cma: fix allocation may fail sometimes
mm: cma: try next MAX_ORDER_NR_PAGES during retry
mm/cma.c | 23 +++++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)
--
2.25.1
The current implementation of PTRACE_KILL is buggy, and has been for
many years, as it assumes its target has stopped in ptrace_stop. At a
quick skim it looks like this assumption has existed since ptrace
support was added in linux v1.0.
While PTRACE_KILL has been deprecated, we cannot remove it, as a quick
search with Google Code Search reveals many existing programs calling
it.
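For illustration, the typical (deprecated) call sequence such programs
use looks roughly like the following sketch; plain kill(pid, SIGKILL)
is the straightforward replacement:

#include <signal.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {			/* tracee */
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);		/* stop so the tracer can act */
		pause();
		_exit(0);
	}
	waitpid(pid, NULL, 0);		/* wait for the stop */
	ptrace(PTRACE_KILL, pid, NULL, NULL);	/* deprecated */
	/* kill(pid, SIGKILL) achieves the same, stopped or not */
	waitpid(pid, NULL, 0);
	return 0;
}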
When the tracee is not stopped at ptrace_stop, some fields are set that
are ignored except in ptrace_stop, making the userspace-visible behavior
of PTRACE_KILL a noop in those cases.
As the usual rules are not obeyed, it is not clear what the consequences
of calling PTRACE_KILL on a running process are. Presumably userspace
does not do this, as it achieves nothing.
Replace the implementation of PTRACE_KILL with a simple
send_sig_info(SIGKILL) followed by a return 0. This changes the
observable userspace behavior only in that PTRACE_KILL on a process not
stopped in ptrace_stop will also kill it. As that has always been the
intent of the code, this seems like a reasonable change.
Cc: stable(a)vger.kernel.org
Reported-by: Al Viro <viro(a)zeniv.linux.org.uk>
Suggested-by: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: "Eric W. Biederman" <ebiederm(a)xmission.com>
---
arch/x86/kernel/step.c | 3 +--
kernel/ptrace.c | 5 ++---
2 files changed, 3 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c
index 0f3c307b37b3..8e2b2552b5ee 100644
--- a/arch/x86/kernel/step.c
+++ b/arch/x86/kernel/step.c
@@ -180,8 +180,7 @@ void set_task_blockstep(struct task_struct *task, bool on)
*
* NOTE: this means that set/clear TIF_BLOCKSTEP is only safe if
* task is current or it can't be running, otherwise we can race
- * with __switch_to_xtra(). We rely on ptrace_freeze_traced() but
- * PTRACE_KILL is not safe.
+ * with __switch_to_xtra(). We rely on ptrace_freeze_traced().
*/
local_irq_disable();
debugctl = get_debugctlmsr();
diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index da30dcd477a0..7105821595bc 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -1236,9 +1236,8 @@ int ptrace_request(struct task_struct *child, long request,
return ptrace_resume(child, request, data);
case PTRACE_KILL:
- if (child->exit_state) /* already dead */
- return 0;
- return ptrace_resume(child, request, SIGKILL);
+ send_sig_info(SIGKILL, SEND_SIG_NOINFO, child);
+ return 0;
#ifdef CONFIG_HAVE_ARCH_TRACEHOOK
case PTRACE_GETREGSET:
--
2.35.3
xtensa is the last user of the PT_SINGLESTEP flag. Changing tsk->ptrace in
user_enable_single_step and user_disable_single_step without locking could
potentially cause problems: the plain read-modify-write on tsk->ptrace can
lose a concurrent update, while thread info flags are changed with atomic
bit operations.
So use a thread info flag instead of a flag in tsk->ptrace. Use
TIF_SINGLESTEP, which xtensa already had defined but unused.
Remove the definitions of PT_SINGLESTEP and PT_BLOCKSTEP, as they have no
more users.
Cc: stable(a)vger.kernel.org
Acked-by: Max Filippov <jcmvbkbc(a)gmail.com>
Signed-off-by: "Eric W. Biederman" <ebiederm(a)xmission.com>
---
arch/xtensa/kernel/ptrace.c | 4 ++--
arch/xtensa/kernel/signal.c | 4 ++--
include/linux/ptrace.h | 6 ------
3 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/arch/xtensa/kernel/ptrace.c b/arch/xtensa/kernel/ptrace.c
index 323c678a691f..b952e67cc0cc 100644
--- a/arch/xtensa/kernel/ptrace.c
+++ b/arch/xtensa/kernel/ptrace.c
@@ -225,12 +225,12 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
void user_enable_single_step(struct task_struct *child)
{
- child->ptrace |= PT_SINGLESTEP;
+ set_tsk_thread_flag(child, TIF_SINGLESTEP);
}
void user_disable_single_step(struct task_struct *child)
{
- child->ptrace &= ~PT_SINGLESTEP;
+ clear_tsk_thread_flag(child, TIF_SINGLESTEP);
}
/*
diff --git a/arch/xtensa/kernel/signal.c b/arch/xtensa/kernel/signal.c
index 6f68649e86ba..ac50ec46c8f1 100644
--- a/arch/xtensa/kernel/signal.c
+++ b/arch/xtensa/kernel/signal.c
@@ -473,7 +473,7 @@ static void do_signal(struct pt_regs *regs)
/* Set up the stack frame */
ret = setup_frame(&ksig, sigmask_to_save(), regs);
signal_setup_done(ret, &ksig, 0);
- if (current->ptrace & PT_SINGLESTEP)
+ if (test_thread_flag(TIF_SINGLESTEP))
task_pt_regs(current)->icountlevel = 1;
return;
@@ -499,7 +499,7 @@ static void do_signal(struct pt_regs *regs)
/* If there's no signal to deliver, we just restore the saved mask. */
restore_saved_sigmask();
- if (current->ptrace & PT_SINGLESTEP)
+ if (test_thread_flag(TIF_SINGLESTEP))
task_pt_regs(current)->icountlevel = 1;
return;
}
diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index 4c06f9f8ef3f..c952c5ba8fab 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -46,12 +46,6 @@ extern int ptrace_access_vm(struct task_struct *tsk, unsigned long addr,
#define PT_EXITKILL (PTRACE_O_EXITKILL << PT_OPT_FLAG_SHIFT)
#define PT_SUSPEND_SECCOMP (PTRACE_O_SUSPEND_SECCOMP << PT_OPT_FLAG_SHIFT)
-/* single stepping state bits (used on ARM and PA-RISC) */
-#define PT_SINGLESTEP_BIT 31
-#define PT_SINGLESTEP (1<<PT_SINGLESTEP_BIT)
-#define PT_BLOCKSTEP_BIT 30
-#define PT_BLOCKSTEP (1<<PT_BLOCKSTEP_BIT)
-
extern long arch_ptrace(struct task_struct *child, long request,
unsigned long addr, unsigned long data);
extern int ptrace_readdata(struct task_struct *tsk, unsigned long src, char __user *dst, int len);
--
2.35.3
commit cc8f7fe1f5eab010191aa4570f27641876fa1267 upstream
Add the __GFP_ZERO flag to the alloc_page() call in bio_copy_kern() to
initialize the buffer of a bio.
Signed-off-by: Haimin Zhang <tcs.kernel(a)gmail.com>
Reviewed-by: Chaitanya Kulkarni <kch(a)nvidia.com>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Link: https://lore.kernel.org/r/20220216084038.15635-1-tcs.kernel@gmail.com
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
[nobelbarakat: Backported to 5.4: Manually added __GFP_ZERO flag]
Signed-off-by: Nobel Barakat <nobelbarakat(a)google.com>
---
This change fixes a kernel info leak, since it is possible for
bio_copy_kern to copy uninitialized memory into userspace.
The change had to be manually backported since bio_copy_kern lives in a
different file (bio.c) than in the upstream commit (blk-map.c).
block/bio.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/bio.c b/block/bio.c
index b1170ec18464..363294afc394 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1570,7 +1570,7 @@ struct bio *bio_copy_kern(struct request_queue *q, void *data, unsigned int len,
if (bytes > len)
bytes = len;
- page = alloc_page(q->bounce_gfp | gfp_mask);
+ page = alloc_page(q->bounce_gfp | __GFP_ZERO | gfp_mask);
if (!page)
goto cleanup;
--
2.36.0.464.gb9c8b46e94-goog