On Fri, Jan 04, 2019 at 09:03:10PM +0000, Sasha Levin wrote:
> Hi,
>
> [This is an automated email]
>
> This commit has been processed because it contains a "Fixes:" tag,
> fixing commit: a7d85e06ed80 crypto: cfb - add support for Cipher FeedBack mode.
>
> The bot has tested the following trees: v4.20.0, v4.19.13.
>
> v4.20.0: Failed to apply! Possible dependencies:
> 7da66670775d ("crypto: testmgr - add AES-CFB tests")
>
> v4.19.13: Failed to apply! Possible dependencies:
> 7da66670775d ("crypto: testmgr - add AES-CFB tests")
> dfb89ab3f0a7 ("crypto: tcrypt - add OFB functional tests")
>
>
> How should we proceed with this patch?
>
> --
> Thanks,
> Sasha
The following patches will need to be applied to 4.19 and 4.20 first. Both were Cc'd to stable:
fa4600734b74 ("crypto: cfb - fix decryption")
7da66670775d ("crypto: testmgr - add AES-CFB tests")
Herbert, why was CFB accepted without any test vectors in the first place?
- Eric
The patch titled
Subject: mm/vmalloc: fix size check for remap_vmalloc_range_partial()
has been added to the -mm tree. Its filename is
mm-vmalloc-fix-size-check-for-remap_vmalloc_range_partial.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-vmalloc-fix-size-check-for-rema…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-vmalloc-fix-size-check-for-rema…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Roman Penyaev <rpenyaev(a)suse.de>
Subject: mm/vmalloc: fix size check for remap_vmalloc_range_partial()
When VM_NO_GUARD is not set, area->size includes the adjacent guard page, so
get_vm_area_size() should be used for correct size checking, not area->size.
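For reference, a sketch of the helper (paraphrased from include/linux/vmalloc.h
of this era; double-check the exact tree you apply this to):

static inline size_t get_vm_area_size(const struct vm_struct *area)
{
	if (!(area->flags & VM_NO_GUARD))
		/* return the actual size without the guard page */
		return area->size - PAGE_SIZE;
	return area->size;
}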
This fixes a possible kernel oops when userspace tries to mmap an area one
page larger than was allocated by the vmalloc_user() call: the size check
inside remap_vmalloc_range_partial() also accounts for the non-existent guard
page, so the check passes, but vmalloc_to_page() returns NULL (the guard page
does not physically exist).
The following code pattern example should trigger an oops:
static int oops_mmap(struct file *file, struct vm_area_struct *vma)
{
	void *mem;

	mem = vmalloc_user(4096);
	BUG_ON(!mem);
	/* Do not care about mem leak */
	return remap_vmalloc_range(vma, mem, 0);
}
And userspace simply mmaps size + PAGE_SIZE:
mmap(NULL, 8192, PROT_WRITE|PROT_READ, MAP_PRIVATE, fd, 0);
Possible oops candidates that do not have any explicit size checks:
*** drivers/media/usb/stkwebcam/stk-webcam.c:
v4l_stk_mmap[789] ret = remap_vmalloc_range(vma, sbuf->buffer, 0);
Or the following one:
*** drivers/video/fbdev/core/fbmem.c
static int
fb_mmap(struct file *file, struct vm_area_struct * vma)
...
res = fb->fb_mmap(info, vma);
where the fb_mmap callback calls remap_vmalloc_range() directly without any
explicit checks:
*** drivers/video/fbdev/vfb.c
static int vfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
{
	return remap_vmalloc_range(vma, (void *)info->fix.smem_start,
				   vma->vm_pgoff);
}
Link: http://lkml.kernel.org/r/20190103145954.16942-2-rpenyaev@suse.de
Signed-off-by: Roman Penyaev <rpenyaev(a)suse.de>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Cc: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Cc: Joe Perches <joe(a)perches.com>
Cc: "Luis R. Rodriguez" <mcgrof(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
--- a/mm/vmalloc.c~mm-vmalloc-fix-size-check-for-remap_vmalloc_range_partial
+++ a/mm/vmalloc.c
@@ -2248,7 +2248,7 @@ int remap_vmalloc_range_partial(struct v
if (!(area->flags & VM_USERMAP))
return -EINVAL;
- if (kaddr + size > area->addr + area->size)
+ if (kaddr + size > area->addr + get_vm_area_size(area))
return -EINVAL;
do {
_
Patches currently in -mm which might be from rpenyaev(a)suse.de are
epoll-make-sure-all-elements-in-ready-list-are-in-fifo-order.patch
epoll-loosen-irq-safety-in-ep_poll_callback.patch
epoll-unify-awaking-of-wakeup-source-on-ep_poll_callback-path.patch
epoll-use-rwlock-in-order-to-reduce-ep_poll_callback-contention.patch
mm-vmalloc-fix-size-check-for-remap_vmalloc_range_partial.patch
mm-vmalloc-do-not-call-kmemleak_free-on-not-yet-accounted-memory.patch
mm-vmalloc-pass-vm_usermap-flags-directly-to-__vmalloc_node_range.patch
The patch titled
Subject: mm: page_mapped: don't assume compound page is huge or THP
has been added to the -mm tree. Its filename is
mm-page_mapped-dont-assume-compound-page-is-huge-or-thp.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-page_mapped-dont-assume-compoun…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-page_mapped-dont-assume-compoun…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Jan Stancek <jstancek(a)redhat.com>
Subject: mm: page_mapped: don't assume compound page is huge or THP
The LTP proc01 test case has been observed to rarely trigger crashes
on arm64:
page_mapped+0x78/0xb4
stable_page_flags+0x27c/0x338
kpageflags_read+0xfc/0x164
proc_reg_read+0x7c/0xb8
__vfs_read+0x58/0x178
vfs_read+0x90/0x14c
SyS_read+0x60/0xc0
The issue is that page_mapped() assumes that if a compound page is not huge,
then it must be THP. But if this is a 'normal' compound page
(COMPOUND_PAGE_DTOR), then the following loop can keep running (for
HPAGE_PMD_NR iterations) until it tries to read from memory that isn't
mapped and triggers a panic:
for (i = 0; i < hpage_nr_pages(page); i++) {
	if (atomic_read(&page[i]._mapcount) >= 0)
		return true;
}
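For context, a sketch of why the bound is too large for 'normal' compound
pages (paraphrased from include/linux/huge_mm.h of this era; details may vary
by kernel version):

static inline int hpage_nr_pages(struct page *page)
{
	if (unlikely(PageTransHuge(page)))	/* effectively just PageHead() */
		return HPAGE_PMD_NR;		/* e.g. 512 with 4K pages / 2M PMD */
	return 1;
}

So any compound head page makes the loop scan HPAGE_PMD_NR struct pages,
regardless of the page's actual order.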
I could replicate this on x86 (v4.20-rc4-98-g60b548237fed) only
with a custom kernel module [1] which:
- allocates a compound page (PAGEC) of order 1
- allocates 2 normal pages (COPY), which are initialized to 0xff
(to satisfy _mapcount >= 0)
- copies the 2 PAGEC page structs to the address of the first COPY page
- marks the second page of COPY as not present
- calls page_mapped(COPY), which now triggers a fault on access to the 2nd
COPY page at offset 0x30 (_mapcount)
[1] https://github.com/jstancek/reproducers/blob/master/kernel/page_mapped_cras…
Fix the loop to iterate for "1 << compound_order" pages.
Kirill said "IIRC, the sound subsystem can produce custom mapped compound
pages".
Link: http://lkml.kernel.org/r/c440d69879e34209feba21e12d236d06bc0a25db.154357715…
Fixes: e1534ae95004 ("mm: differentiate page_mapped() from page_mapcount() for compound pages")
Signed-off-by: Jan Stancek <jstancek(a)redhat.com>
Debugged-by: Laszlo Ersek <lersek(a)redhat.com>
Suggested-by: "Kirill A. Shutemov" <kirill(a)shutemov.name>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Reviewed-by: David Hildenbrand <david(a)redhat.com>
Reviewed-by: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
--- a/mm/util.c~mm-page_mapped-dont-assume-compound-page-is-huge-or-thp
+++ a/mm/util.c
@@ -478,7 +478,7 @@ bool page_mapped(struct page *page)
return true;
if (PageHuge(page))
return false;
- for (i = 0; i < hpage_nr_pages(page); i++) {
+ for (i = 0; i < (1 << compound_order(page)); i++) {
if (atomic_read(&page[i]._mapcount) >= 0)
return true;
}
_
Patches currently in -mm which might be from jstancek(a)redhat.com are
mm-page_mapped-dont-assume-compound-page-is-huge-or-thp.patch
Commit 5eed6f1dff87 ("fork,memcg: fix crash in free_thread_stack on
memcg charge fail") fixes a crash caused by a failed memcg charge of
the kernel stack. However, that fix misses the cached_stacks case, so
the same crash can still happen if the memcg charge of a cached stack
fails. This patch fixes the cached_stacks case.
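A simplified sketch of the failing path (paraphrasing the kernel/fork.c flow
of this era; not an exact quote of the code):

	/* dup_task_struct(), simplified */
	stack = alloc_thread_stack_node(tsk, node);
		/* cached-stack branch returned s->addr but left tsk->stack unset */
	if (!stack)
		goto free_tsk;
	if (memcg_charge_kernel_stack(tsk))	/* charge fails */
		goto free_stack;		/* free_thread_stack(tsk) then
						 * operates on a stale tsk->stack */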
Fixes: 5eed6f1dff87 ("fork,memcg: fix crash in free_thread_stack on memcg charge fail")
Signed-off-by: Shakeel Butt <shakeelb(a)google.com>
Cc: Rik van Riel <riel(a)surriel.com>
Cc: Roman Gushchin <guro(a)fb.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: Tejun Heo <tj(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
---
kernel/fork.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/fork.c b/kernel/fork.c
index e4a51124661a..593cd1577dff 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -216,6 +216,7 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
memset(s->addr, 0, THREAD_SIZE);
tsk->stack_vm_area = s;
+ tsk->stack = s->addr;
return s->addr;
}
--
2.20.1.415.g653613c723-goog
From: Oliver Hartkopp <socketcan(a)hartkopp.net>
Muyu Yu provided a PoC where a root user with CAP_NET_ADMIN can create a CAN
frame modification rule that sets the data length code to a higher value than
the available CAN frame data size. In combination with a configured checksum
calculation whose result is stored relative to the end of the data
(e.g. cgw_csum_xor_rel), the tail of the skb (e.g. the frag_list pointer in
skb_shared_info) can be overwritten, which can finally cause a system crash.
Michal Kubecek suggested dropping frames that have a DLC exceeding the
available space after the modification process and provided a patch that can
handle CAN FD frames too. Within this patch we also limit the length for the
checksum calculations to the maximum Classic CAN data length (8).
CAN frames that are dropped by these additional checks are counted with the
CGW_DELETED counter, which indicates misconfiguration of can-gw rules.
This fixes CVE-2019-3701.
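A rough sketch of the overflow mechanism described above (illustrative only;
the exact offsets depend on the skb layout and the configured rule):

	/*
	 * nskb data: [ can_id | can_dlc | data[8] ] [ skb tail / skb_shared_info ]
	 *
	 * A modification rule can raise can_dlc beyond the 8 data bytes that
	 * actually exist in the skb.  A checksum rule that stores its result
	 * relative to the end of the data (e.g. cgw_csum_xor_rel) then writes
	 * at an index derived from can_dlc, i.e. past the frame, into the skb
	 * tail area where skb_shared_info (frag_list, ...) lives.
	 */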
Reported-by: Muyu Yu <ieatmuttonchuan(a)gmail.com>
Reported-by: Marcus Meissner <meissner(a)suse.de>
Suggested-by: Michal Kubecek <mkubecek(a)suse.cz>
Tested-by: Muyu Yu <ieatmuttonchuan(a)gmail.com>
Tested-by: Oliver Hartkopp <socketcan(a)hartkopp.net>
Signed-off-by: Oliver Hartkopp <socketcan(a)hartkopp.net>
Cc: linux-stable <stable(a)vger.kernel.org> # >= v3.2
Signed-off-by: Marc Kleine-Budde <mkl(a)pengutronix.de>
---
Hello,
I've removed the else from the dlc length check. That keeps the code and the
patch more readable.
Marc
net/can/gw.c | 29 ++++++++++++++++++++++++++---
1 file changed, 26 insertions(+), 3 deletions(-)
diff --git a/net/can/gw.c b/net/can/gw.c
index faa3da88a127..bb85d815f092 100644
--- a/net/can/gw.c
+++ b/net/can/gw.c
@@ -416,13 +416,28 @@ static void can_can_gw_rcv(struct sk_buff *skb, void *data)
while (modidx < MAX_MODFUNCTIONS && gwj->mod.modfunc[modidx])
(*gwj->mod.modfunc[modidx++])(cf, &gwj->mod);
- /* check for checksum updates when the CAN frame has been modified */
+ /* Has the CAN frame been modified? */
if (modidx) {
- if (gwj->mod.csumfunc.crc8)
+ /* get available space for the processed CAN frame type */
+ int max_len = nskb->len - offsetof(struct can_frame, data);
+
+ /* dlc may have changed, make sure it fits to the CAN frame */
+ if (cf->can_dlc > max_len)
+ goto out_delete;
+
+ /* check for checksum updates in classic CAN length only */
+ if (gwj->mod.csumfunc.crc8) {
+ if (cf->can_dlc > 8)
+ goto out_delete;
+
(*gwj->mod.csumfunc.crc8)(cf, &gwj->mod.csum.crc8);
+ }
- if (gwj->mod.csumfunc.xor)
+ if (gwj->mod.csumfunc.xor) {
+ if (cf->can_dlc > 8)
+ goto out_delete;
(*gwj->mod.csumfunc.xor)(cf, &gwj->mod.csum.xor);
+ }
}
/* clear the skb timestamp if not configured the other way */
@@ -434,6 +449,14 @@ static void can_can_gw_rcv(struct sk_buff *skb, void *data)
gwj->dropped_frames++;
else
gwj->handled_frames++;
+
+ return;
+
+ out_delete:
+ /* delete frame due to misconfiguration */
+ gwj->deleted_frames++;
+ kfree_skb(nskb);
+ return;
}
static inline int cgw_register_filter(struct net *net, struct cgw_job *gwj)
--
2.20.1
Muyu Yu provided a PoC where a root user with CAP_NET_ADMIN can create a CAN
frame modification rule that sets the data length code to a higher value than
the available CAN frame data size. In combination with a configured checksum
calculation whose result is stored relative to the end of the data
(e.g. cgw_csum_xor_rel), the tail of the skb (e.g. the frag_list pointer in
skb_shared_info) can be overwritten, which can finally cause a system crash.
Michal Kubecek suggested dropping frames that have a DLC exceeding the
available space after the modification process and provided a patch that can
handle CAN FD frames too. Within this patch we also limit the length for the
checksum calculations to the maximum Classic CAN data length (8).
CAN frames that are dropped by these additional checks are counted with the
CGW_DELETED counter, which indicates misconfiguration of can-gw rules.
This fixes CVE-2019-3701.
Reported-by: Muyu Yu <ieatmuttonchuan(a)gmail.com>
Reported-by: Marcus Meissner <meissner(a)suse.de>
Suggested-by: Michal Kubecek <mkubecek(a)suse.cz>
Tested-by: Muyu Yu <ieatmuttonchuan(a)gmail.com>
Tested-by: Oliver Hartkopp <socketcan(a)hartkopp.net>
Signed-off-by: Oliver Hartkopp <socketcan(a)hartkopp.net>
Cc: linux-stable <stable(a)vger.kernel.org> # >= v3.2
---
net/can/gw.c | 36 +++++++++++++++++++++++++++++++-----
1 file changed, 31 insertions(+), 5 deletions(-)
diff --git a/net/can/gw.c b/net/can/gw.c
index faa3da88a127..180c389af5b1 100644
--- a/net/can/gw.c
+++ b/net/can/gw.c
@@ -416,13 +416,31 @@ static void can_can_gw_rcv(struct sk_buff *skb, void *data)
while (modidx < MAX_MODFUNCTIONS && gwj->mod.modfunc[modidx])
(*gwj->mod.modfunc[modidx++])(cf, &gwj->mod);
- /* check for checksum updates when the CAN frame has been modified */
+ /* Has the CAN frame been modified? */
if (modidx) {
- if (gwj->mod.csumfunc.crc8)
- (*gwj->mod.csumfunc.crc8)(cf, &gwj->mod.csum.crc8);
+ /* get available space for the processed CAN frame type */
+ int max_len = nskb->len - offsetof(struct can_frame, data);
- if (gwj->mod.csumfunc.xor)
- (*gwj->mod.csumfunc.xor)(cf, &gwj->mod.csum.xor);
+ /* dlc may have changed, make sure it fits to the CAN frame */
+ if (cf->can_dlc > max_len)
+ goto out_delete;
+
+ /* check for checksum updates in classic CAN length only */
+ if (gwj->mod.csumfunc.crc8) {
+ if (cf->can_dlc > 8)
+ goto out_delete;
+ else
+ (*gwj->mod.csumfunc.crc8)
+ (cf, &gwj->mod.csum.crc8);
+ }
+
+ if (gwj->mod.csumfunc.xor) {
+ if (cf->can_dlc > 8)
+ goto out_delete;
+ else
+ (*gwj->mod.csumfunc.xor)
+ (cf, &gwj->mod.csum.xor);
+ }
}
/* clear the skb timestamp if not configured the other way */
@@ -434,6 +452,14 @@ static void can_can_gw_rcv(struct sk_buff *skb, void *data)
gwj->dropped_frames++;
else
gwj->handled_frames++;
+
+ return;
+
+out_delete:
+ /* delete frame due to misconfiguration */
+ gwj->deleted_frames++;
+ kfree_skb(nskb);
+ return;
}
static inline int cgw_register_filter(struct net *net, struct cgw_job *gwj)
--
2.19.2
The ARM Linux kernel handles the EABI syscall numbers as follows:
0 - NR_SYSCALLS-1 : Invoke syscall via syscall table
NR_SYSCALLS - 0xeffff : -ENOSYS (to be allocated in future)
0xf0000 - 0xf07ff : Private syscall or -ENOSYS if not allocated
> 0xf07ff : SIGILL
Our compat code gets this wrong and ends up sending SIGILL in response
to any syscall number greater than NR_SYSCALLS whose bottom 16 bits exceed
0x7ff (for example 0x800, which lies in the to-be-allocated range and should
return -ENOSYS instead).
Fix this by defining the end of the ARM private syscall region and
checking the syscall number against that directly. Update the comment
while we're at it.
Cc: <stable(a)vger.kernel.org>
Cc: Dave Martin <Dave.Martin(a)arm.com>
Reported-by: Pi-Hsun Shih <pihsun(a)chromium.org>
Signed-off-by: Will Deacon <will.deacon(a)arm.com>
---
arch/arm64/include/asm/unistd.h | 5 +++--
arch/arm64/kernel/sys_compat.c | 4 ++--
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h
index b13ca091f833..be66a54ee3a1 100644
--- a/arch/arm64/include/asm/unistd.h
+++ b/arch/arm64/include/asm/unistd.h
@@ -40,8 +40,9 @@
* The following SVCs are ARM private.
*/
#define __ARM_NR_COMPAT_BASE 0x0f0000
-#define __ARM_NR_compat_cacheflush (__ARM_NR_COMPAT_BASE+2)
-#define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE+5)
+#define __ARM_NR_compat_cacheflush (__ARM_NR_COMPAT_BASE + 2)
+#define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE + 5)
+#define __ARM_NR_compat_syscall_end (__ARM_NR_COMPAT_BASE + 0x800)
#define __NR_compat_syscalls 399
#endif
diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
index 32653d156747..5972b7533fa0 100644
--- a/arch/arm64/kernel/sys_compat.c
+++ b/arch/arm64/kernel/sys_compat.c
@@ -102,12 +102,12 @@ long compat_arm_syscall(struct pt_regs *regs)
default:
/*
- * Calls 9f00xx..9f07ff are defined to return -ENOSYS
+ * Calls 0xf0xxx..0xf07ff are defined to return -ENOSYS
* if not implemented, rather than raising SIGILL. This
* way the calling program can gracefully determine whether
* a feature is supported.
*/
- if ((no & 0xffff) <= 0x7ff)
+ if (no < __ARM_NR_compat_syscall_end)
return -ENOSYS;
break;
}
--
2.1.4
The syscall number may have been changed by a tracer, so we should pass
the actual number in from the caller instead of pulling it from the
saved r7 value directly.
Cc: <stable(a)vger.kernel.org>
Cc: Dave Martin <Dave.Martin(a)arm.com>
Cc: Pi-Hsun Shih <pihsun(a)chromium.org>
Signed-off-by: Will Deacon <will.deacon(a)arm.com>
---
arch/arm64/kernel/sys_compat.c | 9 ++++-----
arch/arm64/kernel/syscall.c | 9 ++++-----
2 files changed, 8 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
index 5972b7533fa0..54c29cd38ff9 100644
--- a/arch/arm64/kernel/sys_compat.c
+++ b/arch/arm64/kernel/sys_compat.c
@@ -66,12 +66,11 @@ do_compat_cache_op(unsigned long start, unsigned long end, int flags)
/*
* Handle all unrecognised system calls.
*/
-long compat_arm_syscall(struct pt_regs *regs)
+long compat_arm_syscall(struct pt_regs *regs, int scno)
{
- unsigned int no = regs->regs[7];
void __user *addr;
- switch (no) {
+ switch (scno) {
/*
* Flush a region from virtual address 'r0' to virtual address 'r1'
* _exclusive_. There is no alignment requirement on either address;
@@ -107,7 +106,7 @@ long compat_arm_syscall(struct pt_regs *regs)
* way the calling program can gracefully determine whether
* a feature is supported.
*/
- if (no < __ARM_NR_compat_syscall_end)
+ if (scno < __ARM_NR_compat_syscall_end)
return -ENOSYS;
break;
}
@@ -116,6 +115,6 @@ long compat_arm_syscall(struct pt_regs *regs)
(compat_thumb_mode(regs) ? 2 : 4);
arm64_notify_die("Oops - bad compat syscall(2)", regs,
- SIGILL, ILL_ILLTRP, addr, no);
+ SIGILL, ILL_ILLTRP, addr, scno);
return 0;
}
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 032d22312881..5610ac01c1ec 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -13,16 +13,15 @@
#include <asm/thread_info.h>
#include <asm/unistd.h>
-long compat_arm_syscall(struct pt_regs *regs);
-
+long compat_arm_syscall(struct pt_regs *regs, int scno);
long sys_ni_syscall(void);
-asmlinkage long do_ni_syscall(struct pt_regs *regs)
+static long do_ni_syscall(struct pt_regs *regs, int scno)
{
#ifdef CONFIG_COMPAT
long ret;
if (is_compat_task()) {
- ret = compat_arm_syscall(regs);
+ ret = compat_arm_syscall(regs, scno);
if (ret != -ENOSYS)
return ret;
}
@@ -47,7 +46,7 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
syscall_fn = syscall_table[array_index_nospec(scno, sc_nr)];
ret = __invoke_syscall(regs, syscall_fn);
} else {
- ret = do_ni_syscall(regs);
+ ret = do_ni_syscall(regs, scno);
}
regs->regs[0] = ret;
--
2.1.4
The CAN frame modification rules allow bitwise logical operations, which can
also be applied to the can_dlc field. Ensure that the manipulation result
stays within the can_dlc boundaries so that CAN drivers do not accidentally
write arbitrary content beyond the data registers in the CAN controller's
I/O memory when processing can-gw manipulated outgoing frames. When passing
these frames to user space this issue had no effect on the kernel and leaked
no data, as we always strictly copy sizeof(struct can_frame) bytes.
Reported-by: Muyu Yu <ieatmuttonchuan(a)gmail.com>
Reported-by: Marcus Meissner <meissner(a)suse.de>
Tested-by: Muyu Yu <ieatmuttonchuan(a)gmail.com>
Signed-off-by: Oliver Hartkopp <socketcan(a)hartkopp.net>
Cc: linux-stable <stable(a)vger.kernel.org> # >= v3.2
---
net/can/gw.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/net/can/gw.c b/net/can/gw.c
index faa3da88a127..9000d9b8a133 100644
--- a/net/can/gw.c
+++ b/net/can/gw.c
@@ -418,6 +418,10 @@ static void can_can_gw_rcv(struct sk_buff *skb, void *data)
/* check for checksum updates when the CAN frame has been modified */
if (modidx) {
+ /* ensure DLC boundaries after the different mods */
+ if (cf->can_dlc > 8)
+ cf->can_dlc = 8;
+
if (gwj->mod.csumfunc.crc8)
(*gwj->mod.csumfunc.crc8)(cf, &gwj->mod.csum.crc8);
--
2.19.2
From: Stanley Chu <stanley.chu(a)mediatek.com>
Commit 356fd2663cff ("scsi: Set request queue runtime PM status
back to active on resume") fixed up the inconsistent RPM status between
the request queue and the device. However, changing the request queue's
RPM status should be done only on a successful resume; otherwise the
status may still be inconsistent, as below:
Request queue: RPM_ACTIVE
Device: RPM_SUSPENDED
This ends up in a soft lockup, because requests can be submitted to the
underlying devices while those devices and their required resources
have not been resumed.
For example, after the above inconsistent status occurs, an IO request
can be submitted to the UFS device driver, but the required resources
(like clocks) have not been resumed yet, leading to the warning in the
call stack below:
WARN_ON(hba->clk_gating.state != CLKS_ON);
ufshcd_queuecommand
scsi_dispatch_cmd
scsi_request_fn
__blk_run_queue
cfq_insert_request
__elv_add_request
blk_flush_plug_list
blk_finish_plug
jbd2_journal_commit_transaction
kjournald2
We may then see all subsequent IO requests hang because there is no response
from the storage host or device, and a soft lockup occurs in the system. In
the end, the system may crash in many ways.
Fixes: 356fd2663cff ("scsi: Set request queue runtime PM status back to active on resume")
Cc: stable(a)vger.kernel.org
Signed-off-by: Stanley Chu <stanley.chu(a)mediatek.com>
---
drivers/scsi/scsi_pm.c | 26 +++++++++++++++-----------
1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c
index a2b4179bfdf7..7639df91b110 100644
--- a/drivers/scsi/scsi_pm.c
+++ b/drivers/scsi/scsi_pm.c
@@ -80,8 +80,22 @@ static int scsi_dev_type_resume(struct device *dev,
if (err == 0) {
pm_runtime_disable(dev);
- pm_runtime_set_active(dev);
+ err = pm_runtime_set_active(dev);
pm_runtime_enable(dev);
+
+ /*
+ * Forcibly set runtime PM status of request queue to "active"
+ * to make sure we can again get requests from the queue
+ * (see also blk_pm_peek_request()).
+ *
+ * The resume hook will correct runtime PM status of the disk.
+ */
+ if (!err && scsi_is_sdev_device(dev)) {
+ struct scsi_device *sdev = to_scsi_device(dev);
+
+ if (sdev->request_queue->dev)
+ blk_set_runtime_active(sdev->request_queue);
+ }
}
return err;
@@ -140,16 +154,6 @@ static int scsi_bus_resume_common(struct device *dev,
else
fn = NULL;
- /*
- * Forcibly set runtime PM status of request queue to "active" to
- * make sure we can again get requests from the queue (see also
- * blk_pm_peek_request()).
- *
- * The resume hook will correct runtime PM status of the disk.
- */
- if (scsi_is_sdev_device(dev) && pm_runtime_suspended(dev))
- blk_set_runtime_active(to_scsi_device(dev)->request_queue);
-
if (fn) {
async_schedule_domain(fn, dev, &scsi_sd_pm_domain);
--
2.18.0
From: Eric Biggers <ebiggers(a)google.com>
The memcpy() in crypto_cfb_decrypt_inplace() uses walk->iv as both the
source and destination, which has undefined behavior. It is unneeded
because walk->iv is already used to hold the previous ciphertext block;
thus, walk->iv is already updated to its final value. So, remove it.
Also, note that in-place decryption is the only case where the previous
ciphertext block is not directly available. Therefore, as a related
cleanup I also updated crypto_cfb_encrypt_segment() to directly use the
previous ciphertext block rather than save it into walk->iv. This makes
it consistent with in-place encryption and out-of-place decryption; now
only in-place decryption is different, because it has to be.
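For readers less familiar with the mode, a short recap of why only in-place
decryption needs the extra copy (standard CFB definition, not kernel-specific
code):

	/*
	 * CFB:  C[i] = E_K(C[i-1]) ^ P[i],  with C[-1] = IV
	 *       P[i] = E_K(C[i-1]) ^ C[i]
	 *
	 * The "IV" carried between skcipher walk steps is simply the last
	 * ciphertext block.  During encryption and out-of-place decryption
	 * that block is still available in the dst/src buffer, so it can be
	 * read (or pointed at) directly.  Only in-place decryption has
	 * already overwritten it with plaintext, which is why that path
	 * keeps a copy.
	 */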
Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable(a)vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley(a)HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
---
crypto/cfb.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/crypto/cfb.c b/crypto/cfb.c
index 183e8a9c33128..4abfe32ff8451 100644
--- a/crypto/cfb.c
+++ b/crypto/cfb.c
@@ -77,12 +77,14 @@ static int crypto_cfb_encrypt_segment(struct skcipher_walk *walk,
do {
crypto_cfb_encrypt_one(tfm, iv, dst);
crypto_xor(dst, src, bsize);
- memcpy(iv, dst, bsize);
+ iv = dst;
src += bsize;
dst += bsize;
} while ((nbytes -= bsize) >= bsize);
+ memcpy(walk->iv, iv, bsize);
+
return nbytes;
}
@@ -162,7 +164,7 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
const unsigned int bsize = crypto_cfb_bsize(tfm);
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
- u8 *iv = walk->iv;
+ u8 * const iv = walk->iv;
u8 tmp[MAX_CIPHER_BLOCKSIZE];
do {
@@ -172,8 +174,6 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
src += bsize;
} while ((nbytes -= bsize) >= bsize);
- memcpy(walk->iv, iv, bsize);
-
return nbytes;
}
--
2.20.1
From: Eric Biggers <ebiggers(a)google.com>
Like some other block cipher mode implementations, the CFB
implementation assumes that while walking through the scatterlist, a
partial block does not occur until the end. But the walk is incorrectly
being done with a blocksize of 1, as 'cra_blocksize' is set to 1 (since
CFB is a stream cipher) but no 'chunksize' is set. This bug causes
incorrect encryption/decryption for some scatterlist layouts.
Fix it by setting the 'chunksize'. Also extend the CFB test vectors to
cover this bug as well as cases where the message length is not a
multiple of the block size.
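As an illustration (assumed skcipher walk behaviour): with cra_blocksize == 1
and no chunksize set, the walk may hand the mode a partial block in the middle
of the message, e.g. a 64-byte input split across scatterlist entries of
31 + 33 bytes can be walked as a 31-byte step followed by a 33-byte step,
which is exactly the layout the new test vector with .tap = { 31, 33 }
exercises. Setting chunksize to the underlying cipher's block size (16 for
AES) tells the walk to produce a partial block only at the very end, matching
the CFB code's assumption.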
Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable(a)vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley(a)HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
---
crypto/cfb.c | 6 ++++++
crypto/testmgr.h | 25 +++++++++++++++++++++++++
2 files changed, 31 insertions(+)
diff --git a/crypto/cfb.c b/crypto/cfb.c
index e81e456734985..183e8a9c33128 100644
--- a/crypto/cfb.c
+++ b/crypto/cfb.c
@@ -298,6 +298,12 @@ static int crypto_cfb_create(struct crypto_template *tmpl, struct rtattr **tb)
inst->alg.base.cra_blocksize = 1;
inst->alg.base.cra_alignmask = alg->cra_alignmask;
+ /*
+ * To simplify the implementation, configure the skcipher walk to only
+ * give a partial block at the very end, never earlier.
+ */
+ inst->alg.chunksize = alg->cra_blocksize;
+
inst->alg.ivsize = alg->cra_blocksize;
inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index e8f47d7b92cdd..7f4dae7a57a1c 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -12870,6 +12870,31 @@ static const struct cipher_testvec aes_cfb_tv_template[] = {
"\x75\xa3\x85\x74\x1a\xb9\xce\xf8"
"\x20\x31\x62\x3d\x55\xb1\xe4\x71",
.len = 64,
+ .also_non_np = 1,
+ .np = 2,
+ .tap = { 31, 33 },
+ }, { /* > 16 bytes, not a multiple of 16 bytes */
+ .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+ "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+ .klen = 16,
+ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+ "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+ "\xae",
+ .ctext = "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
+ "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
+ "\xc8",
+ .len = 17,
+ }, { /* < 16 bytes */
+ .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+ "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+ .klen = 16,
+ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
+ .ctext = "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
+ .len = 7,
},
};
--
2.20.1
From: Nicholas Kazlauskas <nicholas.kazlauskas(a)amd.com>
[Why]
If the "max bpc" isn't explicitly set in the atomic state then it
have a value of 0. This has the correct behavior of limiting a panel
to 8bpc in the case where the panel supports 8bpc. In the case of eDP
panels this isn't a true assumption - there are panels that can only
do 6bpc.
Banding occurs for these displays.
[How]
Initialize max_bpc to 8 when the connector resets. Also carry over the value
when the state is duplicated.
Bugzilla: https://bugs.freedesktop.org/108825
Fixes: 307638884f72 ("drm/amd/display: Support amdgpu "max bpc" connector property")
Backport of commit 49f1c44b581b08e3289127ffe58bd208c3166701 upstream.
This fixes a regression caused by the automatic stable selection of these two patches:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/dri…
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/dri…
Bug: https://bugs.freedesktop.org/show_bug.cgi?id=109122
Cc: Sasha Levin <sashal(a)kernel.org>
Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas(a)amd.com>
Reviewed-by: Alex Deucher <alexander.deucher(a)amd.com>
Signed-off-by: Alex Deucher <alexander.deucher(a)amd.com>
(cherry picked from commit 49f1c44b581b08e3289127ffe58bd208c3166701)
Cc: <stable(a)vger.kernel.org> # 4.19.x
---
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 299def84e69c..d792735f1365 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -2894,6 +2894,7 @@ void amdgpu_dm_connector_funcs_reset(struct drm_connector *connector)
state->underscan_enable = false;
state->underscan_hborder = 0;
state->underscan_vborder = 0;
+ state->max_bpc = 8;
__drm_atomic_helper_connector_reset(connector, &state->base);
}
@@ -2911,6 +2912,7 @@ amdgpu_dm_connector_atomic_duplicate_state(struct drm_connector *connector)
if (new_state) {
__drm_atomic_helper_connector_duplicate_state(connector,
&new_state->base);
+ new_state->max_bpc = state->max_bpc;
return &new_state->base;
}
--
2.20.1
The patch titled
Subject: mm/vmalloc: fix size check for remap_vmalloc_range_partial()
has been added to the -mm tree. Its filename is
mm-vmalloc-fix-size-check-for-remap_vmalloc_range_partial.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-vmalloc-fix-size-check-for-rema…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-vmalloc-fix-size-check-for-rema…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Roman Penyaev <rpenyaev(a)suse.de>
Subject: mm/vmalloc: fix size check for remap_vmalloc_range_partial()
area->size can include the adjacent guard page, but get_vm_area_size()
returns the actual size of the area.
This fixes a possible kernel crash when userspace tries to map an area one
page bigger than allocated: the size check passes, but the following
vmalloc_to_page() returns NULL on the last (non-existent) guard page.
Link: http://lkml.kernel.org/r/20190103145954.16942-2-rpenyaev@suse.de
Signed-off-by: Roman Penyaev <rpenyaev(a)suse.de>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Cc: Joe Perches <joe(a)perches.com>
Cc: "Luis R. Rodriguez" <mcgrof(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/vmalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/vmalloc.c~mm-vmalloc-fix-size-check-for-remap_vmalloc_range_partial
+++ a/mm/vmalloc.c
@@ -2248,7 +2248,7 @@ int remap_vmalloc_range_partial(struct v
if (!(area->flags & VM_USERMAP))
return -EINVAL;
- if (kaddr + size > area->addr + area->size)
+ if (kaddr + size > area->addr + get_vm_area_size(area))
return -EINVAL;
do {
_
Patches currently in -mm which might be from rpenyaev(a)suse.de are
epoll-make-sure-all-elements-in-ready-list-are-in-fifo-order.patch
epoll-loosen-irq-safety-in-ep_poll_callback.patch
epoll-unify-awaking-of-wakeup-source-on-ep_poll_callback-path.patch
epoll-use-rwlock-in-order-to-reduce-ep_poll_callback-contention.patch
mm-vmalloc-fix-size-check-for-remap_vmalloc_range_partial.patch
mm-vmalloc-do-not-call-kmemleak_free-on-not-yet-accounted-memory.patch
mm-vmalloc-pass-vm_usermap-flags-directly-to-__vmalloc_node_range.patch
On 1/3/19 5:52 AM, Sasha Levin wrote:
> This commit has been processed because it contains a "Fixes:" tag,
> fixing commit: e8c24d3a23a4 x86/pkeys: Allocation/free syscalls.
>
> The bot has tested the following trees: v4.20.0, v4.19.13, v4.14.91, v4.9.148,
>
> v4.20.0: Build OK!
> v4.19.13: Build OK!
> v4.14.91: Build OK!
> v4.9.148: Failed to apply! Possible dependencies:
> c10e83f598d0 ("arch, mm: Allow arch_dup_mmap() to fail")
>
>
> How should we proceed with this patch?
The 4.9 version of arch_dup_mmap() does not contain the
ldt_dup_context() line. We just need to add arch_dup_pkeys(), like so:
 void arch_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm)
 {
+	arch_dup_pkeys(oldmm, mm);
 	paravirt_arch_dup_mmap(oldmm, mm);
 }
Should be a pretty simple merge. We can basically ignore the
ldt_dup_context() changes.
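For context, arch_dup_pkeys() from the upstream pkeys fork-copy fix looks
roughly like this (paraphrased from memory; the backport should take the
function verbatim from that patch):

static inline void arch_dup_pkeys(struct mm_struct *oldmm, struct mm_struct *mm)
{
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
	if (!cpu_feature_enabled(X86_FEATURE_OSPKE))
		return;

	/* Duplicate the oldmm pkey state in mm: */
	mm->context.pkey_allocation_map = oldmm->context.pkey_allocation_map;
	mm->context.execute_only_pkey   = oldmm->context.execute_only_pkey;
#endif
}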
The patch titled
Subject: mm/usercopy.c: no check page span for stack objects
has been added to the -mm tree. Its filename is
usercopy-no-check-page-span-for-stack-objects.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/usercopy-no-check-page-span-for-st…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/usercopy-no-check-page-span-for-st…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Qian Cai <cai(a)lca.pw>
Subject: mm/usercopy.c: no check page span for stack objects
It is easy to trigger this with CONFIG_HARDENED_USERCOPY_PAGESPAN=y:
usercopy: Kernel memory overwrite attempt detected to spans multiple
pages (offset 0, size 23)!
kernel BUG at mm/usercopy.c:102!
For example:
print_worker_info()
    char name[WQ_NAME_LEN] = { };
    char desc[WORKER_DESC_LEN] = { };
    probe_kernel_read(name, wq->name, sizeof(name) - 1);
    probe_kernel_read(desc, worker->desc, sizeof(desc) - 1);
      __copy_from_user_inatomic()
        check_object_size()
          check_heap_object()
            check_page_span()
This is because on-stack variables can cross a PAGE_SIZE boundary and fail
this check:
if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
           ((unsigned long)end & (unsigned long)PAGE_MASK)))
ptr = FFFF889007D7EFF8
end = FFFF889007D7F00E
Hence, fix it by checking whether it is a stack object first: objects that
check_stack_object() reports as good return early from __check_object_size()
and never reach the heap/page-span check.
Link: http://lkml.kernel.org/r/20181231030254.99441-1-cai@lca.pw
Signed-off-by: Qian Cai <cai(a)lca.pw>
Cc: Kees Cook <keescook(a)chromium.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/usercopy.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/mm/usercopy.c~usercopy-no-check-page-span-for-stack-objects
+++ a/mm/usercopy.c
@@ -262,9 +262,6 @@ void __check_object_size(const void *ptr
/* Check for invalid addresses. */
check_bogus_address((const unsigned long)ptr, n, to_user);
- /* Check for bad heap object. */
- check_heap_object(ptr, n, to_user);
-
/* Check for bad stack object. */
switch (check_stack_object(ptr, n)) {
case NOT_STACK:
@@ -282,6 +279,9 @@ void __check_object_size(const void *ptr
usercopy_abort("process stack", NULL, to_user, 0, n);
}
+ /* Check for bad heap object. */
+ check_heap_object(ptr, n, to_user);
+
/* Check for object in kernel to avoid text exposure. */
check_kernel_text_object((const unsigned long)ptr, n, to_user);
}
_
Patches currently in -mm which might be from cai(a)lca.pw are
mm-page_owner-fix-for-deferred-struct-page-init.patch
usercopy-no-check-page-span-for-stack-objects.patch