From: Zhao Heming heming.zhao@suse.com
[ Upstream commit 1383b347a8ae4a69c04ae3746e6cb5c8d38e2585 ]
Callers of get_bitmap_from_slot() are responsible for freeing the returned bitmap.
Suggested-by: Guoqing Jiang guoqing.jiang@cloud.ionos.com Signed-off-by: Zhao Heming heming.zhao@suse.com Signed-off-by: Song Liu songliubraving@fb.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/md/md-bitmap.c | 3 ++- drivers/md/md-cluster.c | 1 + 2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c index b10c51988c8ee..c61ab86a28b52 100644 --- a/drivers/md/md-bitmap.c +++ b/drivers/md/md-bitmap.c @@ -1949,6 +1949,7 @@ int md_bitmap_load(struct mddev *mddev) } EXPORT_SYMBOL_GPL(md_bitmap_load);
+/* caller need to free returned bitmap with md_bitmap_free() */ struct bitmap *get_bitmap_from_slot(struct mddev *mddev, int slot) { int rv = 0; @@ -2012,6 +2013,7 @@ int md_bitmap_copy_from_slot(struct mddev *mddev, int slot, md_bitmap_unplug(mddev->bitmap); *low = lo; *high = hi; + md_bitmap_free(bitmap);
return rv; } @@ -2615,4 +2617,3 @@ struct attribute_group md_bitmap_group = { .name = "bitmap", .attrs = md_bitmap_attrs, }; - diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c index d50737ec40394..afbbc552c3275 100644 --- a/drivers/md/md-cluster.c +++ b/drivers/md/md-cluster.c @@ -1166,6 +1166,7 @@ static int resize_bitmaps(struct mddev *mddev, sector_t newsize, sector_t oldsiz * can't resize bitmap */ goto out; + md_bitmap_free(bitmap); }
return 0;
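As an aside, a minimal sketch of the caller-side contract the new comment documents - every successful get_bitmap_from_slot() must be paired with md_bitmap_free(). The function below is hypothetical and purely illustrative; it assumes the md-bitmap.h declarations are in scope:

static int example_inspect_slot(struct mddev *mddev, int slot)
{
	struct bitmap *bitmap;

	bitmap = get_bitmap_from_slot(mddev, slot);
	if (IS_ERR(bitmap))
		return PTR_ERR(bitmap);

	/* ... read whatever is needed from the per-slot bitmap ... */

	/* the caller owns the temporary bitmap, so release it here */
	md_bitmap_free(bitmap);
	return 0;
}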
From: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp
[ Upstream commit f4ac712e4fe009635344b9af5d890fe25fcc8c0d ]
syzbot is reporting an unkillable task [1]: the caller fails to handle a corrupted filesystem image which attempts to access beyond the end of the device. While we need to fix the caller, flooding the console with handle_bad_sector() messages is unlikely to be useful.
[1] https://syzkaller.appspot.com/bug?id=f1f49fb971d7a3e01bd8ab8cff2ff4572ccf309...
Signed-off-by: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- block/blk-core.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c index 10c08ac506978..0014e7caae3d2 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -803,11 +803,10 @@ static void handle_bad_sector(struct bio *bio, sector_t maxsector) { char b[BDEVNAME_SIZE];
- printk(KERN_INFO "attempt to access beyond end of device\n"); - printk(KERN_INFO "%s: rw=%d, want=%Lu, limit=%Lu\n", - bio_devname(bio, b), bio->bi_opf, - (unsigned long long)bio_end_sector(bio), - (long long)maxsector); + pr_info_ratelimited("attempt to access beyond end of device\n" + "%s: rw=%d, want=%llu, limit=%llu\n", + bio_devname(bio, b), bio->bi_opf, + bio_end_sector(bio), maxsector); }
#ifdef CONFIG_FAIL_MAKE_REQUEST
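As a side note, the same ratelimiting helpers are the standard tool wherever untrusted input can trigger a log line repeatedly; a generic, illustrative sketch (the function and message are made up, not from this patch):

#include <linux/errno.h>
#include <linux/printk.h>
#include <linux/types.h>

static int example_parse_header(const u8 *buf, size_t len)
{
	if (len < 8) {
		/* one ratelimit state per call site; repeated hits are squashed */
		pr_warn_ratelimited("example: truncated header (%zu bytes)\n", len);
		return -EINVAL;
	}

	return 0;
}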
From: Mark Mossberg mark.mossberg@gmail.com
[ Upstream commit 238c91115cd05c71447ea071624a4c9fe661f970 ]
Printing "Bad RIP value" if copy_code() fails can be misleading for userspace pointers, since copy_code() can fail if the instruction pointer is valid but the code is paged out. This is because copy_code() calls copy_from_user_nmi() for userspace pointers, which disables page fault handling.
This is reproducible in OOM situations, where it's plausible that the code may be reclaimed in the time between entry into the kernel and when this message is printed. This leaves a misleading log in dmesg that suggests instruction pointer corruption has occurred, which may alarm users.
Change the message to state the error condition more precisely.
[ bp: Massage a bit. ]
Signed-off-by: Mark Mossberg mark.mossberg@gmail.com Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/20201002042915.403558-1-mark.mossberg@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/dumpstack.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c index 48ce44576947c..ea8d51ec251bb 100644 --- a/arch/x86/kernel/dumpstack.c +++ b/arch/x86/kernel/dumpstack.c @@ -115,7 +115,8 @@ void show_opcodes(struct pt_regs *regs, const char *loglvl) unsigned long prologue = regs->ip - PROLOGUE_SIZE;
if (copy_code(regs, opcodes, prologue, sizeof(opcodes))) { - printk("%sCode: Bad RIP value.\n", loglvl); + printk("%sCode: Unable to access opcode bytes at RIP 0x%lx.\n", + loglvl, prologue); } else { printk("%sCode: %" __stringify(PROLOGUE_SIZE) "ph <%02x> %" __stringify(EPILOGUE_SIZE) "ph\n", loglvl, opcodes,
From: Pavel Machek pavel@denx.de
[ Upstream commit e356c49c6cf0db3f00e1558749170bd56e47652d ]
Fix resource leak in error handling.
Signed-off-by: Pavel Machek (CIP) pavel@denx.de Acked-by: John Allen john.allen@amd.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/crypto/ccp/ccp-ops.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c index bd270e66185e9..40869ea1ed20f 100644 --- a/drivers/crypto/ccp/ccp-ops.c +++ b/drivers/crypto/ccp/ccp-ops.c @@ -1744,7 +1744,7 @@ ccp_run_sha_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd) break; default: ret = -EINVAL; - goto e_ctx; + goto e_data; } } else { /* Stash the context */
From: Arvind Sankar nivedita@alum.mit.edu
[ Upstream commit aa5cacdc29d76a005cbbee018a47faa6e724dd2d ]
The CRn accessor functions use __force_order as a dummy operand to prevent the compiler from reordering CRn reads/writes with respect to each other.
The fact that the asm is volatile should be enough to prevent this: volatile asm statements should be executed in program order. However GCC 4.9.x and 5.x have a bug that might result in reordering. This was fixed in 8.1, 7.3 and 6.5. Versions prior to these, including 5.x and 4.9.x, may reorder volatile asm statements with respect to each other.
There are some issues with __force_order as implemented:

- It is used only as an input operand for the write functions, and hence doesn't do anything additional to prevent reordering writes.

- It allows memory accesses to be cached/reordered across write functions, but CRn writes affect the semantics of memory accesses, so this could be dangerous.

- __force_order is not actually defined in the kernel proper, but the LLVM toolchain can in some cases require a definition: LLVM (as well as GCC 4.9) requires it for PIE code, which is why the compressed kernel has a definition, but also the clang integrated assembler may consider the address of __force_order to be significant, resulting in a reference that requires a definition.
Fix this by:

- Using a memory clobber for the write functions to additionally prevent caching/reordering memory accesses across CRn writes.

- Using a dummy input operand with an arbitrary constant address for the read functions, instead of a global variable. This will prevent reads from being reordered across writes, while allowing memory loads to be cached/reordered across CRn reads, which should be safe.
Signed-off-by: Arvind Sankar nivedita@alum.mit.edu Signed-off-by: Borislav Petkov bp@suse.de Reviewed-by: Kees Cook keescook@chromium.org Reviewed-by: Miguel Ojeda miguel.ojeda.sandonis@gmail.com Tested-by: Nathan Chancellor natechancellor@gmail.com Tested-by: Sedat Dilek sedat.dilek@gmail.com Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82602 Link: https://lore.kernel.org/lkml/20200527135329.1172644-1-arnd@arndb.de/ Link: https://lkml.kernel.org/r/20200902232152.3709896-1-nivedita@alum.mit.edu Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/boot/compressed/pgtable_64.c | 9 --------- arch/x86/include/asm/special_insns.h | 28 ++++++++++++++------------- arch/x86/kernel/cpu/common.c | 4 ++-- 3 files changed, 17 insertions(+), 24 deletions(-)
diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c index c8862696a47b9..7d0394f4ebf97 100644 --- a/arch/x86/boot/compressed/pgtable_64.c +++ b/arch/x86/boot/compressed/pgtable_64.c @@ -5,15 +5,6 @@ #include "pgtable.h" #include "../string.h"
-/* - * __force_order is used by special_insns.h asm code to force instruction - * serialization. - * - * It is not referenced from the code, but GCC < 5 with -fPIE would fail - * due to an undefined symbol. Define it to make these ancient GCCs work. - */ -unsigned long __force_order; - #define BIOS_START_MIN 0x20000U /* 128K, less than this is insane */ #define BIOS_START_MAX 0x9f000U /* 640K, absolute maximum */
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h index 59a3e13204c34..d6e3bb9363d22 100644 --- a/arch/x86/include/asm/special_insns.h +++ b/arch/x86/include/asm/special_insns.h @@ -11,45 +11,47 @@ #include <linux/jump_label.h>
/* - * Volatile isn't enough to prevent the compiler from reordering the - * read/write functions for the control registers and messing everything up. - * A memory clobber would solve the problem, but would prevent reordering of - * all loads stores around it, which can hurt performance. Solution is to - * use a variable and mimic reads and writes to it to enforce serialization + * The compiler should not reorder volatile asm statements with respect to each + * other: they should execute in program order. However GCC 4.9.x and 5.x have + * a bug (which was fixed in 8.1, 7.3 and 6.5) where they might reorder + * volatile asm. The write functions are not affected since they have memory + * clobbers preventing reordering. To prevent reads from being reordered with + * respect to writes, use a dummy memory operand. */ -extern unsigned long __force_order; + +#define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)
void native_write_cr0(unsigned long val);
static inline unsigned long native_read_cr0(void) { unsigned long val; - asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order)); + asm volatile("mov %%cr0,%0\n\t" : "=r" (val) : __FORCE_ORDER); return val; }
static __always_inline unsigned long native_read_cr2(void) { unsigned long val; - asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order)); + asm volatile("mov %%cr2,%0\n\t" : "=r" (val) : __FORCE_ORDER); return val; }
static __always_inline void native_write_cr2(unsigned long val) { - asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order)); + asm volatile("mov %0,%%cr2": : "r" (val) : "memory"); }
static inline unsigned long __native_read_cr3(void) { unsigned long val; - asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order)); + asm volatile("mov %%cr3,%0\n\t" : "=r" (val) : __FORCE_ORDER); return val; }
static inline void native_write_cr3(unsigned long val) { - asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order)); + asm volatile("mov %0,%%cr3": : "r" (val) : "memory"); }
static inline unsigned long native_read_cr4(void) @@ -64,10 +66,10 @@ static inline unsigned long native_read_cr4(void) asm volatile("1: mov %%cr4, %0\n" "2:\n" _ASM_EXTABLE(1b, 2b) - : "=r" (val), "=m" (__force_order) : "0" (0)); + : "=r" (val) : "0" (0), __FORCE_ORDER); #else /* CR4 always exists on x86_64. */ - asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order)); + asm volatile("mov %%cr4,%0\n\t" : "=r" (val) : __FORCE_ORDER); #endif return val; } diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index c5d6f17d9b9d3..178499f903661 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -359,7 +359,7 @@ void native_write_cr0(unsigned long val) unsigned long bits_missing = 0;
set_register: - asm volatile("mov %0,%%cr0": "+r" (val), "+m" (__force_order)); + asm volatile("mov %0,%%cr0": "+r" (val) : : "memory");
if (static_branch_likely(&cr_pinning)) { if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) { @@ -378,7 +378,7 @@ void native_write_cr4(unsigned long val) unsigned long bits_changed = 0;
set_register: - asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits)); + asm volatile("mov %0,%%cr4": "+r" (val) : : "memory");
if (static_branch_likely(&cr_pinning)) { if (unlikely((val & cr4_pinned_mask) != cr4_pinned_bits)) {
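To make the constraint change concrete, here is a compile-only, illustrative sketch (not kernel code): the asm templates are empty, so the dummy operand at 0x1000 is never actually dereferenced, exactly as with __FORCE_ORDER in the patch:

#define DUMMY_ORDER "m"(*(unsigned int *)0x1000UL)

static inline void sketch_write(unsigned long val)
{
	/* "memory" clobber: no load/store may be cached or moved across it */
	asm volatile("" : : "r" (val) : "memory");
}

static inline unsigned long sketch_read(void)
{
	unsigned long val = 0;

	/*
	 * Dummy memory input: the compiler must assume this asm may read
	 * memory, so it cannot be reordered before an earlier asm with a
	 * "memory" clobber, while unrelated loads may still be cached
	 * across it.
	 */
	asm volatile("" : "+r" (val) : DUMMY_ORDER);
	return val;
}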
From: Borislav Petkov bp@suse.de
[ Upstream commit fd258dc4442c5c1c069c6b5b42bfe7d10cddda95 ]
The patrol scrubber in Skylake and Cascade Lake systems can be configured to report uncorrected errors using a special signature in the machine check bank and to signal using CMCI instead of machine check.
Update the severity calculation mechanism to allow specifying the model, minimum stepping and range of machine check bank numbers.
Add a new rule to detect the special signature (on model 0x55, stepping >=4 in any of the memory controller banks).
[ bp: Rewrite it. aegl: Productize it. ]
Suggested-by: Youquan Song youquan.song@intel.com Signed-off-by: Borislav Petkov bp@suse.de Co-developed-by: Tony Luck tony.luck@intel.com Signed-off-by: Tony Luck tony.luck@intel.com Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/20200930021313.31810-2-tony.luck@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/cpu/mce/severity.c | 28 ++++++++++++++++++++++++++-- 1 file changed, 26 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c index e1da619add192..567ce09a02868 100644 --- a/arch/x86/kernel/cpu/mce/severity.c +++ b/arch/x86/kernel/cpu/mce/severity.c @@ -9,9 +9,11 @@ #include <linux/seq_file.h> #include <linux/init.h> #include <linux/debugfs.h> -#include <asm/mce.h> #include <linux/uaccess.h>
+#include <asm/mce.h> +#include <asm/intel-family.h> + #include "internal.h"
/* @@ -40,9 +42,14 @@ static struct severity { unsigned char context; unsigned char excp; unsigned char covered; + unsigned char cpu_model; + unsigned char cpu_minstepping; + unsigned char bank_lo, bank_hi; char *msg; } severities[] = { #define MCESEV(s, m, c...) { .sev = MCE_ ## s ## _SEVERITY, .msg = m, ## c } +#define BANK_RANGE(l, h) .bank_lo = l, .bank_hi = h +#define MODEL_STEPPING(m, s) .cpu_model = m, .cpu_minstepping = s #define KERNEL .context = IN_KERNEL #define USER .context = IN_USER #define KERNEL_RECOV .context = IN_KERNEL_RECOV @@ -97,7 +104,6 @@ static struct severity { KEEP, "Corrected error", NOSER, BITCLR(MCI_STATUS_UC) ), - /* * known AO MCACODs reported via MCE or CMC: * @@ -113,6 +119,18 @@ static struct severity { AO, "Action optional: last level cache writeback error", SER, MASK(MCI_UC_AR|MCACOD, MCI_STATUS_UC|MCACOD_L3WB) ), + /* + * Quirk for Skylake/Cascade Lake. Patrol scrubber may be configured + * to report uncorrected errors using CMCI with a special signature. + * UC=0, MSCOD=0x0010, MCACOD=binary(000X 0000 1100 XXXX) reported + * in one of the memory controller banks. + * Set severity to "AO" for same action as normal patrol scrub error. + */ + MCESEV( + AO, "Uncorrected Patrol Scrub Error", + SER, MASK(MCI_STATUS_UC|MCI_ADDR|0xffffeff0, MCI_ADDR|0x001000c0), + MODEL_STEPPING(INTEL_FAM6_SKYLAKE_X, 4), BANK_RANGE(13, 18) + ),
/* ignore OVER for UCNA */ MCESEV( @@ -324,6 +342,12 @@ static int mce_severity_intel(struct mce *m, int tolerant, char **msg, bool is_e continue; if (s->excp && excp != s->excp) continue; + if (s->cpu_model && boot_cpu_data.x86_model != s->cpu_model) + continue; + if (s->cpu_minstepping && boot_cpu_data.x86_stepping < s->cpu_minstepping) + continue; + if (s->bank_lo && (m->bank < s->bank_lo || m->bank > s->bank_hi)) + continue; if (msg) *msg = s->msg; s->covered = 1;
From: Pavel Machek pavel@ucw.cz
[ Upstream commit b28e32798c78a346788d412f1958f36bb760ec03 ]
Fix memory leak in node_probe.
Signed-off-by: Pavel Machek (CIP) pavel@denx.de Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/firewire/firedtv-fw.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/media/firewire/firedtv-fw.c b/drivers/media/firewire/firedtv-fw.c index 3f1ca40b9b987..8a8585261bb80 100644 --- a/drivers/media/firewire/firedtv-fw.c +++ b/drivers/media/firewire/firedtv-fw.c @@ -272,8 +272,10 @@ static int node_probe(struct fw_unit *unit, const struct ieee1394_device_id *id)
name_len = fw_csr_string(unit->directory, CSR_MODEL, name, sizeof(name)); - if (name_len < 0) - return name_len; + if (name_len < 0) { + err = name_len; + goto fail_free; + } for (i = ARRAY_SIZE(model_names); --i; ) if (strlen(model_names[i]) <= name_len && strncmp(name, model_names[i], name_len) == 0)
From: Oliver Neukum oneukum@suse.com
[ Upstream commit a8be80053ea74bd9c3f9a3810e93b802236d6498 ]
If you do sanity checks, you should do them for both endpoints. Hence introduce a check of the endpoint type for the output endpoint, too.
Reported-by: syzbot+998261c2ae5932458f6c@syzkaller.appspotmail.com Signed-off-by: Oliver Neukum oneukum@suse.com Signed-off-by: Sean Young sean@mess.org Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/rc/ati_remote.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/media/rc/ati_remote.c b/drivers/media/rc/ati_remote.c index 9cdef17b4793f..c12dda73cdd53 100644 --- a/drivers/media/rc/ati_remote.c +++ b/drivers/media/rc/ati_remote.c @@ -835,6 +835,10 @@ static int ati_remote_probe(struct usb_interface *interface, err("%s: endpoint_in message size==0? \n", __func__); return -ENODEV; } + if (!usb_endpoint_is_int_out(endpoint_out)) { + err("%s: Unexpected endpoint_out\n", __func__); + return -ENODEV; + }
ati_remote = kzalloc(sizeof (struct ati_remote), GFP_KERNEL); rc_dev = rc_allocate_device(RC_DRIVER_SCANCODE);
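A minimal sketch of the symmetric sanity check the changelog argues for (hypothetical helper, not the driver's code):

#include <linux/errno.h>
#include <linux/usb.h>

static int example_check_endpoints(const struct usb_endpoint_descriptor *ep_in,
				   const struct usb_endpoint_descriptor *ep_out)
{
	/* both endpoints must be interrupt type before they are used */
	if (!usb_endpoint_is_int_in(ep_in) || !usb_endpoint_is_int_out(ep_out))
		return -ENODEV;

	return 0;
}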
From: Aditya Pakki pakki001@umn.edu
[ Upstream commit 57cc666d36adc7b45e37ba4cd7bc4e44ec4c43d7 ]
delta_run_work() calls delta_get_sync(), which increments the reference counter. In case of failure, decrement the reference count by calling delta_put_autosuspend().
Signed-off-by: Aditya Pakki pakki001@umn.edu Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/sti/delta/delta-v4l2.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/media/platform/sti/delta/delta-v4l2.c b/drivers/media/platform/sti/delta/delta-v4l2.c index 2503224eeee51..c691b3d81549d 100644 --- a/drivers/media/platform/sti/delta/delta-v4l2.c +++ b/drivers/media/platform/sti/delta/delta-v4l2.c @@ -954,8 +954,10 @@ static void delta_run_work(struct work_struct *work) /* enable the hardware */ if (!dec->pm) { ret = delta_get_sync(ctx); - if (ret) + if (ret) { + delta_put_autosuspend(ctx); goto err; + } }
/* decode this access unit */
From: Qiushi Wu wu000273@umn.edu
[ Upstream commit 6f4432bae9f2d12fc1815b5e26cc07e69bcad0df ]
pm_runtime_get_sync() increments the runtime PM usage counter even when it returns an error code, causing incorrect ref count if pm_runtime_put_noidle() is not called in error handling paths. Thus call pm_runtime_put_noidle() if pm_runtime_get_sync() fails.
Signed-off-by: Qiushi Wu wu000273@umn.edu Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/sti/hva/hva-hw.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/media/platform/sti/hva/hva-hw.c b/drivers/media/platform/sti/hva/hva-hw.c index 401aaafa17109..bb13348be0832 100644 --- a/drivers/media/platform/sti/hva/hva-hw.c +++ b/drivers/media/platform/sti/hva/hva-hw.c @@ -272,6 +272,7 @@ static unsigned long int hva_hw_get_ip_version(struct hva_dev *hva)
if (pm_runtime_get_sync(dev) < 0) { dev_err(dev, "%s failed to get pm_runtime\n", HVA_PREFIX); + pm_runtime_put_noidle(dev); mutex_unlock(&hva->protect_mutex); return -EFAULT; } @@ -553,6 +554,7 @@ void hva_hw_dump_regs(struct hva_dev *hva, struct seq_file *s)
if (pm_runtime_get_sync(dev) < 0) { seq_puts(s, "Cannot wake up IP\n"); + pm_runtime_put_noidle(dev); mutex_unlock(&hva->protect_mutex); return; }
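The same shape recurs in several of the runtime-PM fixes in this series; a minimal, illustrative sketch of the balanced form (not taken from any of these drivers):

#include <linux/pm_runtime.h>

static int example_resume_and_touch_hw(struct device *dev)
{
	int ret;

	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		/* the usage counter was bumped even on failure: drop it */
		pm_runtime_put_noidle(dev);
		return ret;
	}

	/* ... access the hardware ... */

	pm_runtime_put(dev);
	return 0;
}

Newer kernels also provide pm_runtime_resume_and_get(), which wraps this get-or-put_noidle dance in one helper.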
From: Qiushi Wu wu000273@umn.edu
[ Upstream commit 7ef64ceea0008c17e94a8a2c60c5d6d46f481996 ]
On calling pm_runtime_get_sync() the reference count of the device is incremented. In case of failure, decrement the reference count before returning the error.
Signed-off-by: Qiushi Wu wu000273@umn.edu Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/exynos4-is/fimc-isp.c | 4 +++- drivers/media/platform/exynos4-is/fimc-lite.c | 2 +- 2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/media/platform/exynos4-is/fimc-isp.c b/drivers/media/platform/exynos4-is/fimc-isp.c index cde0d254ec1c4..a77c49b185115 100644 --- a/drivers/media/platform/exynos4-is/fimc-isp.c +++ b/drivers/media/platform/exynos4-is/fimc-isp.c @@ -305,8 +305,10 @@ static int fimc_isp_subdev_s_power(struct v4l2_subdev *sd, int on)
if (on) { ret = pm_runtime_get_sync(&is->pdev->dev); - if (ret < 0) + if (ret < 0) { + pm_runtime_put(&is->pdev->dev); return ret; + } set_bit(IS_ST_PWR_ON, &is->state);
ret = fimc_is_start_firmware(is); diff --git a/drivers/media/platform/exynos4-is/fimc-lite.c b/drivers/media/platform/exynos4-is/fimc-lite.c index 9c666f663ab43..fdd0d369b1925 100644 --- a/drivers/media/platform/exynos4-is/fimc-lite.c +++ b/drivers/media/platform/exynos4-is/fimc-lite.c @@ -471,7 +471,7 @@ static int fimc_lite_open(struct file *file) set_bit(ST_FLITE_IN_USE, &fimc->state); ret = pm_runtime_get_sync(&fimc->pdev->dev); if (ret < 0) - goto unlock; + goto err_pm;
ret = v4l2_fh_open(file); if (ret < 0)
From: Qiushi Wu wu000273@umn.edu
[ Upstream commit c47f7c779ef0458a58583f00c9ed71b7f5a4d0a2 ]
On calling pm_runtime_get_sync() the reference count of the device is incremented. In case of failure, decrement the reference count before returning the error.
Signed-off-by: Qiushi Wu wu000273@umn.edu Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/exynos4-is/media-dev.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/media/platform/exynos4-is/media-dev.c b/drivers/media/platform/exynos4-is/media-dev.c index 16dd660137a8d..0bc8c23112db4 100644 --- a/drivers/media/platform/exynos4-is/media-dev.c +++ b/drivers/media/platform/exynos4-is/media-dev.c @@ -484,8 +484,10 @@ static int fimc_md_register_sensor_entities(struct fimc_md *fmd) return -ENXIO;
ret = pm_runtime_get_sync(fmd->pmf); - if (ret < 0) + if (ret < 0) { + pm_runtime_put(fmd->pmf); return ret; + }
fmd->num_sensors = 0;
From: Qiushi Wu wu000273@umn.edu
[ Upstream commit 64157b2cb1940449e7df2670e85781c690266588 ]
pm_runtime_get_sync() increments the runtime PM usage counter even when it returns an error code, causing incorrect ref count if pm_runtime_put_noidle() is not called in error handling paths. Thus call pm_runtime_put_noidle() if pm_runtime_get_sync() fails.
Signed-off-by: Qiushi Wu wu000273@umn.edu Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/exynos4-is/mipi-csis.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/media/platform/exynos4-is/mipi-csis.c b/drivers/media/platform/exynos4-is/mipi-csis.c index 540151bbf58f2..1aac167abb175 100644 --- a/drivers/media/platform/exynos4-is/mipi-csis.c +++ b/drivers/media/platform/exynos4-is/mipi-csis.c @@ -510,8 +510,10 @@ static int s5pcsis_s_stream(struct v4l2_subdev *sd, int enable) if (enable) { s5pcsis_clear_counters(state); ret = pm_runtime_get_sync(&state->pdev->dev); - if (ret && ret != 1) + if (ret && ret != 1) { + pm_runtime_put_noidle(&state->pdev->dev); return ret; + } }
mutex_lock(&state->lock);
From: Dinghao Liu dinghao.liu@zju.edu.cn
[ Upstream commit 98fae901c8883640202802174a4bd70a1b9118bd ]
pm_runtime_get_sync() increments the runtime PM usage counter even when it returns an error code. Thus a pairing decrement is needed on the error handling path to keep the counter balanced.
Signed-off-by: Dinghao Liu dinghao.liu@zju.edu.cn Reviewed-by: Kieran Bingham kieran.bingham+renesas@ideasonboard.com Reviewed-by: Laurent Pinchart laurent.pinchart@ideasonboard.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/vsp1/vsp1_drv.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/media/platform/vsp1/vsp1_drv.c b/drivers/media/platform/vsp1/vsp1_drv.c index c650e45bb0ad1..dc62533cf32ce 100644 --- a/drivers/media/platform/vsp1/vsp1_drv.c +++ b/drivers/media/platform/vsp1/vsp1_drv.c @@ -562,7 +562,12 @@ int vsp1_device_get(struct vsp1_device *vsp1) int ret;
ret = pm_runtime_get_sync(vsp1->dev); - return ret < 0 ? ret : 0; + if (ret < 0) { + pm_runtime_put_noidle(vsp1->dev); + return ret; + } + + return 0; }
/* @@ -845,12 +850,12 @@ static int vsp1_probe(struct platform_device *pdev) /* Configure device parameters based on the version register. */ pm_runtime_enable(&pdev->dev);
- ret = pm_runtime_get_sync(&pdev->dev); + ret = vsp1_device_get(vsp1); if (ret < 0) goto done;
vsp1->version = vsp1_read(vsp1, VI6_IP_VERSION); - pm_runtime_put_sync(&pdev->dev); + vsp1_device_put(vsp1);
for (i = 0; i < ARRAY_SIZE(vsp1_device_infos); ++i) { if ((vsp1->version & VI6_IP_VERSION_MODEL_MASK) ==
From: Dinghao Liu dinghao.liu@zju.edu.cn
[ Upstream commit dafa3605fe60d5a61239d670919b2a36e712481e ]
pm_runtime_get_sync() increments the runtime PM usage counter even when it returns an error code. Thus a pairing decrement is needed on the error handling path to keep the counter balanced.
Also, call pm_runtime_disable() when pm_runtime_get_sync() returns an error code.
Signed-off-by: Dinghao Liu dinghao.liu@zju.edu.cn Reviewed-by: Sylwester Nawrocki snawrocki@kernel.org Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/s3c-camif/camif-core.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/media/platform/s3c-camif/camif-core.c b/drivers/media/platform/s3c-camif/camif-core.c index 92f43c0cbc0c0..422fd549e9c87 100644 --- a/drivers/media/platform/s3c-camif/camif-core.c +++ b/drivers/media/platform/s3c-camif/camif-core.c @@ -464,7 +464,7 @@ static int s3c_camif_probe(struct platform_device *pdev)
ret = camif_media_dev_init(camif); if (ret < 0) - goto err_alloc; + goto err_pm;
ret = camif_register_sensor(camif); if (ret < 0) @@ -498,10 +498,9 @@ static int s3c_camif_probe(struct platform_device *pdev) media_device_unregister(&camif->media_dev); media_device_cleanup(&camif->media_dev); camif_unregister_media_entities(camif); -err_alloc: +err_pm: pm_runtime_put(dev); pm_runtime_disable(dev); -err_pm: camif_clk_put(camif); err_clk: s3c_camif_unregister_subdev(camif);
From: Dinghao Liu dinghao.liu@zju.edu.cn
[ Upstream commit d912a1d9e9afe69c6066c1ceb6bfc09063074075 ]
pm_runtime_get_sync() increments the runtime PM usage counter even when it returns an error code. Thus a pairing decrement is needed on the error handling path to keep the counter balanced.
Signed-off-by: Dinghao Liu dinghao.liu@zju.edu.cn Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/sti/hva/hva-hw.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/media/platform/sti/hva/hva-hw.c b/drivers/media/platform/sti/hva/hva-hw.c index bb13348be0832..43f279e2a6a38 100644 --- a/drivers/media/platform/sti/hva/hva-hw.c +++ b/drivers/media/platform/sti/hva/hva-hw.c @@ -389,7 +389,7 @@ int hva_hw_probe(struct platform_device *pdev, struct hva_dev *hva) ret = pm_runtime_get_sync(dev); if (ret < 0) { dev_err(dev, "%s failed to set PM\n", HVA_PREFIX); - goto err_clk; + goto err_pm; }
/* check IP hardware version */
From: Dinghao Liu dinghao.liu@zju.edu.cn
[ Upstream commit dbd2f2dc025f9be8ae063e4f270099677238f620 ]
pm_runtime_get_sync() increments the runtime PM usage counter even when it returns an error code. Thus a pairing decrement is needed on the error handling path to keep the counter balanced.
Signed-off-by: Dinghao Liu dinghao.liu@zju.edu.cn Reviewed-by: Fabien Dessenne fabien.dessenne@st.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/sti/bdisp/bdisp-v4l2.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c index af2d5eb782cee..e1d150584bdc2 100644 --- a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c +++ b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c @@ -1371,7 +1371,7 @@ static int bdisp_probe(struct platform_device *pdev) ret = pm_runtime_get_sync(dev); if (ret < 0) { dev_err(dev, "failed to set PM\n"); - goto err_dbg; + goto err_pm; }
/* Filters */ @@ -1399,7 +1399,6 @@ static int bdisp_probe(struct platform_device *pdev) bdisp_hw_free_filters(bdisp->dev); err_pm: pm_runtime_put(dev); -err_dbg: bdisp_debugfs_remove(bdisp); err_v4l2: v4l2_device_unregister(&bdisp->v4l2_dev);
From: Xiaolong Huang butterflyhuangxx@gmail.com
[ Upstream commit 7b817585b730665126b45df5508dd69526448bc8 ]
In bttv_probe, if a function such as pci_enable_device(), pci_set_dma_mask() or request_mem_region() fails, the allocated memory for btv should be released.
Signed-off-by: Xiaolong Huang butterflyhuangxx@gmail.com Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/pci/bt8xx/bttv-driver.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c index 9144f795fb933..b721720f9845a 100644 --- a/drivers/media/pci/bt8xx/bttv-driver.c +++ b/drivers/media/pci/bt8xx/bttv-driver.c @@ -4013,11 +4013,13 @@ static int bttv_probe(struct pci_dev *dev, const struct pci_device_id *pci_id) btv->id = dev->device; if (pci_enable_device(dev)) { pr_warn("%d: Can't enable device\n", btv->c.nr); - return -EIO; + result = -EIO; + goto free_mem; } if (pci_set_dma_mask(dev, DMA_BIT_MASK(32))) { pr_warn("%d: No suitable DMA available\n", btv->c.nr); - return -EIO; + result = -EIO; + goto free_mem; } if (!request_mem_region(pci_resource_start(dev,0), pci_resource_len(dev,0), @@ -4025,7 +4027,8 @@ static int bttv_probe(struct pci_dev *dev, const struct pci_device_id *pci_id) pr_warn("%d: can't request iomem (0x%llx)\n", btv->c.nr, (unsigned long long)pci_resource_start(dev, 0)); - return -EBUSY; + result = -EBUSY; + goto free_mem; } pci_set_master(dev); pci_set_command(dev); @@ -4211,6 +4214,10 @@ static int bttv_probe(struct pci_dev *dev, const struct pci_device_id *pci_id) release_mem_region(pci_resource_start(btv->c.pci,0), pci_resource_len(btv->c.pci,0)); pci_disable_device(btv->c.pci); + +free_mem: + bttvs[btv->c.nr] = NULL; + kfree(btv); return result; }
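For reference, a generic sketch of the unwind shape used here and in several other probe() fixes in this series (struct example_priv and example_setup_irq() are made up for illustration):

#include <linux/pci.h>
#include <linux/slab.h>

struct example_priv { int irq; };

int example_setup_irq(struct example_priv *priv);	/* hypothetical helper */

static int example_probe(struct pci_dev *dev)
{
	struct example_priv *priv;
	int ret;

	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	ret = pci_enable_device(dev);
	if (ret)
		goto free_mem;

	ret = example_setup_irq(priv);
	if (ret)
		goto disable_dev;

	return 0;

disable_dev:
	pci_disable_device(dev);
free_mem:
	kfree(priv);	/* every failure path releases the allocation */
	return ret;
}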
From: Borislav Petkov bp@suse.de
[ Upstream commit e100777016fdf6ec3a9d7c1773b15a2b5eca6c55 ]
mce_rdmsrl() and mce_wrmsrl() do get called from the #MC handler, which is already marked "noinstr".
Commit
e2def7d49d08 ("x86/mce: Make mce_rdmsrl() panic on an inaccessible MSR")
already got rid of the instrumentation in the MSR accessors; fix the annotation now too, in order to get rid of:
vmlinux.o: warning: objtool: do_machine_check()+0x4a: call to mce_rdmsrl() leaves .noinstr.text section
Signed-off-by: Borislav Petkov bp@suse.de Link: https://lkml.kernel.org/r/20200915194020.28807-1-bp@alien8.de Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/cpu/mce/core.c | 27 +++++++++++++++++++++------ 1 file changed, 21 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index fc4f8c04bdb56..4288645425f15 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -374,16 +374,25 @@ static int msr_to_offset(u32 msr) }
/* MSR access wrappers used for error injection */ -static u64 mce_rdmsrl(u32 msr) +static noinstr u64 mce_rdmsrl(u32 msr) { u64 v;
if (__this_cpu_read(injectm.finished)) { - int offset = msr_to_offset(msr); + int offset; + u64 ret;
+ instrumentation_begin(); + + offset = msr_to_offset(msr); if (offset < 0) - return 0; - return *(u64 *)((char *)this_cpu_ptr(&injectm) + offset); + ret = 0; + else + ret = *(u64 *)((char *)this_cpu_ptr(&injectm) + offset); + + instrumentation_end(); + + return ret; }
if (rdmsrl_safe(msr, &v)) { @@ -399,13 +408,19 @@ static u64 mce_rdmsrl(u32 msr) return v; }
-static void mce_wrmsrl(u32 msr, u64 v) +static noinstr void mce_wrmsrl(u32 msr, u64 v) { if (__this_cpu_read(injectm.finished)) { - int offset = msr_to_offset(msr); + int offset;
+ instrumentation_begin(); + + offset = msr_to_offset(msr); if (offset >= 0) *(u64 *)((char *)this_cpu_ptr(&injectm) + offset) = v; + + instrumentation_end(); + return; } wrmsrl(msr, v);
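For readers unfamiliar with the annotations, a hedged sketch of the shape this gives the accessors, assuming the usual kernel headers for noinstr, instrumentation_begin()/instrumentation_end() and unlikely() are in scope; the names below are illustrative, not the actual mce code:

extern bool example_slow_path;		/* stand-in condition */
u64 example_slow_lookup(u32 msr);	/* hypothetical instrumented helper */

static noinstr u64 sketch_rdmsrl(u32 msr)
{
	u64 ret = 0;

	if (unlikely(example_slow_path)) {
		/* bracket instrumentable work so objtool accepts it */
		instrumentation_begin();
		ret = example_slow_lookup(msr);
		instrumentation_end();
		return ret;
	}

	/* the fast path stays free of instrumentation */
	return ret;
}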
From: Longfang Liu liulongfang@huawei.com
[ Upstream commit 24efcec2919afa7d56f848c83a605b46c8042a53 ]
1. Fix the 'mac' memory leak when allocating 'pbuf' fails.
2. Fix the 'qps' leak when allocating 'qp_ctx' fails.
Signed-off-by: Longfang Liu liulongfang@huawei.com Signed-off-by: Herbert Xu herbert@gondor.apana.org.au Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/crypto/hisilicon/sec2/sec_crypto.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c index 497969ae8b230..b9973d152a24a 100644 --- a/drivers/crypto/hisilicon/sec2/sec_crypto.c +++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c @@ -342,11 +342,14 @@ static int sec_alg_resource_alloc(struct sec_ctx *ctx, ret = sec_alloc_pbuf_resource(dev, res); if (ret) { dev_err(dev, "fail to alloc pbuf dma resource!\n"); - goto alloc_fail; + goto alloc_pbuf_fail; } }
return 0; +alloc_pbuf_fail: + if (ctx->alg_type == SEC_AEAD) + sec_free_mac_resource(dev, qp_ctx->res); alloc_fail: sec_free_civ_resource(dev, res);
@@ -457,8 +460,10 @@ static int sec_ctx_base_init(struct sec_ctx *ctx) ctx->fake_req_limit = QM_Q_DEPTH >> 1; ctx->qp_ctx = kcalloc(sec->ctx_q_num, sizeof(struct sec_qp_ctx), GFP_KERNEL); - if (!ctx->qp_ctx) - return -ENOMEM; + if (!ctx->qp_ctx) { + ret = -ENOMEM; + goto err_destroy_qps; + }
for (i = 0; i < sec->ctx_q_num; i++) { ret = sec_create_qp_ctx(&sec->qm, ctx, i, 0); @@ -467,12 +472,15 @@ static int sec_ctx_base_init(struct sec_ctx *ctx) }
return 0; + err_sec_release_qp_ctx: for (i = i - 1; i >= 0; i--) sec_release_qp_ctx(ctx, &ctx->qp_ctx[i]);
- sec_destroy_qps(ctx->qps, sec->ctx_q_num); kfree(ctx->qp_ctx); +err_destroy_qps: + sec_destroy_qps(ctx->qps, sec->ctx_q_num); + return ret; }
From: Brad Bishop bradleyb@fuzziesquirrel.com
[ Upstream commit 0b546bbe9474ff23e6843916ad6d567f703b2396 ]
Use a clock divider tuned to a 200MHz FSI bus frequency (the maximum). Use of the previous divider at 200MHz results in corrupt data from endpoint devices. Ideally the clock divider would be calculated from the FSI clock, but that would require some significant work on the FSI driver. With FSI frequencies slower than 200MHz, the SPI clock will simply run slower, but safely.
Signed-off-by: Brad Bishop bradleyb@fuzziesquirrel.com Signed-off-by: Eddie James eajames@linux.ibm.com Signed-off-by: Joel Stanley joel@jms.id.au Link: https://lore.kernel.org/r/20200909222857.28653-3-eajames@linux.ibm.com Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/spi/spi-fsi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/spi/spi-fsi.c b/drivers/spi/spi-fsi.c index 37a3e0f8e7526..6e6b3c6437288 100644 --- a/drivers/spi/spi-fsi.c +++ b/drivers/spi/spi-fsi.c @@ -350,7 +350,7 @@ static int fsi_spi_transfer_init(struct fsi_spi *ctx) u64 status = 0ULL; u64 wanted_clock_cfg = SPI_FSI_CLOCK_CFG_ECC_DISABLE | SPI_FSI_CLOCK_CFG_SCK_NO_DEL | - FIELD_PREP(SPI_FSI_CLOCK_CFG_SCK_DIV, 4); + FIELD_PREP(SPI_FSI_CLOCK_CFG_SCK_DIV, 19);
end = jiffies + msecs_to_jiffies(SPI_FSI_INIT_TIMEOUT_MS); do {
From: Ming Lei ming.lei@redhat.com
[ Upstream commit 285008501c65a3fcee05d2c2c26cbf629ceff2f0 ]
NVMe shares a tagset between the fabrics queue and the admin queue, or between connect_q and the NS queues, so hctx_may_queue() can be called when allocating requests for these queues.

Tags can be reserved in these tagsets. Before error recovery there are often lots of in-flight requests which can't be completed, and a new reserved request may be needed on the error recovery path. However, hctx_may_queue() can keep returning false because there are too many in-flight requests which can't be completed during error handling. In the end, nothing can proceed.
Fix this issue by always allowing reserved tag allocation in hctx_may_queue(). This is reasonable because reserved tags are supposed to always be available.
Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Hannes Reinecke hare@suse.de Cc: David Milburn dmilburn@redhat.com Cc: Ewan D. Milne emilne@redhat.com Signed-off-by: Ming Lei ming.lei@redhat.com Signed-off-by: Jens Axboe axboe@kernel.dk Signed-off-by: Sasha Levin sashal@kernel.org --- block/blk-mq-tag.c | 3 ++- block/blk-mq.c | 5 +++-- 2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c index 32d82e23b0953..a1c1e7c611f7b 100644 --- a/block/blk-mq-tag.c +++ b/block/blk-mq-tag.c @@ -59,7 +59,8 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx) static int __blk_mq_get_tag(struct blk_mq_alloc_data *data, struct sbitmap_queue *bt) { - if (!data->q->elevator && !hctx_may_queue(data->hctx, bt)) + if (!data->q->elevator && !(data->flags & BLK_MQ_REQ_RESERVED) && + !hctx_may_queue(data->hctx, bt)) return BLK_MQ_NO_TAG;
if (data->shallow_depth) diff --git a/block/blk-mq.c b/block/blk-mq.c index cdced4aca2e81..96425e4bd2b2c 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -1105,10 +1105,11 @@ static bool __blk_mq_get_driver_tag(struct request *rq) if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) { bt = &rq->mq_hctx->tags->breserved_tags; tag_offset = 0; + } else { + if (!hctx_may_queue(rq->mq_hctx, bt)) + return false; }
- if (!hctx_may_queue(rq->mq_hctx, bt)) - return false; tag = __sbitmap_queue_get(bt); if (tag == BLK_MQ_NO_TAG) return false;
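For illustration, a sketch of the kind of caller this protects (hypothetical function, not from the patch): error-recovery paths allocate with BLK_MQ_REQ_RESERVED and rely on the reserved pool being usable even when the queue is saturated with stuck regular requests:

#include <linux/blk-mq.h>

static struct request *example_alloc_recovery_rq(struct request_queue *q)
{
	/* reserved + nowait: must not get stuck behind un-completable I/O */
	return blk_mq_alloc_request(q, REQ_OP_DRV_OUT,
				    BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
}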
From: Borislav Petkov bp@suse.de
[ Upstream commit e2def7d49d0812ea40a224161b2001b2e815dce2 ]
If an exception needs to be handled while reading an MSR - which is in most cases caused by a #GP on a non-existent MSR - then this is most likely the incarnation of a BIOS or a hardware bug. Such a bug violates the architectural guarantee that MCA banks are present with all MSRs belonging to them.
The proper fix belongs in the hardware/firmware - not in the kernel.
Handling an #MC exception which is raised while an NMI is being handled would cause the nasty NMI nesting issue because of the shortcoming of IRET reenabling NMIs when executed. And the machine is in an #MC context already so <Deity> be at its side.
Tracing MSR accesses while in #MC is another no-no due to tracing being inherently a bad idea in atomic context:
vmlinux.o: warning: objtool: do_machine_check()+0x4a: call to mce_rdmsrl() leaves .noinstr.text section
so remove all that "additional" functionality from mce_rdmsrl() and provide it with a special exception handler which panics the machine when that MSR is not accessible.
The exception handler prints a human-readable message explaining what the panic reason is but, what is more, it panics while in the #GP handler and the latter won't have executed an IRET, thus avoiding the NMI nesting issue in the case when the #MC has happened while handling an NMI. (#MC itself won't be reenabled until MCG_STATUS has been cleared.)
Suggested-by: Andy Lutomirski luto@kernel.org Suggested-by: Peter Zijlstra peterz@infradead.org [ Add missing prototypes for ex_handler_* ] Reported-by: kernel test robot lkp@intel.com Signed-off-by: Borislav Petkov bp@suse.de Reviewed-by: Tony Luck tony.luck@intel.com Link: https://lkml.kernel.org/r/20200906212130.GA28456@zn.tnic Signed-off-by: Sasha Levin sashal@kernel.org --- arch/x86/kernel/cpu/mce/core.c | 72 +++++++++++++++++++++++++----- arch/x86/kernel/cpu/mce/internal.h | 10 +++++ 2 files changed, 70 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 4288645425f15..84eef4fa95990 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -373,10 +373,28 @@ static int msr_to_offset(u32 msr) return -1; }
+__visible bool ex_handler_rdmsr_fault(const struct exception_table_entry *fixup, + struct pt_regs *regs, int trapnr, + unsigned long error_code, + unsigned long fault_addr) +{ + pr_emerg("MSR access error: RDMSR from 0x%x at rIP: 0x%lx (%pS)\n", + (unsigned int)regs->cx, regs->ip, (void *)regs->ip); + + show_stack_regs(regs); + + panic("MCA architectural violation!\n"); + + while (true) + cpu_relax(); + + return true; +} + /* MSR access wrappers used for error injection */ static noinstr u64 mce_rdmsrl(u32 msr) { - u64 v; + DECLARE_ARGS(val, low, high);
if (__this_cpu_read(injectm.finished)) { int offset; @@ -395,21 +413,43 @@ static noinstr u64 mce_rdmsrl(u32 msr) return ret; }
- if (rdmsrl_safe(msr, &v)) { - WARN_ONCE(1, "mce: Unable to read MSR 0x%x!\n", msr); - /* - * Return zero in case the access faulted. This should - * not happen normally but can happen if the CPU does - * something weird, or if the code is buggy. - */ - v = 0; - } + /* + * RDMSR on MCA MSRs should not fault. If they do, this is very much an + * architectural violation and needs to be reported to hw vendor. Panic + * the box to not allow any further progress. + */ + asm volatile("1: rdmsr\n" + "2:\n" + _ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_rdmsr_fault) + : EAX_EDX_RET(val, low, high) : "c" (msr));
- return v; + + return EAX_EDX_VAL(val, low, high); +} + +__visible bool ex_handler_wrmsr_fault(const struct exception_table_entry *fixup, + struct pt_regs *regs, int trapnr, + unsigned long error_code, + unsigned long fault_addr) +{ + pr_emerg("MSR access error: WRMSR to 0x%x (tried to write 0x%08x%08x) at rIP: 0x%lx (%pS)\n", + (unsigned int)regs->cx, (unsigned int)regs->dx, (unsigned int)regs->ax, + regs->ip, (void *)regs->ip); + + show_stack_regs(regs); + + panic("MCA architectural violation!\n"); + + while (true) + cpu_relax(); + + return true; }
static noinstr void mce_wrmsrl(u32 msr, u64 v) { + u32 low, high; + if (__this_cpu_read(injectm.finished)) { int offset;
@@ -423,7 +463,15 @@ static noinstr void mce_wrmsrl(u32 msr, u64 v)
return; } - wrmsrl(msr, v); + + low = (u32)v; + high = (u32)(v >> 32); + + /* See comment in mce_rdmsrl() */ + asm volatile("1: wrmsr\n" + "2:\n" + _ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_wrmsr_fault) + : : "c" (msr), "a"(low), "d" (high) : "memory"); }
/* diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h index 6473070b5da49..b122610e9046a 100644 --- a/arch/x86/kernel/cpu/mce/internal.h +++ b/arch/x86/kernel/cpu/mce/internal.h @@ -185,4 +185,14 @@ extern bool amd_filter_mce(struct mce *m); static inline bool amd_filter_mce(struct mce *m) { return false; }; #endif
+__visible bool ex_handler_rdmsr_fault(const struct exception_table_entry *fixup, + struct pt_regs *regs, int trapnr, + unsigned long error_code, + unsigned long fault_addr); + +__visible bool ex_handler_wrmsr_fault(const struct exception_table_entry *fixup, + struct pt_regs *regs, int trapnr, + unsigned long error_code, + unsigned long fault_addr); + #endif /* __X86_MCE_INTERNAL_H__ */
From: Laurent Pinchart laurent.pinchart+renesas@ideasonboard.com
[ Upstream commit cdd4f7824994c9254acc6e415750529ea2d2cfe0 ]
The fwnode reference corresponding to the endpoint is leaked in an error path of the rcar_drif_parse_subdevs() function. Fix it, and reorganize fwnode reference handling in the function to release references early, simplifying error paths.
Signed-off-by: Laurent Pinchart laurent.pinchart+renesas@ideasonboard.com Signed-off-by: Sakari Ailus sakari.ailus@linux.intel.com Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/rcar_drif.c | 16 +++++----------- 1 file changed, 5 insertions(+), 11 deletions(-)
diff --git a/drivers/media/platform/rcar_drif.c b/drivers/media/platform/rcar_drif.c index 3d2451ac347d7..3f1e5cb8b1976 100644 --- a/drivers/media/platform/rcar_drif.c +++ b/drivers/media/platform/rcar_drif.c @@ -1227,28 +1227,22 @@ static int rcar_drif_parse_subdevs(struct rcar_drif_sdr *sdr) if (!ep) return 0;
+ /* Get the endpoint properties */ + rcar_drif_get_ep_properties(sdr, ep); + fwnode = fwnode_graph_get_remote_port_parent(ep); + fwnode_handle_put(ep); if (!fwnode) { dev_warn(sdr->dev, "bad remote port parent\n"); - fwnode_handle_put(ep); return -EINVAL; }
sdr->ep.asd.match.fwnode = fwnode; sdr->ep.asd.match_type = V4L2_ASYNC_MATCH_FWNODE; ret = v4l2_async_notifier_add_subdev(notifier, &sdr->ep.asd); - if (ret) { - fwnode_handle_put(fwnode); - return ret; - } - - /* Get the endpoint properties */ - rcar_drif_get_ep_properties(sdr, ep); - fwnode_handle_put(fwnode); - fwnode_handle_put(ep);
- return 0; + return ret; }
/* Check if the given device is the primary bond */
From: Adam Goode agoode@google.com
[ Upstream commit 8a652a17e3c005dcdae31b6c8fdf14382a29cbbe ]
bFrameIndex and bFormatIndex can be negotiated by the camera during probing, resulting in the camera choosing a different format than expected. v4l2 can already accommodate such changes, but the code was not updating the proper fields.
Without such a change, v4l2 would potentially interpret the payload incorrectly, causing corrupted output. This was happening on the Elgato HD60 S+, which currently always renegotiates to format 1.
As an aside, the Elgato firmware is buggy and should not be renegotiating, but it is still a valid thing for the camera to do. Both macOS and Windows will properly probe and read uncorrupted images from this camera.
With this change, both qv4l2 and chromium can now read uncorrupted video from the Elgato HD60 S+.
[Add blank lines, remove periods at the end of messages]
Signed-off-by: Adam Goode agoode@google.com Signed-off-by: Laurent Pinchart laurent.pinchart@ideasonboard.com Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/usb/uvc/uvc_v4l2.c | 30 ++++++++++++++++++++++++++++++ 1 file changed, 30 insertions(+)
diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c index 0335e69b70abe..5e6f3153b5ff8 100644 --- a/drivers/media/usb/uvc/uvc_v4l2.c +++ b/drivers/media/usb/uvc/uvc_v4l2.c @@ -247,11 +247,41 @@ static int uvc_v4l2_try_format(struct uvc_streaming *stream, if (ret < 0) goto done;
+ /* After the probe, update fmt with the values returned from + * negotiation with the device. + */ + for (i = 0; i < stream->nformats; ++i) { + if (probe->bFormatIndex == stream->format[i].index) { + format = &stream->format[i]; + break; + } + } + + if (i == stream->nformats) { + uvc_trace(UVC_TRACE_FORMAT, "Unknown bFormatIndex %u\n", + probe->bFormatIndex); + return -EINVAL; + } + + for (i = 0; i < format->nframes; ++i) { + if (probe->bFrameIndex == format->frame[i].bFrameIndex) { + frame = &format->frame[i]; + break; + } + } + + if (i == format->nframes) { + uvc_trace(UVC_TRACE_FORMAT, "Unknown bFrameIndex %u\n", + probe->bFrameIndex); + return -EINVAL; + } + fmt->fmt.pix.width = frame->wWidth; fmt->fmt.pix.height = frame->wHeight; fmt->fmt.pix.field = V4L2_FIELD_NONE; fmt->fmt.pix.bytesperline = uvc_v4l2_get_bytesperline(format, frame); fmt->fmt.pix.sizeimage = probe->dwMaxVideoFrameSize; + fmt->fmt.pix.pixelformat = format->fcc; fmt->fmt.pix.colorspace = format->colorspace;
if (uvc_format != NULL)
From: Rich Felker dalias@libc.org
[ Upstream commit 4d671d922d51907bc41f1f7f2dc737c928ae78fd ]
Asynchronous termination of a thread outside of the userspace thread library's knowledge is an unsafe operation that leaves the process in an inconsistent, corrupt, and possibly unrecoverable state. In order to make new actions that may be added in the future safe on kernels not aware of them, change the default action from SECCOMP_RET_KILL_THREAD to SECCOMP_RET_KILL_PROCESS.
Signed-off-by: Rich Felker dalias@libc.org Link: https://lore.kernel.org/r/20200829015609.GA32566@brightrain.aerifal.cx [kees: Fixed up coredump selection logic to match] Signed-off-by: Kees Cook keescook@chromium.org Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/seccomp.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/seccomp.c b/kernel/seccomp.c index 676d4af621038..f754c1087e413 100644 --- a/kernel/seccomp.c +++ b/kernel/seccomp.c @@ -1020,7 +1020,7 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd, default: seccomp_log(this_syscall, SIGSYS, action, true); /* Dump core only if this is the last remaining thread. */ - if (action == SECCOMP_RET_KILL_PROCESS || + if (action != SECCOMP_RET_KILL_THREAD || get_nr_threads(current) == 1) { kernel_siginfo_t info;
@@ -1030,10 +1030,10 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd, seccomp_init_siginfo(&info, this_syscall, data); do_coredump(&info); } - if (action == SECCOMP_RET_KILL_PROCESS) - do_group_exit(SIGSYS); - else + if (action == SECCOMP_RET_KILL_THREAD) do_exit(SIGSYS); + else + do_group_exit(SIGSYS); }
unreachable();
Hi Sasha,
I'd prefer this was not backported -- it's not a bug fix, and if the behavioral change is actually disruptive, I'd like to keep the fall-out contained. :)
Thanks!
-Kees
On Mon, Oct 19, 2020 at 04:28:19PM -0700, Kees Cook wrote:
Hi Sasha,
I'd prefer this was not backported -- it's not a bug fix, and if the behavioral change is actually disruptive, I'd like to keep the fall-out contained. :)
Now dropped, thanks!
From: Pali Rohár pali@kernel.org
[ Upstream commit 8ebe2607965d3e2dc02029e8c7dd35fbe508ffd0 ]
Before parsing the CISTPL_VERS_1 structure, check that its size is at least two bytes to prevent a buffer overflow.
Signed-off-by: Pali Rohár pali@kernel.org Link: https://lore.kernel.org/r/20200727133837.19086-2-pali@kernel.org Signed-off-by: Ulf Hansson ulf.hansson@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/mmc/core/sdio_cis.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/drivers/mmc/core/sdio_cis.c b/drivers/mmc/core/sdio_cis.c index e0655278c5c32..3efaa9534a777 100644 --- a/drivers/mmc/core/sdio_cis.c +++ b/drivers/mmc/core/sdio_cis.c @@ -26,6 +26,9 @@ static int cistpl_vers_1(struct mmc_card *card, struct sdio_func *func, unsigned i, nr_strings; char **buffer, *string;
+ if (size < 2) + return 0; + /* Find all null-terminated (including zero length) strings in the TPLLV1_INFO field. Trailing garbage is ignored. */ buf += 2;
From: Mauro Carvalho Chehab mchehab+huawei@kernel.org
[ Upstream commit 15a36aae1ec1c1f17149b6113b92631791830740 ]
As reported by smatch:

	drivers/media/pci/saa7134//saa7134-tvaudio.c:686 saa_dsp_writel() warn: should 'reg << 2' be a 64 bit type?

On a 64-bit kernel, the shift might be bigger than 32 bits.

In reality this should never happen, but let's silence the warning.
Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/pci/saa7134/saa7134-tvaudio.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/media/pci/saa7134/saa7134-tvaudio.c b/drivers/media/pci/saa7134/saa7134-tvaudio.c index 79e1afb710758..5cc4ef21f9d37 100644 --- a/drivers/media/pci/saa7134/saa7134-tvaudio.c +++ b/drivers/media/pci/saa7134/saa7134-tvaudio.c @@ -683,7 +683,8 @@ int saa_dsp_writel(struct saa7134_dev *dev, int reg, u32 value) { int err;
- audio_dbg(2, "dsp write reg 0x%x = 0x%06x\n", reg << 2, value); + audio_dbg(2, "dsp write reg 0x%x = 0x%06x\n", + (reg << 2) & 0xffffffff, value); err = saa_dsp_wait_bit(dev,SAA7135_DSP_RWSTATE_WRR); if (err < 0) return err;
From: Dinghao Liu dinghao.liu@zju.edu.cn
[ Upstream commit c1bca5b5ced0cbd779d56f60cdbc9f5e6f6449fe ]
When aspect_ratio_crop_init() fails, curr_stream needs to be freed just as is done in the subsequent error paths. However, the current code returns directly and ends up leaking memory.
Signed-off-by: Dinghao Liu dinghao.liu@zju.edu.cn Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/staging/media/atomisp/pci/sh_css.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/media/atomisp/pci/sh_css.c b/drivers/staging/media/atomisp/pci/sh_css.c index a68cbb4995f0f..33a0f8ff82aa8 100644 --- a/drivers/staging/media/atomisp/pci/sh_css.c +++ b/drivers/staging/media/atomisp/pci/sh_css.c @@ -9521,7 +9521,7 @@ ia_css_stream_create(const struct ia_css_stream_config *stream_config, if (err) { IA_CSS_LEAVE_ERR(err); - return err; + goto ERR; } #endif for (i = 0; i < num_pipes; i++)
From: Vikash Garodia vgarodia@codeaurora.org
[ Upstream commit e1c69c4eef61ffe295b747992c6fd849e6cd747d ]
There are a few list handling issues when adding and deleting nodes in the registered buf list in the driver.

1. list addition - the buffer is added to the list during buf_init but not deleted during cleanup.
2. list deletion - in capture streamoff, the list was reinitialized. As a result, if any node was present in the list, it would lead to an issue when cleaning up that node during buf_cleanup.
Corresponding call traces below:

[ 165.751014] Call trace:
[ 165.753541]  __list_add_valid+0x58/0x88
[ 165.757532]  venus_helper_vb2_buf_init+0x74/0xa8 [venus_core]
[ 165.763450]  vdec_buf_init+0x34/0xb4 [venus_dec]
[ 165.768271]  __buf_prepare+0x598/0x8a0 [videobuf2_common]
[ 165.773820]  vb2_core_qbuf+0xb4/0x334 [videobuf2_common]
[ 165.779298]  vb2_qbuf+0x78/0xb8 [videobuf2_v4l2]
[ 165.784053]  v4l2_m2m_qbuf+0x80/0xf8 [v4l2_mem2mem]
[ 165.789067]  v4l2_m2m_ioctl_qbuf+0x2c/0x38 [v4l2_mem2mem]
[ 165.794624]  v4l_qbuf+0x48/0x58
[ 1797.556001] Call trace:
[ 1797.558516]  __list_del_entry_valid+0x88/0x9c
[ 1797.562989]  vdec_buf_cleanup+0x54/0x228 [venus_dec]
[ 1797.568088]  __buf_prepare+0x270/0x8a0 [videobuf2_common]
[ 1797.573625]  vb2_core_qbuf+0xb4/0x338 [videobuf2_common]
[ 1797.579082]  vb2_qbuf+0x78/0xb8 [videobuf2_v4l2]
[ 1797.583830]  v4l2_m2m_qbuf+0x80/0xf8 [v4l2_mem2mem]
[ 1797.588843]  v4l2_m2m_ioctl_qbuf+0x2c/0x38 [v4l2_mem2mem]
[ 1797.594389]  v4l_qbuf+0x48/0x58
Signed-off-by: Vikash Garodia vgarodia@codeaurora.org Reviewed-by: Fritz Koenig frkoenig@chromium.org Signed-off-by: Stanimir Varbanov stanimir.varbanov@linaro.org Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/qcom/venus/vdec.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c index 7c4c483d54389..76be14efbfb09 100644 --- a/drivers/media/platform/qcom/venus/vdec.c +++ b/drivers/media/platform/qcom/venus/vdec.c @@ -1088,8 +1088,6 @@ static int vdec_stop_capture(struct venus_inst *inst) break; }
- INIT_LIST_HEAD(&inst->registeredbufs); - return ret; }
@@ -1189,6 +1187,14 @@ static int vdec_buf_init(struct vb2_buffer *vb) static void vdec_buf_cleanup(struct vb2_buffer *vb) { struct venus_inst *inst = vb2_get_drv_priv(vb->vb2_queue); + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + struct venus_buffer *buf = to_venus_buffer(vbuf); + + mutex_lock(&inst->lock); + if (vb->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) + if (!list_empty(&inst->registeredbufs)) + list_del_init(&buf->reg_list); + mutex_unlock(&inst->lock);
inst->buf_count--; if (!inst->buf_count)
From: Peter Zijlstra peterz@infradead.org
[ Upstream commit 70d932985757fbe978024db313001218e9f8fe5c ]
The current notifiers have the following error handling pattern all over the place:
	int err, nr;

	err = __foo_notifier_call_chain(&chain, val_up, v, -1, &nr);
	if (err & NOTIFY_STOP_MASK)
		__foo_notifier_call_chain(&chain, val_down, v, nr-1, NULL);
And aside from the endless repetition thereof, it is broken. Consider blocking notifiers: both calls take and drop the rwsem, which means the notifier list can change between the two calls, making @nr meaningless.
Fix this by replacing all the __foo_notifier_call_chain() functions with foo_notifier_call_chain_robust(), which embeds the above pattern but ensures it runs inside a single lock region.
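For illustration, a minimal sketch of a converted caller; the chain and the FOO_* event values are hypothetical, while the _robust() call and notifier_to_errno() are the helpers this patch and the existing notifier code actually provide:

	int ret;

	ret = blocking_notifier_call_chain_robust(&foo_chain, FOO_PREPARE,
						  FOO_PREPARE_FAILED, NULL);
	/* On NOTIFY_BAD/NOTIFY_STOP, the already-notified callbacks have
	 * been replayed with FOO_PREPARE_FAILED under the same rwsem. */
	ret = notifier_to_errno(ret);
	if (ret)
		return ret;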
Note: I switched atomic_notifier_call_chain_robust() to use the spinlock, since RCU cannot provide the guarantee required for the recovery.
Note: software_resume() error handling was broken afaict.
Signed-off-by: Peter Zijlstra (Intel) peterz@infradead.org Signed-off-by: Ingo Molnar mingo@kernel.org Acked-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Link: https://lore.kernel.org/r/20200818135804.325626653@infradead.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/notifier.h | 15 ++- kernel/cpu_pm.c | 48 ++++------ kernel/notifier.c | 144 ++++++++++++++++++----------- kernel/power/hibernate.c | 39 ++++---- kernel/power/main.c | 8 +- kernel/power/power.h | 3 +- kernel/power/suspend.c | 14 ++- kernel/power/user.c | 14 +-- tools/power/pm-graph/sleepgraph.py | 2 +- 9 files changed, 147 insertions(+), 140 deletions(-)
diff --git a/include/linux/notifier.h b/include/linux/notifier.h index 018947611483e..2fb373a5c1ede 100644 --- a/include/linux/notifier.h +++ b/include/linux/notifier.h @@ -161,20 +161,19 @@ extern int srcu_notifier_chain_unregister(struct srcu_notifier_head *nh,
extern int atomic_notifier_call_chain(struct atomic_notifier_head *nh, unsigned long val, void *v); -extern int __atomic_notifier_call_chain(struct atomic_notifier_head *nh, - unsigned long val, void *v, int nr_to_call, int *nr_calls); extern int blocking_notifier_call_chain(struct blocking_notifier_head *nh, unsigned long val, void *v); -extern int __blocking_notifier_call_chain(struct blocking_notifier_head *nh, - unsigned long val, void *v, int nr_to_call, int *nr_calls); extern int raw_notifier_call_chain(struct raw_notifier_head *nh, unsigned long val, void *v); -extern int __raw_notifier_call_chain(struct raw_notifier_head *nh, - unsigned long val, void *v, int nr_to_call, int *nr_calls); extern int srcu_notifier_call_chain(struct srcu_notifier_head *nh, unsigned long val, void *v); -extern int __srcu_notifier_call_chain(struct srcu_notifier_head *nh, - unsigned long val, void *v, int nr_to_call, int *nr_calls); + +extern int atomic_notifier_call_chain_robust(struct atomic_notifier_head *nh, + unsigned long val_up, unsigned long val_down, void *v); +extern int blocking_notifier_call_chain_robust(struct blocking_notifier_head *nh, + unsigned long val_up, unsigned long val_down, void *v); +extern int raw_notifier_call_chain_robust(struct raw_notifier_head *nh, + unsigned long val_up, unsigned long val_down, void *v);
#define NOTIFY_DONE 0x0000 /* Don't care */ #define NOTIFY_OK 0x0001 /* Suits me */ diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c index 44a259338e33d..f7e1d0eccdbc6 100644 --- a/kernel/cpu_pm.c +++ b/kernel/cpu_pm.c @@ -15,18 +15,28 @@
static ATOMIC_NOTIFIER_HEAD(cpu_pm_notifier_chain);
-static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls) +static int cpu_pm_notify(enum cpu_pm_event event) { int ret;
/* - * __atomic_notifier_call_chain has a RCU read critical section, which + * atomic_notifier_call_chain has a RCU read critical section, which * could be disfunctional in cpu idle. Copy RCU_NONIDLE code to let * RCU know this. */ rcu_irq_enter_irqson(); - ret = __atomic_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL, - nr_to_call, nr_calls); + ret = atomic_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL); + rcu_irq_exit_irqson(); + + return notifier_to_errno(ret); +} + +static int cpu_pm_notify_robust(enum cpu_pm_event event_up, enum cpu_pm_event event_down) +{ + int ret; + + rcu_irq_enter_irqson(); + ret = atomic_notifier_call_chain_robust(&cpu_pm_notifier_chain, event_up, event_down, NULL); rcu_irq_exit_irqson();
return notifier_to_errno(ret); @@ -80,18 +90,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier); */ int cpu_pm_enter(void) { - int nr_calls = 0; - int ret = 0; - - ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls); - if (ret) - /* - * Inform listeners (nr_calls - 1) about failure of CPU PM - * PM entry who are notified earlier to prepare for it. - */ - cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL); - - return ret; + return cpu_pm_notify_robust(CPU_PM_ENTER, CPU_PM_ENTER_FAILED); } EXPORT_SYMBOL_GPL(cpu_pm_enter);
@@ -109,7 +108,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_enter); */ int cpu_pm_exit(void) { - return cpu_pm_notify(CPU_PM_EXIT, -1, NULL); + return cpu_pm_notify(CPU_PM_EXIT); } EXPORT_SYMBOL_GPL(cpu_pm_exit);
@@ -131,18 +130,7 @@ EXPORT_SYMBOL_GPL(cpu_pm_exit); */ int cpu_cluster_pm_enter(void) { - int nr_calls = 0; - int ret = 0; - - ret = cpu_pm_notify(CPU_CLUSTER_PM_ENTER, -1, &nr_calls); - if (ret) - /* - * Inform listeners (nr_calls - 1) about failure of CPU cluster - * PM entry who are notified earlier to prepare for it. - */ - cpu_pm_notify(CPU_CLUSTER_PM_ENTER_FAILED, nr_calls - 1, NULL); - - return ret; + return cpu_pm_notify_robust(CPU_CLUSTER_PM_ENTER, CPU_CLUSTER_PM_ENTER_FAILED); } EXPORT_SYMBOL_GPL(cpu_cluster_pm_enter);
@@ -163,7 +151,7 @@ EXPORT_SYMBOL_GPL(cpu_cluster_pm_enter); */ int cpu_cluster_pm_exit(void) { - return cpu_pm_notify(CPU_CLUSTER_PM_EXIT, -1, NULL); + return cpu_pm_notify(CPU_CLUSTER_PM_EXIT); } EXPORT_SYMBOL_GPL(cpu_cluster_pm_exit);
diff --git a/kernel/notifier.c b/kernel/notifier.c index 84c987dfbe036..1b019cbca594a 100644 --- a/kernel/notifier.c +++ b/kernel/notifier.c @@ -94,6 +94,34 @@ static int notifier_call_chain(struct notifier_block **nl, } NOKPROBE_SYMBOL(notifier_call_chain);
+/** + * notifier_call_chain_robust - Inform the registered notifiers about an event + * and rollback on error. + * @nl: Pointer to head of the blocking notifier chain + * @val_up: Value passed unmodified to the notifier function + * @val_down: Value passed unmodified to the notifier function when recovering + * from an error on @val_up + * @v Pointer passed unmodified to the notifier function + * + * NOTE: It is important the @nl chain doesn't change between the two + * invocations of notifier_call_chain() such that we visit the + * exact same notifier callbacks; this rules out any RCU usage. + * + * Returns: the return value of the @val_up call. + */ +static int notifier_call_chain_robust(struct notifier_block **nl, + unsigned long val_up, unsigned long val_down, + void *v) +{ + int ret, nr = 0; + + ret = notifier_call_chain(nl, val_up, v, -1, &nr); + if (ret & NOTIFY_STOP_MASK) + notifier_call_chain(nl, val_down, v, nr-1, NULL); + + return ret; +} + /* * Atomic notifier chain routines. Registration and unregistration * use a spinlock, and call_chain is synchronized by RCU (no locks). @@ -144,13 +172,30 @@ int atomic_notifier_chain_unregister(struct atomic_notifier_head *nh, } EXPORT_SYMBOL_GPL(atomic_notifier_chain_unregister);
+int atomic_notifier_call_chain_robust(struct atomic_notifier_head *nh, + unsigned long val_up, unsigned long val_down, void *v) +{ + unsigned long flags; + int ret; + + /* + * Musn't use RCU; because then the notifier list can + * change between the up and down traversal. + */ + spin_lock_irqsave(&nh->lock, flags); + ret = notifier_call_chain_robust(&nh->head, val_up, val_down, v); + spin_unlock_irqrestore(&nh->lock, flags); + + return ret; +} +EXPORT_SYMBOL_GPL(atomic_notifier_call_chain_robust); +NOKPROBE_SYMBOL(atomic_notifier_call_chain_robust); + /** - * __atomic_notifier_call_chain - Call functions in an atomic notifier chain + * atomic_notifier_call_chain - Call functions in an atomic notifier chain * @nh: Pointer to head of the atomic notifier chain * @val: Value passed unmodified to notifier function * @v: Pointer passed unmodified to notifier function - * @nr_to_call: See the comment for notifier_call_chain. - * @nr_calls: See the comment for notifier_call_chain. * * Calls each function in a notifier chain in turn. The functions * run in an atomic context, so they must not block. @@ -163,24 +208,16 @@ EXPORT_SYMBOL_GPL(atomic_notifier_chain_unregister); * Otherwise the return value is the return value * of the last notifier function called. */ -int __atomic_notifier_call_chain(struct atomic_notifier_head *nh, - unsigned long val, void *v, - int nr_to_call, int *nr_calls) +int atomic_notifier_call_chain(struct atomic_notifier_head *nh, + unsigned long val, void *v) { int ret;
rcu_read_lock(); - ret = notifier_call_chain(&nh->head, val, v, nr_to_call, nr_calls); + ret = notifier_call_chain(&nh->head, val, v, -1, NULL); rcu_read_unlock(); - return ret; -} -EXPORT_SYMBOL_GPL(__atomic_notifier_call_chain); -NOKPROBE_SYMBOL(__atomic_notifier_call_chain);
-int atomic_notifier_call_chain(struct atomic_notifier_head *nh, - unsigned long val, void *v) -{ - return __atomic_notifier_call_chain(nh, val, v, -1, NULL); + return ret; } EXPORT_SYMBOL_GPL(atomic_notifier_call_chain); NOKPROBE_SYMBOL(atomic_notifier_call_chain); @@ -250,13 +287,30 @@ int blocking_notifier_chain_unregister(struct blocking_notifier_head *nh, } EXPORT_SYMBOL_GPL(blocking_notifier_chain_unregister);
+int blocking_notifier_call_chain_robust(struct blocking_notifier_head *nh, + unsigned long val_up, unsigned long val_down, void *v) +{ + int ret = NOTIFY_DONE; + + /* + * We check the head outside the lock, but if this access is + * racy then it does not matter what the result of the test + * is, we re-check the list after having taken the lock anyway: + */ + if (rcu_access_pointer(nh->head)) { + down_read(&nh->rwsem); + ret = notifier_call_chain_robust(&nh->head, val_up, val_down, v); + up_read(&nh->rwsem); + } + return ret; +} +EXPORT_SYMBOL_GPL(blocking_notifier_call_chain_robust); + /** - * __blocking_notifier_call_chain - Call functions in a blocking notifier chain + * blocking_notifier_call_chain - Call functions in a blocking notifier chain * @nh: Pointer to head of the blocking notifier chain * @val: Value passed unmodified to notifier function * @v: Pointer passed unmodified to notifier function - * @nr_to_call: See comment for notifier_call_chain. - * @nr_calls: See comment for notifier_call_chain. * * Calls each function in a notifier chain in turn. The functions * run in a process context, so they are allowed to block. @@ -268,9 +322,8 @@ EXPORT_SYMBOL_GPL(blocking_notifier_chain_unregister); * Otherwise the return value is the return value * of the last notifier function called. */ -int __blocking_notifier_call_chain(struct blocking_notifier_head *nh, - unsigned long val, void *v, - int nr_to_call, int *nr_calls) +int blocking_notifier_call_chain(struct blocking_notifier_head *nh, + unsigned long val, void *v) { int ret = NOTIFY_DONE;
@@ -281,19 +334,11 @@ int __blocking_notifier_call_chain(struct blocking_notifier_head *nh, */ if (rcu_access_pointer(nh->head)) { down_read(&nh->rwsem); - ret = notifier_call_chain(&nh->head, val, v, nr_to_call, - nr_calls); + ret = notifier_call_chain(&nh->head, val, v, -1, NULL); up_read(&nh->rwsem); } return ret; } -EXPORT_SYMBOL_GPL(__blocking_notifier_call_chain); - -int blocking_notifier_call_chain(struct blocking_notifier_head *nh, - unsigned long val, void *v) -{ - return __blocking_notifier_call_chain(nh, val, v, -1, NULL); -} EXPORT_SYMBOL_GPL(blocking_notifier_call_chain);
/* @@ -335,13 +380,18 @@ int raw_notifier_chain_unregister(struct raw_notifier_head *nh, } EXPORT_SYMBOL_GPL(raw_notifier_chain_unregister);
+int raw_notifier_call_chain_robust(struct raw_notifier_head *nh, + unsigned long val_up, unsigned long val_down, void *v) +{ + return notifier_call_chain_robust(&nh->head, val_up, val_down, v); +} +EXPORT_SYMBOL_GPL(raw_notifier_call_chain_robust); + /** - * __raw_notifier_call_chain - Call functions in a raw notifier chain + * raw_notifier_call_chain - Call functions in a raw notifier chain * @nh: Pointer to head of the raw notifier chain * @val: Value passed unmodified to notifier function * @v: Pointer passed unmodified to notifier function - * @nr_to_call: See comment for notifier_call_chain. - * @nr_calls: See comment for notifier_call_chain * * Calls each function in a notifier chain in turn. The functions * run in an undefined context. @@ -354,18 +404,10 @@ EXPORT_SYMBOL_GPL(raw_notifier_chain_unregister); * Otherwise the return value is the return value * of the last notifier function called. */ -int __raw_notifier_call_chain(struct raw_notifier_head *nh, - unsigned long val, void *v, - int nr_to_call, int *nr_calls) -{ - return notifier_call_chain(&nh->head, val, v, nr_to_call, nr_calls); -} -EXPORT_SYMBOL_GPL(__raw_notifier_call_chain); - int raw_notifier_call_chain(struct raw_notifier_head *nh, unsigned long val, void *v) { - return __raw_notifier_call_chain(nh, val, v, -1, NULL); + return notifier_call_chain(&nh->head, val, v, -1, NULL); } EXPORT_SYMBOL_GPL(raw_notifier_call_chain);
@@ -437,12 +479,10 @@ int srcu_notifier_chain_unregister(struct srcu_notifier_head *nh, EXPORT_SYMBOL_GPL(srcu_notifier_chain_unregister);
/** - * __srcu_notifier_call_chain - Call functions in an SRCU notifier chain + * srcu_notifier_call_chain - Call functions in an SRCU notifier chain * @nh: Pointer to head of the SRCU notifier chain * @val: Value passed unmodified to notifier function * @v: Pointer passed unmodified to notifier function - * @nr_to_call: See comment for notifier_call_chain. - * @nr_calls: See comment for notifier_call_chain * * Calls each function in a notifier chain in turn. The functions * run in a process context, so they are allowed to block. @@ -454,25 +494,17 @@ EXPORT_SYMBOL_GPL(srcu_notifier_chain_unregister); * Otherwise the return value is the return value * of the last notifier function called. */ -int __srcu_notifier_call_chain(struct srcu_notifier_head *nh, - unsigned long val, void *v, - int nr_to_call, int *nr_calls) +int srcu_notifier_call_chain(struct srcu_notifier_head *nh, + unsigned long val, void *v) { int ret; int idx;
idx = srcu_read_lock(&nh->srcu); - ret = notifier_call_chain(&nh->head, val, v, nr_to_call, nr_calls); + ret = notifier_call_chain(&nh->head, val, v, -1, NULL); srcu_read_unlock(&nh->srcu, idx); return ret; } -EXPORT_SYMBOL_GPL(__srcu_notifier_call_chain); - -int srcu_notifier_call_chain(struct srcu_notifier_head *nh, - unsigned long val, void *v) -{ - return __srcu_notifier_call_chain(nh, val, v, -1, NULL); -} EXPORT_SYMBOL_GPL(srcu_notifier_call_chain);
/** diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c index e7aa57fb2fdc3..1dee70815f3cd 100644 --- a/kernel/power/hibernate.c +++ b/kernel/power/hibernate.c @@ -706,8 +706,8 @@ static int load_image_and_restore(void) */ int hibernate(void) { - int error, nr_calls = 0; bool snapshot_test = false; + int error;
if (!hibernation_available()) { pm_pr_dbg("Hibernation not available.\n"); @@ -723,11 +723,9 @@ int hibernate(void)
pr_info("hibernation entry\n"); pm_prepare_console(); - error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls); - if (error) { - nr_calls--; - goto Exit; - } + error = pm_notifier_call_chain_robust(PM_HIBERNATION_PREPARE, PM_POST_HIBERNATION); + if (error) + goto Restore;
ksys_sync_helper();
@@ -785,7 +783,8 @@ int hibernate(void) /* Don't bother checking whether freezer_test_done is true */ freezer_test_done = false; Exit: - __pm_notifier_call_chain(PM_POST_HIBERNATION, nr_calls, NULL); + pm_notifier_call_chain(PM_POST_HIBERNATION); + Restore: pm_restore_console(); hibernate_release(); Unlock: @@ -804,7 +803,7 @@ int hibernate(void) */ int hibernate_quiet_exec(int (*func)(void *data), void *data) { - int error, nr_calls = 0; + int error;
lock_system_sleep();
@@ -815,11 +814,9 @@ int hibernate_quiet_exec(int (*func)(void *data), void *data)
pm_prepare_console();
- error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls); - if (error) { - nr_calls--; - goto exit; - } + error = pm_notifier_call_chain_robust(PM_HIBERNATION_PREPARE, PM_POST_HIBERNATION); + if (error) + goto restore;
error = freeze_processes(); if (error) @@ -880,8 +877,9 @@ int hibernate_quiet_exec(int (*func)(void *data), void *data) thaw_processes();
exit: - __pm_notifier_call_chain(PM_POST_HIBERNATION, nr_calls, NULL); + pm_notifier_call_chain(PM_POST_HIBERNATION);
+restore: pm_restore_console();
hibernate_release(); @@ -910,7 +908,7 @@ EXPORT_SYMBOL_GPL(hibernate_quiet_exec); */ static int software_resume(void) { - int error, nr_calls = 0; + int error;
/* * If the user said "noresume".. bail out early. @@ -997,11 +995,9 @@ static int software_resume(void)
pr_info("resume from hibernation\n"); pm_prepare_console(); - error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls); - if (error) { - nr_calls--; - goto Close_Finish; - } + error = pm_notifier_call_chain_robust(PM_RESTORE_PREPARE, PM_POST_RESTORE); + if (error) + goto Restore;
pm_pr_dbg("Preparing processes for hibernation restore.\n"); error = freeze_processes(); @@ -1017,7 +1013,8 @@ static int software_resume(void) error = load_image_and_restore(); thaw_processes(); Finish: - __pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL); + pm_notifier_call_chain(PM_POST_RESTORE); + Restore: pm_restore_console(); pr_info("resume failed (%d)\n", error); hibernate_release(); diff --git a/kernel/power/main.c b/kernel/power/main.c index 40f86ec4ab30d..0aefd6f57e0ac 100644 --- a/kernel/power/main.c +++ b/kernel/power/main.c @@ -80,18 +80,18 @@ int unregister_pm_notifier(struct notifier_block *nb) } EXPORT_SYMBOL_GPL(unregister_pm_notifier);
-int __pm_notifier_call_chain(unsigned long val, int nr_to_call, int *nr_calls) +int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down) { int ret;
- ret = __blocking_notifier_call_chain(&pm_chain_head, val, NULL, - nr_to_call, nr_calls); + ret = blocking_notifier_call_chain_robust(&pm_chain_head, val_up, val_down, NULL);
return notifier_to_errno(ret); } + int pm_notifier_call_chain(unsigned long val) { - return __pm_notifier_call_chain(val, -1, NULL); + return blocking_notifier_call_chain(&pm_chain_head, val, NULL); }
/* If set, devices may be suspended and resumed asynchronously. */ diff --git a/kernel/power/power.h b/kernel/power/power.h index 32fc89ac96c30..24f12d534515f 100644 --- a/kernel/power/power.h +++ b/kernel/power/power.h @@ -210,8 +210,7 @@ static inline void suspend_test_finish(const char *label) {}
#ifdef CONFIG_PM_SLEEP /* kernel/power/main.c */ -extern int __pm_notifier_call_chain(unsigned long val, int nr_to_call, - int *nr_calls); +extern int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down); extern int pm_notifier_call_chain(unsigned long val); #endif
diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c index 8b1bb5ee7e5d6..32391acc806bf 100644 --- a/kernel/power/suspend.c +++ b/kernel/power/suspend.c @@ -342,18 +342,16 @@ static int suspend_test(int level) */ static int suspend_prepare(suspend_state_t state) { - int error, nr_calls = 0; + int error;
if (!sleep_state_supported(state)) return -EPERM;
pm_prepare_console();
- error = __pm_notifier_call_chain(PM_SUSPEND_PREPARE, -1, &nr_calls); - if (error) { - nr_calls--; - goto Finish; - } + error = pm_notifier_call_chain_robust(PM_SUSPEND_PREPARE, PM_POST_SUSPEND); + if (error) + goto Restore;
trace_suspend_resume(TPS("freeze_processes"), 0, true); error = suspend_freeze_processes(); @@ -363,8 +361,8 @@ static int suspend_prepare(suspend_state_t state)
suspend_stats.failed_freeze++; dpm_save_failed_step(SUSPEND_FREEZE); - Finish: - __pm_notifier_call_chain(PM_POST_SUSPEND, nr_calls, NULL); + pm_notifier_call_chain(PM_POST_SUSPEND); + Restore: pm_restore_console(); return error; } diff --git a/kernel/power/user.c b/kernel/power/user.c index d5eedc2baa2a1..047f598f89a5c 100644 --- a/kernel/power/user.c +++ b/kernel/power/user.c @@ -46,7 +46,7 @@ int is_hibernate_resume_dev(const struct inode *bd_inode) static int snapshot_open(struct inode *inode, struct file *filp) { struct snapshot_data *data; - int error, nr_calls = 0; + int error;
if (!hibernation_available()) return -EPERM; @@ -73,9 +73,7 @@ static int snapshot_open(struct inode *inode, struct file *filp) swap_type_of(swsusp_resume_device, 0, NULL) : -1; data->mode = O_RDONLY; data->free_bitmaps = false; - error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls); - if (error) - __pm_notifier_call_chain(PM_POST_HIBERNATION, --nr_calls, NULL); + error = pm_notifier_call_chain_robust(PM_HIBERNATION_PREPARE, PM_POST_HIBERNATION); } else { /* * Resuming. We may need to wait for the image device to @@ -85,15 +83,11 @@ static int snapshot_open(struct inode *inode, struct file *filp)
data->swap = -1; data->mode = O_WRONLY; - error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls); + error = pm_notifier_call_chain_robust(PM_RESTORE_PREPARE, PM_POST_RESTORE); if (!error) { error = create_basic_memory_bitmaps(); data->free_bitmaps = !error; - } else - nr_calls--; - - if (error) - __pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL); + } } if (error) hibernate_release(); diff --git a/tools/power/pm-graph/sleepgraph.py b/tools/power/pm-graph/sleepgraph.py index 46ff97e909c6f..1bc36a1db14f6 100755 --- a/tools/power/pm-graph/sleepgraph.py +++ b/tools/power/pm-graph/sleepgraph.py @@ -171,7 +171,7 @@ class SystemValues: tracefuncs = { 'sys_sync': {}, 'ksys_sync': {}, - '__pm_notifier_call_chain': {}, + 'pm_notifier_call_chain_robust': {}, 'pm_prepare_console': {}, 'pm_notifier_call_chain': {}, 'freeze_processes': {},
From: Alexander Aring aahringo@redhat.com
[ Upstream commit 3d2825c8c6105b0f36f3ff72760799fa2e71420e ]
This patch fixes the following memory leak, detected by kmemleak when unmounting a gfs2 filesystem, which removed the last lockspace:
unreferenced object 0xffff9264f482f600 (size 192): comm "dlm_controld", pid 325, jiffies 4294690276 (age 48.136s) hex dump (first 32 bytes): 00 00 00 00 00 00 00 00 6e 6f 64 65 73 00 00 00 ........nodes... 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [<00000000060481d7>] make_space+0x41/0x130 [<000000008d905d46>] configfs_mkdir+0x1a2/0x5f0 [<00000000729502cf>] vfs_mkdir+0x155/0x210 [<000000000369bcf1>] do_mkdirat+0x6d/0x110 [<00000000cc478a33>] do_syscall_64+0x33/0x40 [<00000000ce9ccf01>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
The patch just remembers the "nodes" entry pointer in the space structure, as I think it is created as a subdirectory when the parent "spaces" is created. In drop_space() we would lose the pointer reference to nds because of configfs_remove_default_groups(). However, as this subdirectory is always available while "spaces" exists, it is simply freed when "spaces" is freed.
Signed-off-by: Alexander Aring aahringo@redhat.com Signed-off-by: David Teigland teigland@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/dlm/config.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/fs/dlm/config.c b/fs/dlm/config.c index 47f0b98b707f8..f33a7e4ae917b 100644 --- a/fs/dlm/config.c +++ b/fs/dlm/config.c @@ -221,6 +221,7 @@ struct dlm_space { struct list_head members; struct mutex members_lock; int members_count; + struct dlm_nodes *nds; };
struct dlm_comms { @@ -430,6 +431,7 @@ static struct config_group *make_space(struct config_group *g, const char *name) INIT_LIST_HEAD(&sp->members); mutex_init(&sp->members_lock); sp->members_count = 0; + sp->nds = nds; return &sp->group;
fail: @@ -451,6 +453,7 @@ static void drop_space(struct config_group *g, struct config_item *i) static void release_space(struct config_item *i) { struct dlm_space *sp = config_item_to_space(i); + kfree(sp->nds); kfree(sp); }
From: Rajendra Nayak rnayak@codeaurora.org
[ Upstream commit 98cd831088c64aa8fe7e1d2a8bb94b6faba0462b ]
After a successful pm_ops->core_get, an error in probe should exit by doing a pm_ops->core_put, which is currently missing. So fix it.
Signed-off-by: Rajendra Nayak rnayak@codeaurora.org Reviewed-by: Bjorn Andersson bjorn.andersson@linaro.org Signed-off-by: Stanimir Varbanov stanimir.varbanov@linaro.org Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/qcom/venus/core.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c index 203c6538044fb..bfcaba37d60fe 100644 --- a/drivers/media/platform/qcom/venus/core.c +++ b/drivers/media/platform/qcom/venus/core.c @@ -224,13 +224,15 @@ static int venus_probe(struct platform_device *pdev)
ret = dma_set_mask_and_coherent(dev, core->res->dma_mask); if (ret) - return ret; + goto err_core_put;
if (!dev->dma_parms) { dev->dma_parms = devm_kzalloc(dev, sizeof(*dev->dma_parms), GFP_KERNEL); - if (!dev->dma_parms) - return -ENOMEM; + if (!dev->dma_parms) { + ret = -ENOMEM; + goto err_core_put; + } } dma_set_max_seg_size(dev, DMA_BIT_MASK(32));
@@ -242,11 +244,11 @@ static int venus_probe(struct platform_device *pdev) IRQF_TRIGGER_HIGH | IRQF_ONESHOT, "venus", core); if (ret) - return ret; + goto err_core_put;
ret = hfi_create(core, &venus_core_ops); if (ret) - return ret; + goto err_core_put;
pm_runtime_enable(dev);
@@ -302,6 +304,9 @@ static int venus_probe(struct platform_device *pdev) pm_runtime_set_suspended(dev); pm_runtime_disable(dev); hfi_destroy(core); +err_core_put: + if (core->pm_ops->core_put) + core->pm_ops->core_put(dev); return ret; }
From: Dinghao Liu dinghao.liu@zju.edu.cn
[ Upstream commit bbe516e976fce538db96bd2b7287df942faa14a3 ]
pm_runtime_get_sync() increments the runtime PM usage counter even when it returns an error code. Thus a pairing decrement is needed on the error handling path to keep the counter balanced. For other error paths after this call, things are the same.
Fix this by adding pm_runtime_put_noidle() after the 'err_runtime_disable' label. But in that case, the error path after pm_runtime_put_sync() would decrease the PM usage counter twice. Thus, add an extra pm_runtime_get_noresume() in that path to keep the counter balanced.
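For reference, this is the generic shape of the idiom (a sketch only; the patch below reaches the same balance by putting the put_noidle() at the shared err_runtime_disable label instead of inline):

	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		/* get_sync() bumped the usage count even though it failed */
		pm_runtime_put_noidle(dev);
		return ret;
	}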
Signed-off-by: Dinghao Liu dinghao.liu@zju.edu.cn Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/media/platform/qcom/venus/core.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c index bfcaba37d60fe..321ad77cb6cf4 100644 --- a/drivers/media/platform/qcom/venus/core.c +++ b/drivers/media/platform/qcom/venus/core.c @@ -289,8 +289,10 @@ static int venus_probe(struct platform_device *pdev) goto err_core_deinit;
ret = pm_runtime_put_sync(dev); - if (ret) + if (ret) { + pm_runtime_get_noresume(dev); goto err_dev_unregister; + }
return 0;
@@ -301,6 +303,7 @@ static int venus_probe(struct platform_device *pdev) err_venus_shutdown: venus_shutdown(core); err_runtime_disable: + pm_runtime_put_noidle(dev); pm_runtime_set_suspended(dev); pm_runtime_disable(dev); hfi_destroy(core);
From: Mathieu Desnoyers mathieu.desnoyers@efficios.com
[ Upstream commit 272928d1cdacfc3b55f605cb0e9115832ecfb20c ]
As per RFC4443, the destination address field for ICMPv6 error messages is copied from the source address field of the invoking packet.
In configurations with Virtual Routing and Forwarding tables, looking up which routing table to use for sending ICMPv6 error messages is currently done by using the destination net_device.
If the source and destination interfaces are within separate VRFs, or one in the global routing table and the other in a VRF, looking up the source address of the invoking packet in the destination interface's routing table will fail if the destination interface's routing table contains no route to the invoking packet's source address.
One observable effect of this issue is that traceroute6 does not work in the following cases:
- Route leaking between global routing table and VRF - Route leaking between VRFs
Use the source device routing table when sending ICMPv6 error messages.
[ In the context of ipv4, it has been pointed out that a similar issue may exist with ICMP errors triggered when forwarding between network namespaces. It would be worthwhile to investigate whether ipv6 has similar issues, but is outside of the scope of this investigation. ]
[ Testing shows that similar issues exist with ipv6 unreachable / fragmentation needed messages. However, investigation of this additional failure mode is beyond this investigation's scope. ]
Link: https://tools.ietf.org/html/rfc4443 Signed-off-by: Mathieu Desnoyers mathieu.desnoyers@efficios.com Reviewed-by: David Ahern dsahern@gmail.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv6/icmp.c | 7 +++++-- net/ipv6/ip6_output.c | 2 -- 2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c index a4e4912ad607b..91209a2760aa5 100644 --- a/net/ipv6/icmp.c +++ b/net/ipv6/icmp.c @@ -501,8 +501,11 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info, if (__ipv6_addr_needs_scope_id(addr_type)) { iif = icmp6_iif(skb); } else { - dst = skb_dst(skb); - iif = l3mdev_master_ifindex(dst ? dst->dev : skb->dev); + /* + * The source device is used for looking up which routing table + * to use for sending an ICMP error. + */ + iif = l3mdev_master_ifindex(skb->dev); }
/* diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index c78e67d7747fb..cd623068de536 100644 --- a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -468,8 +468,6 @@ int ip6_forward(struct sk_buff *skb) * check and decrement ttl */ if (hdr->hop_limit <= 1) { - /* Force OUTPUT device used as source address */ - skb->dev = dst->dev; icmpv6_send(skb, ICMPV6_TIME_EXCEED, ICMPV6_EXC_HOPLIMIT, 0); __IP6_INC_STATS(net, idev, IPSTATS_MIB_INHDRERRORS);
On Sun, 18 Oct 2020 15:16:51 -0400 Sasha Levin wrote:
From: Mathieu Desnoyers mathieu.desnoyers@efficios.com
[ Upstream commit 272928d1cdacfc3b55f605cb0e9115832ecfb20c ]
As per RFC4443, the destination address field for ICMPv6 error messages is copied from the source address field of the invoking packet.
In configurations with Virtual Routing and Forwarding tables, looking up which routing table to use for sending ICMPv6 error messages is currently done by using the destination net_device.
This one got applied a few days ago, and the urgency is low so it may be worth letting it see at least one -rc release ;)
Maybe Mathieu & David feel differently.
On 10/18/20 1:40 PM, Jakub Kicinski wrote:
This one got applied a few days ago, and the urgency is low so it may be worth letting it see at least one -rc release ;)
agreed
On Sun, Oct 18, 2020 at 07:40:12PM -0600, David Ahern wrote:
On 10/18/20 1:40 PM, Jakub Kicinski wrote:
This one got applied a few days ago, and the urgency is low so it may be worth letting it see at least one -rc release ;)
agreed
Definitely - AUTOSEL patches get extra soaking time before getting queued up. This is more of a request to make sure it's not doing anything silly.
On Mon, 19 Oct 2020 07:52:36 -0400 Sasha Levin wrote:
On Sun, Oct 18, 2020 at 07:40:12PM -0600, David Ahern wrote:
On 10/18/20 1:40 PM, Jakub Kicinski wrote:
This one got applied a few days ago, and the urgency is low so it may be worth letting it see at least one -rc release ;)
agreed
Definitely - AUTOSEL patches get extra soaking time before getting queued up. This is more of a request to make sure it's not doing anything silly.
Could you put a number on "extra soaking time"?
I'm asking mostly out of curiosity :)
On Mon, Oct 19, 2020 at 08:33:27AM -0700, Jakub Kicinski wrote:
On Mon, 19 Oct 2020 07:52:36 -0400 Sasha Levin wrote:
On Sun, Oct 18, 2020 at 07:40:12PM -0600, David Ahern wrote:
On 10/18/20 1:40 PM, Jakub Kicinski wrote:
This one got applied a few days ago, and the urgency is low so it may be worth letting it see at least one -rc release ;)
agreed
Definitely - AUTOSEL patches get extra soaking time before getting queued up. This is more of a request to make sure it's not doing anything silly.
Could you put a number on "extra soaking time"?
I'm asking mostly out of curiosity :)
The AUTOSEL process adds at least another week into the flow, which means that the fastest this patch could go into a released kernel is about two weeks from now.
----- On Oct 18, 2020, at 9:40 PM, David Ahern dsahern@gmail.com wrote:
On 10/18/20 1:40 PM, Jakub Kicinski wrote:
This one got applied a few days ago, and the urgency is low so it may be worth letting it see at least one -rc release ;)
agreed
Likewise, I agree there is no need to hurry. Letting those patches live through a few -rc releases before picking them into stable is a wise course of action.
Thanks,
Mathieu
On Mon, Oct 19, 2020 at 09:19:33AM -0400, Mathieu Desnoyers wrote:
----- On Oct 18, 2020, at 9:40 PM, David Ahern dsahern@gmail.com wrote:
On 10/18/20 1:40 PM, Jakub Kicinski wrote:
This one got applied a few days ago, and the urgency is low so it may be worth letting it see at least one -rc release ;)
agreed
Likewise, I agree there is no need to hurry. Letting those patches live through a few -rc releases before picking them into stable is a wise course of action.
That's fair, and I've moved this patch into a different queue to make it go out later.
From: Rustam Kovhaev rkovhaev@gmail.com
[ Upstream commit 4f8c94022f0bc3babd0a124c0a7dcdd7547bd94e ]
The number of bytes allocated for an mft record should be equal to the mft record size stored in the ntfs superblock. As reported by syzbot, userspace might otherwise trigger an out-of-bounds read by dereferencing ctx->attr in ntfs_attr_find().
Reported-by: syzbot+aed06913f36eff9b544e@syzkaller.appspotmail.com Signed-off-by: Rustam Kovhaev rkovhaev@gmail.com Signed-off-by: Andrew Morton akpm@linux-foundation.org Tested-by: syzbot+aed06913f36eff9b544e@syzkaller.appspotmail.com Acked-by: Anton Altaparmakov anton@tuxera.com Link: https://syzkaller.appspot.com/bug?extid=aed06913f36eff9b544e Link: https://lkml.kernel.org/r/20200824022804.226242-1-rkovhaev@gmail.com Signed-off-by: Linus Torvalds torvalds@linux-foundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- fs/ntfs/inode.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c index 9bb9f0952b186..caf563981532b 100644 --- a/fs/ntfs/inode.c +++ b/fs/ntfs/inode.c @@ -1810,6 +1810,12 @@ int ntfs_read_inode_mount(struct inode *vi) brelse(bh); }
+ if (le32_to_cpu(m->bytes_allocated) != vol->mft_record_size) { + ntfs_error(sb, "Incorrect mft record size %u in superblock, should be %u.", + le32_to_cpu(m->bytes_allocated), vol->mft_record_size); + goto err_out; + } + /* Apply the mst fixups. */ if (post_read_mst_fixup((NTFS_RECORD*)m, vol->mft_record_size)) { /* FIXME: Try to use the $MFTMirr now. */
From: Cong Wang xiyou.wangcong@gmail.com
[ Upstream commit fdafed459998e2be0e877e6189b24cb7a0183224 ]
The GRE tunnel has its own header_ops, ipgre_header_ops, and sets it conditionally. When it is set, the driver assumes the outer IP header has already been created before ipgre_xmit().
This is not true when we send packets through a raw packet socket, where L2 headers are supposed to be constructed by the user. The packet socket code calls dev_validate_header() to validate the header, but the GRE tunnel does not set dev->hard_header_len, so that check is trivially bypassed and uninitialized memory can be passed down to ipgre_xmit(). The same applies to dev->needed_headroom.
dev->hard_header_len is supposed to be the length of the header created by dev->header_ops->create(), so it should be used whenever header_ops is set, and dev->needed_headroom should be used when it is not set.
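As a rough sketch of that rule applied to this driver (tunnel->hlen and the outer struct iphdr are assumed to be set up as in ip_gre.c; the actual hunks are below):

	if (dev->header_ops) {
		/* The L2 header is built by header_ops->create() and packet
		 * sockets must supply it in full, so advertise its length. */
		dev->hard_header_len = tunnel->hlen + sizeof(struct iphdr);
		dev->needed_headroom = 0;
	} else {
		/* The header is pushed by the tunnel on xmit; only reserve
		 * headroom for it. */
		dev->needed_headroom = tunnel->hlen + sizeof(struct iphdr);
	}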
Reported-and-tested-by: syzbot+4a2c52677a8a1aa283cb@syzkaller.appspotmail.com Cc: William Tu u9012063@gmail.com Acked-by: Willem de Bruijn willemb@google.com Signed-off-by: Cong Wang xiyou.wangcong@gmail.com Acked-by: Xie He xie.he.0141@gmail.com Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/ipv4/ip_gre.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c index 4e31f23e4117e..e70291748889b 100644 --- a/net/ipv4/ip_gre.c +++ b/net/ipv4/ip_gre.c @@ -625,9 +625,7 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb, }
if (dev->header_ops) { - /* Need space for new headers */ - if (skb_cow_head(skb, dev->needed_headroom - - (tunnel->hlen + sizeof(struct iphdr)))) + if (skb_cow_head(skb, 0)) goto free_skb;
tnl_params = (const struct iphdr *)skb->data; @@ -748,7 +746,11 @@ static void ipgre_link_update(struct net_device *dev, bool set_mtu) len = tunnel->tun_hlen - len; tunnel->hlen = tunnel->hlen + len;
- dev->needed_headroom = dev->needed_headroom + len; + if (dev->header_ops) + dev->hard_header_len += len; + else + dev->needed_headroom += len; + if (set_mtu) dev->mtu = max_t(int, dev->mtu - len, 68);
@@ -944,6 +946,7 @@ static void __gre_tunnel_init(struct net_device *dev) tunnel->parms.iph.protocol = IPPROTO_GRE;
tunnel->hlen = tunnel->tun_hlen + tunnel->encap_hlen; + dev->needed_headroom = tunnel->hlen + sizeof(tunnel->parms.iph);
dev->features |= GRE_FEATURES; dev->hw_features |= GRE_FEATURES; @@ -987,10 +990,14 @@ static int ipgre_tunnel_init(struct net_device *dev) return -EINVAL; dev->flags = IFF_BROADCAST; dev->header_ops = &ipgre_header_ops; + dev->hard_header_len = tunnel->hlen + sizeof(*iph); + dev->needed_headroom = 0; } #endif } else if (!tunnel->collect_md) { dev->header_ops = &ipgre_header_ops; + dev->hard_header_len = tunnel->hlen + sizeof(*iph); + dev->needed_headroom = 0; }
return ip_tunnel_init(dev);
From: Thomas Pedersen thomas@adapt-ip.com
[ Upstream commit 8b783d104e7f40684333d2ec155fac39219beb2f ]
Even though a driver or mac80211 shouldn't produce a legacy bitrate if sband->bitrates doesn't exist, don't crash if that is the case either.
This fixes a kernel panic if a station dump is run before last_rate can be updated with a data frame when sband->bitrates is missing (e.g. in S1G bands).
Signed-off-by: Thomas Pedersen thomas@adapt-ip.com Link: https://lore.kernel.org/r/20201005164522.18069-1-thomas@adapt-ip.com Signed-off-by: Johannes Berg johannes.berg@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- net/mac80211/cfg.c | 3 ++- net/mac80211/sta_info.c | 4 ++++ 2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c index 87fddd84c621e..82d516d117385 100644 --- a/net/mac80211/cfg.c +++ b/net/mac80211/cfg.c @@ -709,7 +709,8 @@ void sta_set_rate_info_tx(struct sta_info *sta, u16 brate;
sband = ieee80211_get_sband(sta->sdata); - if (sband) { + WARN_ON_ONCE(sband && !sband->bitrates); + if (sband && sband->bitrates) { brate = sband->bitrates[rate->idx].bitrate; rinfo->legacy = DIV_ROUND_UP(brate, 1 << shift); } diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c index f2840d1d95cfb..fb4f2b9b294f0 100644 --- a/net/mac80211/sta_info.c +++ b/net/mac80211/sta_info.c @@ -2122,6 +2122,10 @@ static void sta_stats_decode_rate(struct ieee80211_local *local, u32 rate, int rate_idx = STA_STATS_GET(LEGACY_IDX, rate);
sband = local->hw.wiphy->bands[band]; + + if (WARN_ON_ONCE(!sband->bitrates)) + break; + brate = sband->bitrates[rate_idx].bitrate; if (rinfo->bw == RATE_INFO_BW_5) shift = 2;
From: Jérôme Pouiller jerome.pouiller@silabs.com
[ Upstream commit 8d350c14ee5eb62ecd40b0991248bfbce511954d ]
As expected, when the device detects an MMIC error, it returns a specific status. However, it also strips the IV from the frame (don't ask me why).
So, with the current code, mac80211 detects a corrupted frame and drops it before it handles the MMIC error. The expected behavior would be to detect the MMIC error and then renegotiate the EAP session.
So, this patch correctly informs mac80211 that the IV is not available, and mac80211 then correctly takes the MMIC error into account.
Signed-off-by: Jérôme Pouiller jerome.pouiller@silabs.com Link: https://lore.kernel.org/r/20201007101943.749898-2-Jerome.Pouiller@silabs.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/staging/wfx/data_rx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/wfx/data_rx.c b/drivers/staging/wfx/data_rx.c index 6fb0788807426..81c37ec0f2834 100644 --- a/drivers/staging/wfx/data_rx.c +++ b/drivers/staging/wfx/data_rx.c @@ -41,7 +41,7 @@ void wfx_rx_cb(struct wfx_vif *wvif, memset(hdr, 0, sizeof(*hdr));
if (arg->status == HIF_STATUS_RX_FAIL_MIC) - hdr->flag |= RX_FLAG_MMIC_ERROR; + hdr->flag |= RX_FLAG_MMIC_ERROR | RX_FLAG_IV_STRIPPED; else if (arg->status) goto drop;
From: Hangbin Liu liuhangbin@gmail.com
[ Upstream commit a0f2b7acb4b1d29127ff99c714233b973afd1411 ]
Previously we forgot to close the map fd if bpf_map_update_elem() failed during map slot initialization, which would leak the map fd.
Let's move map slot initialization to a new function, init_map_slots(), to simplify the code, and close the map fd if slot initialization fails.
Reported-by: Andrii Nakryiko andrii.nakryiko@gmail.com Signed-off-by: Hangbin Liu liuhangbin@gmail.com Signed-off-by: Alexei Starovoitov ast@kernel.org Acked-by: Andrii Nakryiko andrii@kernel.org Link: https://lore.kernel.org/bpf/20201006021345.3817033-2-liuhangbin@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/lib/bpf/libbpf.c | 55 ++++++++++++++++++++++++++---------------- 1 file changed, 34 insertions(+), 21 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index e493d6048143f..d71d93c56363c 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -3841,6 +3841,36 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map) return 0; }
+static int init_map_slots(struct bpf_map *map) +{ + const struct bpf_map *targ_map; + unsigned int i; + int fd, err; + + for (i = 0; i < map->init_slots_sz; i++) { + if (!map->init_slots[i]) + continue; + + targ_map = map->init_slots[i]; + fd = bpf_map__fd(targ_map); + err = bpf_map_update_elem(map->fd, &i, &fd, 0); + if (err) { + err = -errno; + pr_warn("map '%s': failed to initialize slot [%d] to map '%s' fd=%d: %d\n", + map->name, i, targ_map->name, + fd, err); + return err; + } + pr_debug("map '%s': slot [%d] set to map '%s' fd=%d\n", + map->name, i, targ_map->name, fd); + } + + zfree(&map->init_slots); + map->init_slots_sz = 0; + + return 0; +} + static int bpf_object__create_maps(struct bpf_object *obj) { @@ -3883,28 +3913,11 @@ bpf_object__create_maps(struct bpf_object *obj) }
if (map->init_slots_sz) { - for (j = 0; j < map->init_slots_sz; j++) { - const struct bpf_map *targ_map; - int fd; - - if (!map->init_slots[j]) - continue; - - targ_map = map->init_slots[j]; - fd = bpf_map__fd(targ_map); - err = bpf_map_update_elem(map->fd, &j, &fd, 0); - if (err) { - err = -errno; - pr_warn("map '%s': failed to initialize slot [%d] to map '%s' fd=%d: %d\n", - map->name, j, targ_map->name, - fd, err); - goto err_out; - } - pr_debug("map '%s': slot [%d] set to map '%s' fd=%d\n", - map->name, j, targ_map->name, fd); + err = init_map_slots(map); + if (err < 0) { + zclose(map->fd); + goto err_out; } - zfree(&map->init_slots); - map->init_slots_sz = 0; }
if (map->pin_path && !map->pinned) {
From: Song Liu songliubraving@fb.com
[ Upstream commit 39d8f0d1026a990604770a658708f5845f7dbec0 ]
Recent improvements in LOCKDEP highlighted a potential A-A deadlock with pcpu_freelist in NMI:
./tools/testing/selftests/bpf/test_progs -t stacktrace_build_id_nmi
[ 18.984807] ================================ [ 18.984807] WARNING: inconsistent lock state [ 18.984808] 5.9.0-rc6-01771-g1466de1330e1 #2967 Not tainted [ 18.984809] -------------------------------- [ 18.984809] inconsistent {INITIAL USE} -> {IN-NMI} usage. [ 18.984810] test_progs/1990 [HC2[2]:SC0[0]:HE0:SE1] takes: [ 18.984810] ffffe8ffffc219c0 (&head->lock){....}-{2:2}, at: __pcpu_freelist_pop+0xe3/0x180 [ 18.984813] {INITIAL USE} state was registered at: [ 18.984814] lock_acquire+0x175/0x7c0 [ 18.984814] _raw_spin_lock+0x2c/0x40 [ 18.984815] __pcpu_freelist_pop+0xe3/0x180 [ 18.984815] pcpu_freelist_pop+0x31/0x40 [ 18.984816] htab_map_alloc+0xbbf/0xf40 [ 18.984816] __do_sys_bpf+0x5aa/0x3ed0 [ 18.984817] do_syscall_64+0x2d/0x40 [ 18.984818] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [ 18.984818] irq event stamp: 12 [...] [ 18.984822] other info that might help us debug this: [ 18.984823] Possible unsafe locking scenario: [ 18.984823] [ 18.984824] CPU0 [ 18.984824] ---- [ 18.984824] lock(&head->lock); [ 18.984826] <Interrupt> [ 18.984826] lock(&head->lock); [ 18.984827] [ 18.984828] *** DEADLOCK *** [ 18.984828] [ 18.984829] 2 locks held by test_progs/1990: [...] [ 18.984838] <NMI> [ 18.984838] dump_stack+0x9a/0xd0 [ 18.984839] lock_acquire+0x5c9/0x7c0 [ 18.984839] ? lock_release+0x6f0/0x6f0 [ 18.984840] ? __pcpu_freelist_pop+0xe3/0x180 [ 18.984840] _raw_spin_lock+0x2c/0x40 [ 18.984841] ? __pcpu_freelist_pop+0xe3/0x180 [ 18.984841] __pcpu_freelist_pop+0xe3/0x180 [ 18.984842] pcpu_freelist_pop+0x17/0x40 [ 18.984842] ? lock_release+0x6f0/0x6f0 [ 18.984843] __bpf_get_stackid+0x534/0xaf0 [ 18.984843] bpf_prog_1fd9e30e1438d3c5_oncpu+0x73/0x350 [ 18.984844] bpf_overflow_handler+0x12f/0x3f0
This is because pcpu_freelist_head.lock is accessed in both NMI and non-NMI context. Fix this issue by using raw_spin_trylock() in NMI.
Since an NMI interrupts non-NMI context, when the NMI context tries to take the raw_spinlock, the non-NMI context of the same CPU may already hold the lock and be blocked from releasing it. For a system with N CPUs, there could be N NMIs at the same time, and they may block N non-NMI raw_spinlocks. This is tricky for pcpu_freelist_push(), where unlike _pop(), a failing _push() means leaking memory. This issue is more likely to trigger on a non-SMP system.
Fix this issue with an extra list, pcpu_freelist.extralist. The extralist is primarily used to absorb _push() when raw_spin_trylock() fails on all the per-CPU lists. It should be empty most of the time. The following summarizes the behavior of pcpu_freelist in NMI and non-NMI:
non-NMI pop(): use _lock(); check per CPU lists first; if all per CPU lists are empty, check extralist; if extralist is empty, return NULL.
non-NMI push(): use _lock(); only push to per CPU lists.
NMI pop(): use _trylock(); check per CPU lists first; if all per CPU lists are locked or empty, check extralist; if extralist is locked or empty, return NULL.
NMI push(): use _trylock(); check per CPU lists first; if all per CPU lists are locked, try to push to the extralist; if the extralist is also locked, keep trying the per CPU lists.
Reported-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Song Liu songliubraving@fb.com Signed-off-by: Daniel Borkmann daniel@iogearbox.net Acked-by: Martin KaFai Lau kafai@fb.com Link: https://lore.kernel.org/bpf/20201005165838.3735218-1-songliubraving@fb.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/bpf/percpu_freelist.c | 101 +++++++++++++++++++++++++++++++++-- kernel/bpf/percpu_freelist.h | 1 + 2 files changed, 97 insertions(+), 5 deletions(-)
diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c index b367430e611c7..3d897de890612 100644 --- a/kernel/bpf/percpu_freelist.c +++ b/kernel/bpf/percpu_freelist.c @@ -17,6 +17,8 @@ int pcpu_freelist_init(struct pcpu_freelist *s) raw_spin_lock_init(&head->lock); head->first = NULL; } + raw_spin_lock_init(&s->extralist.lock); + s->extralist.first = NULL; return 0; }
@@ -40,12 +42,50 @@ static inline void ___pcpu_freelist_push(struct pcpu_freelist_head *head, raw_spin_unlock(&head->lock); }
+static inline bool pcpu_freelist_try_push_extra(struct pcpu_freelist *s, + struct pcpu_freelist_node *node) +{ + if (!raw_spin_trylock(&s->extralist.lock)) + return false; + + pcpu_freelist_push_node(&s->extralist, node); + raw_spin_unlock(&s->extralist.lock); + return true; +} + +static inline void ___pcpu_freelist_push_nmi(struct pcpu_freelist *s, + struct pcpu_freelist_node *node) +{ + int cpu, orig_cpu; + + orig_cpu = cpu = raw_smp_processor_id(); + while (1) { + struct pcpu_freelist_head *head; + + head = per_cpu_ptr(s->freelist, cpu); + if (raw_spin_trylock(&head->lock)) { + pcpu_freelist_push_node(head, node); + raw_spin_unlock(&head->lock); + return; + } + cpu = cpumask_next(cpu, cpu_possible_mask); + if (cpu >= nr_cpu_ids) + cpu = 0; + + /* cannot lock any per cpu lock, try extralist */ + if (cpu == orig_cpu && + pcpu_freelist_try_push_extra(s, node)) + return; + } +} + void __pcpu_freelist_push(struct pcpu_freelist *s, struct pcpu_freelist_node *node) { - struct pcpu_freelist_head *head = this_cpu_ptr(s->freelist); - - ___pcpu_freelist_push(head, node); + if (in_nmi()) + ___pcpu_freelist_push_nmi(s, node); + else + ___pcpu_freelist_push(this_cpu_ptr(s->freelist), node); }
void pcpu_freelist_push(struct pcpu_freelist *s, @@ -81,7 +121,7 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size, } }
-struct pcpu_freelist_node *__pcpu_freelist_pop(struct pcpu_freelist *s) +static struct pcpu_freelist_node *___pcpu_freelist_pop(struct pcpu_freelist *s) { struct pcpu_freelist_head *head; struct pcpu_freelist_node *node; @@ -102,8 +142,59 @@ struct pcpu_freelist_node *__pcpu_freelist_pop(struct pcpu_freelist *s) if (cpu >= nr_cpu_ids) cpu = 0; if (cpu == orig_cpu) - return NULL; + break; + } + + /* per cpu lists are all empty, try extralist */ + raw_spin_lock(&s->extralist.lock); + node = s->extralist.first; + if (node) + s->extralist.first = node->next; + raw_spin_unlock(&s->extralist.lock); + return node; +} + +static struct pcpu_freelist_node * +___pcpu_freelist_pop_nmi(struct pcpu_freelist *s) +{ + struct pcpu_freelist_head *head; + struct pcpu_freelist_node *node; + int orig_cpu, cpu; + + orig_cpu = cpu = raw_smp_processor_id(); + while (1) { + head = per_cpu_ptr(s->freelist, cpu); + if (raw_spin_trylock(&head->lock)) { + node = head->first; + if (node) { + head->first = node->next; + raw_spin_unlock(&head->lock); + return node; + } + raw_spin_unlock(&head->lock); + } + cpu = cpumask_next(cpu, cpu_possible_mask); + if (cpu >= nr_cpu_ids) + cpu = 0; + if (cpu == orig_cpu) + break; } + + /* cannot pop from per cpu lists, try extralist */ + if (!raw_spin_trylock(&s->extralist.lock)) + return NULL; + node = s->extralist.first; + if (node) + s->extralist.first = node->next; + raw_spin_unlock(&s->extralist.lock); + return node; +} + +struct pcpu_freelist_node *__pcpu_freelist_pop(struct pcpu_freelist *s) +{ + if (in_nmi()) + return ___pcpu_freelist_pop_nmi(s); + return ___pcpu_freelist_pop(s); }
struct pcpu_freelist_node *pcpu_freelist_pop(struct pcpu_freelist *s) diff --git a/kernel/bpf/percpu_freelist.h b/kernel/bpf/percpu_freelist.h index fbf8a8a289791..3c76553cfe571 100644 --- a/kernel/bpf/percpu_freelist.h +++ b/kernel/bpf/percpu_freelist.h @@ -13,6 +13,7 @@ struct pcpu_freelist_head {
struct pcpu_freelist { struct pcpu_freelist_head __percpu *freelist; + struct pcpu_freelist_head extralist; };
struct pcpu_freelist_node {
From: Christoph Hellwig hch@lst.de
[ Upstream commit 428805c0c5e76ef643b1fbc893edfb636b3d8aef ]
get_gendisk grabs a reference on the disk and the file operations, so this code leaks both of them while having absolutely no use for the gendisk itself.
This effectively reverts commit 2df83fa4bce421f ("PM / Hibernate: Use get_gendisk to verify partition if resume_file is integer format")
Signed-off-by: Christoph Hellwig hch@lst.de Signed-off-by: Rafael J. Wysocki rafael.j.wysocki@intel.com Signed-off-by: Sasha Levin sashal@kernel.org --- kernel/power/hibernate.c | 11 ----------- 1 file changed, 11 deletions(-)
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c index 1dee70815f3cd..2fc7d509a34fc 100644 --- a/kernel/power/hibernate.c +++ b/kernel/power/hibernate.c @@ -946,17 +946,6 @@ static int software_resume(void)
/* Check if the device is there */ swsusp_resume_device = name_to_dev_t(resume_file); - - /* - * name_to_dev_t is ineffective to verify parition if resume_file is in - * integer format. (e.g. major:minor) - */ - if (isdigit(resume_file[0]) && resume_wait) { - int partno; - while (!get_gendisk(swsusp_resume_device, &partno)) - msleep(10); - } - if (!swsusp_resume_device) { /* * Some device discovery might still be in progress; we need
From: Jing Xiangfeng jingxiangfeng@huawei.com
[ Upstream commit 055f15ab2cb4a5cbc4c0a775ef3d0066e0fa9b34 ]
Return PTR_ERR() from the error handling case instead of 0.
Link: https://lore.kernel.org/r/20200910123848.93649-1-jingxiangfeng@huawei.com Signed-off-by: Jing Xiangfeng jingxiangfeng@huawei.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/mvumi.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c index 8906aceda4c43..0354898d7cac1 100644 --- a/drivers/scsi/mvumi.c +++ b/drivers/scsi/mvumi.c @@ -2425,6 +2425,7 @@ static int mvumi_io_attach(struct mvumi_hba *mhba) if (IS_ERR(mhba->dm_thread)) { dev_err(&mhba->pdev->dev, "failed to create device scan thread\n"); + ret = PTR_ERR(mhba->dm_thread); mutex_unlock(&mhba->sas_discovery_mutex); goto fail_create_thread; }
From: Roman Bolshakov r.bolshakov@yadro.com
[ Upstream commit 7010645ba7256992818b518163f46bd4cdf8002a ]
trace-cmd report doesn't show events from the target subsystem because scsi_command_size() leaks through the event format string:
[target:target_sequencer_start] function scsi_command_size not defined [target:target_cmd_complete] function scsi_command_size not defined
Adding scsi_command_size() to plugin_scsi.c in trace-cmd doesn't help because an expression is used inside TP_printk(). The trace-cmd event parser doesn't understand a minus sign inside [ ]:
Error: expected ']' but read '-'
Rather than duplicating kernel code in plugin_scsi.c, provide a dedicated field for the CONTROL byte.
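For example, with the helper added below, the CONTROL byte of a fixed-length CDB is simply its last byte. A sketch (the 6-byte CDB here is a made-up TEST UNIT READY with the NACA bit set):

	unsigned char cdb[6] = { 0x00, 0, 0, 0, 0, 0x04 };
	unsigned char control;

	/* COMMAND_SIZE(0x00) == 6, so this returns cdb[5] == 0x04; a
	 * VARIABLE_LENGTH_CMD (0x7f) CDB would return cdb[1] instead. */
	control = scsi_command_control(cdb);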
Link: https://lore.kernel.org/r/20200929125957.83069-1-r.bolshakov@yadro.com Reviewed-by: Mike Christie michael.christie@oracle.com Signed-off-by: Roman Bolshakov r.bolshakov@yadro.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- include/scsi/scsi_common.h | 7 +++++++ include/trace/events/target.h | 12 ++++++------ 2 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/include/scsi/scsi_common.h b/include/scsi/scsi_common.h index 731ac09ed2313..5b567b43e1b16 100644 --- a/include/scsi/scsi_common.h +++ b/include/scsi/scsi_common.h @@ -25,6 +25,13 @@ scsi_command_size(const unsigned char *cmnd) scsi_varlen_cdb_length(cmnd) : COMMAND_SIZE(cmnd[0]); }
+static inline unsigned char +scsi_command_control(const unsigned char *cmnd) +{ + return (cmnd[0] == VARIABLE_LENGTH_CMD) ? + cmnd[1] : cmnd[COMMAND_SIZE(cmnd[0]) - 1]; +} + /* Returns a human-readable name for the device */ extern const char *scsi_device_type(unsigned type);
diff --git a/include/trace/events/target.h b/include/trace/events/target.h index 77408edd29d2a..67fad2677ed55 100644 --- a/include/trace/events/target.h +++ b/include/trace/events/target.h @@ -141,6 +141,7 @@ TRACE_EVENT(target_sequencer_start, __field( unsigned int, opcode ) __field( unsigned int, data_length ) __field( unsigned int, task_attribute ) + __field( unsigned char, control ) __array( unsigned char, cdb, TCM_MAX_COMMAND_SIZE ) __string( initiator, cmd->se_sess->se_node_acl->initiatorname ) ), @@ -151,6 +152,7 @@ TRACE_EVENT(target_sequencer_start, __entry->opcode = cmd->t_task_cdb[0]; __entry->data_length = cmd->data_length; __entry->task_attribute = cmd->sam_task_attr; + __entry->control = scsi_command_control(cmd->t_task_cdb); memcpy(__entry->cdb, cmd->t_task_cdb, TCM_MAX_COMMAND_SIZE); __assign_str(initiator, cmd->se_sess->se_node_acl->initiatorname); ), @@ -160,9 +162,7 @@ TRACE_EVENT(target_sequencer_start, __entry->tag, show_opcode_name(__entry->opcode), __entry->data_length, __print_hex(__entry->cdb, 16), show_task_attribute_name(__entry->task_attribute), - scsi_command_size(__entry->cdb) <= 16 ? - __entry->cdb[scsi_command_size(__entry->cdb) - 1] : - __entry->cdb[1] + __entry->control ) );
@@ -178,6 +178,7 @@ TRACE_EVENT(target_cmd_complete, __field( unsigned int, opcode ) __field( unsigned int, data_length ) __field( unsigned int, task_attribute ) + __field( unsigned char, control ) __field( unsigned char, scsi_status ) __field( unsigned char, sense_length ) __array( unsigned char, cdb, TCM_MAX_COMMAND_SIZE ) @@ -191,6 +192,7 @@ TRACE_EVENT(target_cmd_complete, __entry->opcode = cmd->t_task_cdb[0]; __entry->data_length = cmd->data_length; __entry->task_attribute = cmd->sam_task_attr; + __entry->control = scsi_command_control(cmd->t_task_cdb); __entry->scsi_status = cmd->scsi_status; __entry->sense_length = cmd->scsi_status == SAM_STAT_CHECK_CONDITION ? min(18, ((u8 *) cmd->sense_buffer)[SPC_ADD_SENSE_LEN_OFFSET] + 8) : 0; @@ -208,9 +210,7 @@ TRACE_EVENT(target_cmd_complete, show_opcode_name(__entry->opcode), __entry->data_length, __print_hex(__entry->cdb, 16), show_task_attribute_name(__entry->task_attribute), - scsi_command_size(__entry->cdb) <= 16 ? - __entry->cdb[scsi_command_size(__entry->cdb) - 1] : - __entry->cdb[1] + __entry->control ) );
From: Sherry Sun sherry.sun@nxp.com
[ Upstream commit 675f0ad4046946e80412896436164d172cd92238 ]
Reads and writes of I/O memory must be address-aligned on ARM. Switch to memcpy_toio() to avoid a kernel panic caused by unaligned accesses.
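For readability, the new copy loop from the hunk below, assembled in one place (the dev_err() in the error path is omitted): the user data is bounced through the driver's kernel buffer vvr->buf in VOP_INT_DMA_BUF_SIZE chunks, and the writes to device memory go through memcpy_toio() instead of copying straight into __iomem memory.

        while (len) {
                partlen = min_t(size_t, len, VOP_INT_DMA_BUF_SIZE);

                if (copy_from_user(vvr->buf, ubuf + offset, partlen)) {
                        err = -EFAULT;
                        goto err;
                }
                /* copy into device memory via the io accessors */
                memcpy_toio(dbuf + offset, vvr->buf, partlen);
                offset += partlen;
                vdev->out_bytes += partlen;
                len -= partlen;
        }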
Signed-off-by: Sherry Sun sherry.sun@nxp.com Signed-off-by: Joakim Zhang qiangqing.zhang@nxp.com Link: https://lore.kernel.org/r/20200929091106.24624-5-sherry.sun@nxp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/misc/mic/vop/vop_vringh.c | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/drivers/misc/mic/vop/vop_vringh.c b/drivers/misc/mic/vop/vop_vringh.c index 30eac172f0170..d069947b09345 100644 --- a/drivers/misc/mic/vop/vop_vringh.c +++ b/drivers/misc/mic/vop/vop_vringh.c @@ -602,6 +602,7 @@ static int vop_virtio_copy_from_user(struct vop_vdev *vdev, void __user *ubuf, size_t partlen; bool dma = VOP_USE_DMA && vi->dma_ch; int err = 0; + size_t offset = 0;
if (dma) { dma_alignment = 1 << vi->dma_ch->device->copy_align; @@ -655,13 +656,20 @@ static int vop_virtio_copy_from_user(struct vop_vdev *vdev, void __user *ubuf, * We are copying to IO below and should ideally use something * like copy_from_user_toio(..) if it existed. */ - if (copy_from_user((void __force *)dbuf, ubuf, len)) { - err = -EFAULT; - dev_err(vop_dev(vdev), "%s %d err %d\n", - __func__, __LINE__, err); - goto err; + while (len) { + partlen = min_t(size_t, len, VOP_INT_DMA_BUF_SIZE); + + if (copy_from_user(vvr->buf, ubuf + offset, partlen)) { + err = -EFAULT; + dev_err(vop_dev(vdev), "%s %d err %d\n", + __func__, __LINE__, err); + goto err; + } + memcpy_toio(dbuf + offset, vvr->buf, partlen); + offset += partlen; + vdev->out_bytes += partlen; + len -= partlen; } - vdev->out_bytes += len; err = 0; err: vpdev->hw_ops->unmap(vpdev, dbuf);
From: Sherry Sun sherry.sun@nxp.com
[ Upstream commit cc1a2679865a94b83804822996eed010a50a7c1d ]
struct _mic_vring_info is allocated together with the vring and placed immediately after it, so if vring_size() is not four-byte aligned, the start address of struct _mic_vring_info is not four-byte aligned either. For example, with 128 vring entries vring_size() is 5126 bytes, and the _mic_vring_info struct layout in DDR looks like: 0x90002400: 00000000 00390000 EE010000 0000C0FF Here 0x39 is the avail_idx member and 0xC0FFEE01 is the magic member.
When the EP uses ioread32(magic) to read the magic in the RC's shared memory, the cross-byte I/O read causes a kernel panic on the ARM64 platform. Reading the magic in user space with le32toh(vr0->info->magic) hits the same issue. So round the vring size up with round_up(x, 4); struct _mic_vring_info is then stored this way: 0x90002400: 00000000 00000000 00000039 C0FFEE01 which avoids the kernel panic when reading the magic in struct _mic_vring_info.
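A tiny user-space sketch of the rounding (illustrative only; ROUND_UP matches the kernel's round_up()/_ALIGN_UP() for power-of-two alignment), using the 128-entry size quoted above:

#include <stdio.h>

#define ROUND_UP(x, a)  (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
        unsigned int vr_size = 5126;    /* vring_size() for 128 entries, per the log above */

        /* 5126 is not 4-byte aligned, so _mic_vring_info would start misaligned */
        printf("unaligned info offset: %u\n", vr_size);
        /* rounding up to 4 bytes moves the struct to offset 5128 */
        printf("aligned info offset:   %u\n", ROUND_UP(vr_size, 4u));
        return 0;
}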
Signed-off-by: Sherry Sun sherry.sun@nxp.com Signed-off-by: Joakim Zhang qiangqing.zhang@nxp.com Link: https://lore.kernel.org/r/20200929091106.24624-4-sherry.sun@nxp.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/misc/mic/vop/vop_main.c | 2 +- drivers/misc/mic/vop/vop_vringh.c | 4 ++-- samples/mic/mpssd/mpssd.c | 4 ++-- 3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/misc/mic/vop/vop_main.c b/drivers/misc/mic/vop/vop_main.c index 55e7f21e51f44..6722c726b2590 100644 --- a/drivers/misc/mic/vop/vop_main.c +++ b/drivers/misc/mic/vop/vop_main.c @@ -320,7 +320,7 @@ static struct virtqueue *vop_find_vq(struct virtio_device *dev, /* First assign the vring's allocated in host memory */ vqconfig = _vop_vq_config(vdev->desc) + index; memcpy_fromio(&config, vqconfig, sizeof(config)); - _vr_size = vring_size(le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN); + _vr_size = round_up(vring_size(le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN), 4); vr_size = PAGE_ALIGN(_vr_size + sizeof(struct _mic_vring_info)); va = vpdev->hw_ops->remap(vpdev, le64_to_cpu(config.address), vr_size); if (!va) diff --git a/drivers/misc/mic/vop/vop_vringh.c b/drivers/misc/mic/vop/vop_vringh.c index d069947b09345..7014ffe88632e 100644 --- a/drivers/misc/mic/vop/vop_vringh.c +++ b/drivers/misc/mic/vop/vop_vringh.c @@ -296,7 +296,7 @@ static int vop_virtio_add_device(struct vop_vdev *vdev,
num = le16_to_cpu(vqconfig[i].num); mutex_init(&vvr->vr_mutex); - vr_size = PAGE_ALIGN(vring_size(num, MIC_VIRTIO_RING_ALIGN) + + vr_size = PAGE_ALIGN(round_up(vring_size(num, MIC_VIRTIO_RING_ALIGN), 4) + sizeof(struct _mic_vring_info)); vr->va = (void *) __get_free_pages(GFP_KERNEL | __GFP_ZERO, @@ -308,7 +308,7 @@ static int vop_virtio_add_device(struct vop_vdev *vdev, goto err; } vr->len = vr_size; - vr->info = vr->va + vring_size(num, MIC_VIRTIO_RING_ALIGN); + vr->info = vr->va + round_up(vring_size(num, MIC_VIRTIO_RING_ALIGN), 4); vr->info->magic = cpu_to_le32(MIC_MAGIC + vdev->virtio_id + i); vr_addr = dma_map_single(&vpdev->dev, vr->va, vr_size, DMA_BIDIRECTIONAL); diff --git a/samples/mic/mpssd/mpssd.c b/samples/mic/mpssd/mpssd.c index a11bf6c5b53b4..cd3f16a6f5caf 100644 --- a/samples/mic/mpssd/mpssd.c +++ b/samples/mic/mpssd/mpssd.c @@ -403,9 +403,9 @@ mic_virtio_copy(struct mic_info *mic, int fd,
static inline unsigned _vring_size(unsigned int num, unsigned long align) { - return ((sizeof(struct vring_desc) * num + sizeof(__u16) * (3 + num) + return _ALIGN_UP(((sizeof(struct vring_desc) * num + sizeof(__u16) * (3 + num) + align - 1) & ~(align - 1)) - + sizeof(__u16) * 3 + sizeof(struct vring_used_elem) * num; + + sizeof(__u16) * 3 + sizeof(struct vring_used_elem) * num, 4); }
/*
From: Yu Chen chenyu56@huawei.com
[ Upstream commit f580170f135af14e287560d94045624d4242d712 ]
SPLIT_BOUNDARY_DISABLE should be set for the DesignWare USB3 DRD core of the HiSilicon Kirin SoC when the dwc3 core acts as host.
[mchehab: dropped a dev_dbg() as only traces are now allowed on this driver]
Signed-off-by: Yu Chen chenyu56@huawei.com Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/dwc3/core.c | 25 +++++++++++++++++++++++++ drivers/usb/dwc3/core.h | 7 +++++++ 2 files changed, 32 insertions(+)
diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c index 2eb34c8b4065f..dcf2783ba0ba4 100644 --- a/drivers/usb/dwc3/core.c +++ b/drivers/usb/dwc3/core.c @@ -119,6 +119,7 @@ static void __dwc3_set_mode(struct work_struct *work) struct dwc3 *dwc = work_to_dwc(work); unsigned long flags; int ret; + u32 reg;
if (dwc->dr_mode != USB_DR_MODE_OTG) return; @@ -172,6 +173,11 @@ static void __dwc3_set_mode(struct work_struct *work) otg_set_vbus(dwc->usb2_phy->otg, true); phy_set_mode(dwc->usb2_generic_phy, PHY_MODE_USB_HOST); phy_set_mode(dwc->usb3_generic_phy, PHY_MODE_USB_HOST); + if (dwc->dis_split_quirk) { + reg = dwc3_readl(dwc->regs, DWC3_GUCTL3); + reg |= DWC3_GUCTL3_SPLITDISABLE; + dwc3_writel(dwc->regs, DWC3_GUCTL3, reg); + } } break; case DWC3_GCTL_PRTCAP_DEVICE: @@ -1356,6 +1362,9 @@ static void dwc3_get_properties(struct dwc3 *dwc) dwc->dis_metastability_quirk = device_property_read_bool(dev, "snps,dis_metastability_quirk");
+ dwc->dis_split_quirk = device_property_read_bool(dev, + "snps,dis-split-quirk"); + dwc->lpm_nyet_threshold = lpm_nyet_threshold; dwc->tx_de_emphasis = tx_de_emphasis;
@@ -1865,10 +1874,26 @@ static int dwc3_resume(struct device *dev)
return 0; } + +static void dwc3_complete(struct device *dev) +{ + struct dwc3 *dwc = dev_get_drvdata(dev); + u32 reg; + + if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST && + dwc->dis_split_quirk) { + reg = dwc3_readl(dwc->regs, DWC3_GUCTL3); + reg |= DWC3_GUCTL3_SPLITDISABLE; + dwc3_writel(dwc->regs, DWC3_GUCTL3, reg); + } +} +#else +#define dwc3_complete NULL #endif /* CONFIG_PM_SLEEP */
static const struct dev_pm_ops dwc3_dev_pm_ops = { SET_SYSTEM_SLEEP_PM_OPS(dwc3_suspend, dwc3_resume) + .complete = dwc3_complete, SET_RUNTIME_PM_OPS(dwc3_runtime_suspend, dwc3_runtime_resume, dwc3_runtime_idle) }; diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h index 2f04b3e42bf1c..ba0f743f35528 100644 --- a/drivers/usb/dwc3/core.h +++ b/drivers/usb/dwc3/core.h @@ -138,6 +138,7 @@ #define DWC3_GEVNTCOUNT(n) (0xc40c + ((n) * 0x10))
#define DWC3_GHWPARAMS8 0xc600 +#define DWC3_GUCTL3 0xc60c #define DWC3_GFLADJ 0xc630
/* Device Registers */ @@ -380,6 +381,9 @@ /* Global User Control Register 2 */ #define DWC3_GUCTL2_RST_ACTBITLATER BIT(14)
+/* Global User Control Register 3 */ +#define DWC3_GUCTL3_SPLITDISABLE BIT(14) + /* Device Configuration Register */ #define DWC3_DCFG_DEVADDR(addr) ((addr) << 3) #define DWC3_DCFG_DEVADDR_MASK DWC3_DCFG_DEVADDR(0x7f) @@ -1052,6 +1056,7 @@ struct dwc3_scratchpad_array { * 2 - No de-emphasis * 3 - Reserved * @dis_metastability_quirk: set to disable metastability quirk. + * @dis_split_quirk: set to disable split boundary. * @imod_interval: set the interrupt moderation interval in 250ns * increments or 0 to disable. */ @@ -1245,6 +1250,8 @@ struct dwc3 {
unsigned dis_metastability_quirk:1;
+ unsigned dis_split_quirk:1; + u16 imod_interval; };
From: Zqiang qiang.zhang@windriver.com
[ Upstream commit e8d5f92b8d30bb4ade76494490c3c065e12411b1 ]
Fix this by increasing the object's reference count.
BUG: KASAN: use-after-free in __lock_acquire+0x3fd4/0x4180 kernel/locking/lockdep.c:3831 Read of size 8 at addr ffff8880683b0018 by task syz-executor.0/3377
CPU: 1 PID: 3377 Comm: syz-executor.0 Not tainted 5.6.11 #1 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0xce/0x128 lib/dump_stack.c:118 print_address_description.constprop.4+0x21/0x3c0 mm/kasan/report.c:374 __kasan_report+0x131/0x1b0 mm/kasan/report.c:506 kasan_report+0x12/0x20 mm/kasan/common.c:641 __asan_report_load8_noabort+0x14/0x20 mm/kasan/generic_report.c:135 __lock_acquire+0x3fd4/0x4180 kernel/locking/lockdep.c:3831 lock_acquire+0x127/0x350 kernel/locking/lockdep.c:4488 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline] _raw_spin_lock_irqsave+0x35/0x50 kernel/locking/spinlock.c:159 printer_ioctl+0x4a/0x110 drivers/usb/gadget/function/f_printer.c:723 vfs_ioctl fs/ioctl.c:47 [inline] ksys_ioctl+0xfb/0x130 fs/ioctl.c:763 __do_sys_ioctl fs/ioctl.c:772 [inline] __se_sys_ioctl fs/ioctl.c:770 [inline] __x64_sys_ioctl+0x73/0xb0 fs/ioctl.c:770 do_syscall_64+0x9e/0x510 arch/x86/entry/common.c:294 entry_SYSCALL_64_after_hwframe+0x49/0xbe RIP: 0033:0x4531a9 Code: ed 60 fc ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 bb 60 fc ff c3 66 2e 0f 1f 84 00 00 00 00 RSP: 002b:00007fd14ad72c78 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 000000000073bfa8 RCX: 00000000004531a9 RDX: fffffffffffffff9 RSI: 000000000000009e RDI: 0000000000000003 RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 00000000004bbd61 R13: 00000000004d0a98 R14: 00007fd14ad736d4 R15: 00000000ffffffff
Allocated by task 2393: save_stack+0x21/0x90 mm/kasan/common.c:72 set_track mm/kasan/common.c:80 [inline] __kasan_kmalloc.constprop.3+0xa7/0xd0 mm/kasan/common.c:515 kasan_kmalloc+0x9/0x10 mm/kasan/common.c:529 kmem_cache_alloc_trace+0xfa/0x2d0 mm/slub.c:2813 kmalloc include/linux/slab.h:555 [inline] kzalloc include/linux/slab.h:669 [inline] gprinter_alloc+0xa1/0x870 drivers/usb/gadget/function/f_printer.c:1416 usb_get_function+0x58/0xc0 drivers/usb/gadget/functions.c:61 config_usb_cfg_link+0x1ed/0x3e0 drivers/usb/gadget/configfs.c:444 configfs_symlink+0x527/0x11d0 fs/configfs/symlink.c:202 vfs_symlink+0x33d/0x5b0 fs/namei.c:4201 do_symlinkat+0x11b/0x1d0 fs/namei.c:4228 __do_sys_symlinkat fs/namei.c:4242 [inline] __se_sys_symlinkat fs/namei.c:4239 [inline] __x64_sys_symlinkat+0x73/0xb0 fs/namei.c:4239 do_syscall_64+0x9e/0x510 arch/x86/entry/common.c:294 entry_SYSCALL_64_after_hwframe+0x49/0xbe
Freed by task 3368: save_stack+0x21/0x90 mm/kasan/common.c:72 set_track mm/kasan/common.c:80 [inline] kasan_set_free_info mm/kasan/common.c:337 [inline] __kasan_slab_free+0x135/0x190 mm/kasan/common.c:476 kasan_slab_free+0xe/0x10 mm/kasan/common.c:485 slab_free_hook mm/slub.c:1444 [inline] slab_free_freelist_hook mm/slub.c:1477 [inline] slab_free mm/slub.c:3034 [inline] kfree+0xf7/0x410 mm/slub.c:3995 gprinter_free+0x49/0xd0 drivers/usb/gadget/function/f_printer.c:1353 usb_put_function+0x38/0x50 drivers/usb/gadget/functions.c:87 config_usb_cfg_unlink+0x2db/0x3b0 drivers/usb/gadget/configfs.c:485 configfs_unlink+0x3b9/0x7f0 fs/configfs/symlink.c:250 vfs_unlink+0x287/0x570 fs/namei.c:4073 do_unlinkat+0x4f9/0x620 fs/namei.c:4137 __do_sys_unlink fs/namei.c:4184 [inline] __se_sys_unlink fs/namei.c:4182 [inline] __x64_sys_unlink+0x42/0x50 fs/namei.c:4182 do_syscall_64+0x9e/0x510 arch/x86/entry/common.c:294 entry_SYSCALL_64_after_hwframe+0x49/0xbe
The buggy address belongs to the object at ffff8880683b0000 which belongs to the cache kmalloc-1k of size 1024 The buggy address is located 24 bytes inside of 1024-byte region [ffff8880683b0000, ffff8880683b0400) The buggy address belongs to the page: page:ffffea0001a0ec00 refcount:1 mapcount:0 mapping:ffff88806c00e300 index:0xffff8880683b1800 compound_mapcount: 0 flags: 0x100000000010200(slab|head) raw: 0100000000010200 0000000000000000 0000000600000001 ffff88806c00e300 raw: ffff8880683b1800 000000008010000a 00000001ffffffff 0000000000000000 page dumped because: kasan: bad access detected
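Condensed from the diff below for quick reference, the resulting kref lifecycle of the printer_dev object (the release callback and the call sites are taken from the patch itself):

static void printer_dev_free(struct kref *kref)
{
        struct printer_dev *dev = container_of(kref, struct printer_dev, kref);

        kfree(dev);
}

/* gprinter_alloc(): kref_init(&dev->kref);                   initial reference  */
/* printer_open():   kref_get(&dev->kref);                    per-open reference */
/* printer_close():  kref_put(&dev->kref, printer_dev_free);                     */
/* gprinter_free():  kref_put(&dev->kref, printer_dev_free);  last put frees dev */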
Reported-by: Kyungtae Kim kt0755@gmail.com Signed-off-by: Zqiang qiang.zhang@windriver.com Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/gadget/function/f_printer.c | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/gadget/function/f_printer.c b/drivers/usb/gadget/function/f_printer.c index 68697f596066c..64a4112068fc8 100644 --- a/drivers/usb/gadget/function/f_printer.c +++ b/drivers/usb/gadget/function/f_printer.c @@ -31,6 +31,7 @@ #include <linux/types.h> #include <linux/ctype.h> #include <linux/cdev.h> +#include <linux/kref.h>
#include <asm/byteorder.h> #include <linux/io.h> @@ -64,7 +65,7 @@ struct printer_dev { struct usb_gadget *gadget; s8 interface; struct usb_ep *in_ep, *out_ep; - + struct kref kref; struct list_head rx_reqs; /* List of free RX structs */ struct list_head rx_reqs_active; /* List of Active RX xfers */ struct list_head rx_buffers; /* List of completed xfers */ @@ -218,6 +219,13 @@ static inline struct usb_endpoint_descriptor *ep_desc(struct usb_gadget *gadget,
/*-------------------------------------------------------------------------*/
+static void printer_dev_free(struct kref *kref) +{ + struct printer_dev *dev = container_of(kref, struct printer_dev, kref); + + kfree(dev); +} + static struct usb_request * printer_req_alloc(struct usb_ep *ep, unsigned len, gfp_t gfp_flags) { @@ -353,6 +361,7 @@ printer_open(struct inode *inode, struct file *fd)
spin_unlock_irqrestore(&dev->lock, flags);
+ kref_get(&dev->kref); DBG(dev, "printer_open returned %x\n", ret); return ret; } @@ -370,6 +379,7 @@ printer_close(struct inode *inode, struct file *fd) dev->printer_status &= ~PRINTER_SELECTED; spin_unlock_irqrestore(&dev->lock, flags);
+ kref_put(&dev->kref, printer_dev_free); DBG(dev, "printer_close\n");
return 0; @@ -1386,7 +1396,8 @@ static void gprinter_free(struct usb_function *f) struct f_printer_opts *opts;
opts = container_of(f->fi, struct f_printer_opts, func_inst); - kfree(dev); + + kref_put(&dev->kref, printer_dev_free); mutex_lock(&opts->lock); --opts->refcnt; mutex_unlock(&opts->lock); @@ -1455,6 +1466,7 @@ static struct usb_function *gprinter_alloc(struct usb_function_instance *fi) return ERR_PTR(-ENOMEM); }
+ kref_init(&dev->kref); ++opts->refcnt; dev->minor = opts->minor; dev->pnp_string = opts->pnp_string;
From: Kai-Heng Feng kai.heng.feng@canonical.com
[ Upstream commit 44492e70adc8086c42d3745d21d591657a427f04 ]
There are reports that the 8822CE fails to work with rtw88, producing a "failed to read DBI register" error. I also have a system with an 8723DE that freezes the whole system while rtw88 is probing the device.
According to [1], platform firmware may not properly power manage the device during shutdown. I did some experiments, and putting the device into D3 works around the issue.
So let's power cycle the device by putting it into D3 at shutdown to prevent the issue from happening.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=206411#c9
BugLink: https://bugs.launchpad.net/bugs/1872984 Signed-off-by: Kai-Heng Feng kai.heng.feng@canonical.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/20200928165508.20775-1-kai.heng.feng@canonical.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/realtek/rtw88/pci.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c index 3413973bc4750..7f1f5073b9f4d 100644 --- a/drivers/net/wireless/realtek/rtw88/pci.c +++ b/drivers/net/wireless/realtek/rtw88/pci.c @@ -1599,6 +1599,8 @@ void rtw_pci_shutdown(struct pci_dev *pdev)
if (chip->ops->shutdown) chip->ops->shutdown(rtwdev); + + pci_set_power_state(pdev, PCI_D3hot); } EXPORT_SYMBOL(rtw_pci_shutdown);
From: Jan Kara jack@suse.cz
[ Upstream commit 44ac6b829c4e173fdf6df18e6dd86aecf9a3dc99 ]
Although the UDF standard allows it, we don't support a sparing table larger than a single block. Check this during mount so that we don't try to access memory beyond the end of the buffer.
Reported-by: syzbot+9991561e714f597095da@syzkaller.appspotmail.com Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- fs/udf/super.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/fs/udf/super.c b/fs/udf/super.c index 1c42f544096d8..a03b8ce5ef0fd 100644 --- a/fs/udf/super.c +++ b/fs/udf/super.c @@ -1353,6 +1353,12 @@ static int udf_load_sparable_map(struct super_block *sb, (int)spm->numSparingTables); return -EIO; } + if (le32_to_cpu(spm->sizeSparingTable) > sb->s_blocksize) { + udf_err(sb, "error loading logical volume descriptor: " + "Too big sparing table size (%u)\n", + le32_to_cpu(spm->sizeSparingTable)); + return -EIO; + }
for (i = 0; i < spm->numSparingTables; i++) { loc = le32_to_cpu(spm->locSparingTable[i]);
From: Jan Kara jack@suse.cz
[ Upstream commit 044e2e26f214e5ab26af85faffd8d1e4ec066931 ]
When we fail to read an inode, some data accessed in udf_evict_inode() may be uninitialized. Move the accesses into the !is_bad_inode() branch.
Reported-by: syzbot+91f02b28f9bb5f5f1341@syzkaller.appspotmail.com Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- fs/udf/inode.c | 25 ++++++++++++++----------- 1 file changed, 14 insertions(+), 11 deletions(-)
diff --git a/fs/udf/inode.c b/fs/udf/inode.c index adaba8e8b326e..566118417e562 100644 --- a/fs/udf/inode.c +++ b/fs/udf/inode.c @@ -139,21 +139,24 @@ void udf_evict_inode(struct inode *inode) struct udf_inode_info *iinfo = UDF_I(inode); int want_delete = 0;
- if (!inode->i_nlink && !is_bad_inode(inode)) { - want_delete = 1; - udf_setsize(inode, 0); - udf_update_inode(inode, IS_SYNC(inode)); + if (!is_bad_inode(inode)) { + if (!inode->i_nlink) { + want_delete = 1; + udf_setsize(inode, 0); + udf_update_inode(inode, IS_SYNC(inode)); + } + if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB && + inode->i_size != iinfo->i_lenExtents) { + udf_warn(inode->i_sb, + "Inode %lu (mode %o) has inode size %llu different from extent length %llu. Filesystem need not be standards compliant.\n", + inode->i_ino, inode->i_mode, + (unsigned long long)inode->i_size, + (unsigned long long)iinfo->i_lenExtents); + } } truncate_inode_pages_final(&inode->i_data); invalidate_inode_buffers(inode); clear_inode(inode); - if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB && - inode->i_size != iinfo->i_lenExtents) { - udf_warn(inode->i_sb, "Inode %lu (mode %o) has inode size %llu different from extent length %llu. Filesystem need not be standards compliant.\n", - inode->i_ino, inode->i_mode, - (unsigned long long)inode->i_size, - (unsigned long long)iinfo->i_lenExtents); - } kfree(iinfo->i_ext.i_data); iinfo->i_ext.i_data = NULL; udf_clear_extent_cache(inode);
From: Tzu-En Huang tehuang@realtek.com
[ Upstream commit ee755732b7a16af018daa77d9562d2493fb7092f ]
The VHT capability MAX_MPDU_LENGTH is 11454 in rtw88; however, the rx buffer size for each packet is 8192. Receiving packets larger than the rx buffer size leads to rx buffer ring overflow.
Signed-off-by: Tzu-En Huang tehuang@realtek.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/20200925061219.23754-2-tehuang@realtek.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/realtek/rtw88/pci.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/wireless/realtek/rtw88/pci.h b/drivers/net/wireless/realtek/rtw88/pci.h index 024c2bc275cbe..ca17aa9cf7dc7 100644 --- a/drivers/net/wireless/realtek/rtw88/pci.h +++ b/drivers/net/wireless/realtek/rtw88/pci.h @@ -9,8 +9,8 @@ #define RTK_BEQ_TX_DESC_NUM 256
#define RTK_MAX_RX_DESC_NUM 512 -/* 8K + rx desc size */ -#define RTK_PCI_RX_BUF_SIZE (8192 + 24) +/* 11K + rx desc size */ +#define RTK_PCI_RX_BUF_SIZE (11454 + 24)
#define RTK_PCI_CTRL 0x300 #define BIT_RST_TRXDMA_INTF BIT(20)
From: Alan Maguire alan.maguire@oracle.com
[ Upstream commit eb58bbf2e5c7917aa30bf8818761f26bbeeb2290 ]
The bpf iter buffer size increase to PAGE_SIZE << 3 means the overflow tests that assumed a single page need to be bumped as well.
Signed-off-by: Alan Maguire alan.maguire@oracle.com Signed-off-by: Alexei Starovoitov ast@kernel.org Link: https://lore.kernel.org/bpf/1601292670-1616-7-git-send-email-alan.maguire@or... Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/bpf/prog_tests/bpf_iter.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c index 7375d9a6d2427..a8924cbc7509d 100644 --- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c +++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c @@ -331,7 +331,7 @@ static void test_overflow(bool test_e2big_overflow, bool ret1) struct bpf_map_info map_info = {}; struct bpf_iter_test_kern4 *skel; struct bpf_link *link; - __u32 page_size; + __u32 iter_size; char *buf;
skel = bpf_iter_test_kern4__open(); @@ -353,19 +353,19 @@ static void test_overflow(bool test_e2big_overflow, bool ret1) "map_creation failed: %s\n", strerror(errno))) goto free_map1;
- /* bpf_seq_printf kernel buffer is one page, so one map + /* bpf_seq_printf kernel buffer is 8 pages, so one map * bpf_seq_write will mostly fill it, and the other map * will partially fill and then trigger overflow and need * bpf_seq_read restart. */ - page_size = sysconf(_SC_PAGE_SIZE); + iter_size = sysconf(_SC_PAGE_SIZE) << 3;
if (test_e2big_overflow) { - skel->rodata->print_len = (page_size + 8) / 8; - expected_read_len = 2 * (page_size + 8); + skel->rodata->print_len = (iter_size + 8) / 8; + expected_read_len = 2 * (iter_size + 8); } else if (!ret1) { - skel->rodata->print_len = (page_size - 8) / 8; - expected_read_len = 2 * (page_size - 8); + skel->rodata->print_len = (iter_size - 8) / 8; + expected_read_len = 2 * (iter_size - 8); } else { skel->rodata->print_len = 1; expected_read_len = 2 * 8;
From: Johan Hovold johan@kernel.org
[ Upstream commit 960c7339de27c6d6fec13b54880501c3576bb08d ]
Handle broken union functional descriptors where the master-interface doesn't exist or where its class is of neither Communication or Data type (as required by the specification) by falling back to "combined-interface" probing.
Note that this still allows for handling union descriptors with switched interfaces.
This specifically makes the Whistler radio scanners TRX series devices work with the driver without adding further quirks to the device-id table.
Reported-by: Daniel Caujolle-Bert f1rmb.daniel@gmail.com Tested-by: Daniel Caujolle-Bert f1rmb.daniel@gmail.com Acked-by: Oliver Neukum oneukum@suse.com Signed-off-by: Johan Hovold johan@kernel.org Link: https://lore.kernel.org/r/20200921135951.24045-3-johan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/class/cdc-acm.c | 12 ++++++++++++ 1 file changed, 12 insertions(+)
diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c index 7f6f3ab5b8a67..c8fb85a71c3ad 100644 --- a/drivers/usb/class/cdc-acm.c +++ b/drivers/usb/class/cdc-acm.c @@ -1243,9 +1243,21 @@ static int acm_probe(struct usb_interface *intf, } } } else { + int class = -1; + data_intf_num = union_header->bSlaveInterface0; control_interface = usb_ifnum_to_if(usb_dev, union_header->bMasterInterface0); data_interface = usb_ifnum_to_if(usb_dev, data_intf_num); + + if (control_interface) + class = control_interface->cur_altsetting->desc.bInterfaceClass; + + if (class != USB_CLASS_COMM && class != USB_CLASS_CDC_DATA) { + dev_dbg(&intf->dev, "Broken union descriptor, assuming single interface\n"); + combined_interfaces = 1; + control_interface = data_interface = intf; + goto look_for_collapsed_interface; + } }
if (!control_interface || !data_interface) {
On Sun, Oct 18, 2020 at 03:17:10PM -0400, Sasha Levin wrote:
From: Johan Hovold johan@kernel.org
[ Upstream commit 960c7339de27c6d6fec13b54880501c3576bb08d ]
Handle broken union functional descriptors where the master-interface doesn't exist or where its class is of neither Communication or Data type (as required by the specification) by falling back to "combined-interface" probing.
Note that this still allows for handling union descriptors with switched interfaces.
This specifically makes the Whistler radio scanners TRX series devices work with the driver without adding further quirks to the device-id table.
Reported-by: Daniel Caujolle-Bert f1rmb.daniel@gmail.com Tested-by: Daniel Caujolle-Bert f1rmb.daniel@gmail.com Acked-by: Oliver Neukum oneukum@suse.com Signed-off-by: Johan Hovold johan@kernel.org Link: https://lore.kernel.org/r/20200921135951.24045-3-johan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org
I was surprised to see this picked up by AUTOSEL since I remember adding a stable tag to this patch (to v2, changed my mind since v1) -- and it's there in the lore link above.
Greg, just to make sure this wasn't due to a b4 bug; did you drop the stable tag on purpose when applying?
The tag-order has been reshuffled by b4 too it seems (I know, some people think that's ok) so maybe it fell out in the process.
Johan
On Mon, Oct 19, 2020 at 09:02:18AM +0200, Johan Hovold wrote:
On Sun, Oct 18, 2020 at 03:17:10PM -0400, Sasha Levin wrote:
From: Johan Hovold johan@kernel.org
[ Upstream commit 960c7339de27c6d6fec13b54880501c3576bb08d ]
Handle broken union functional descriptors where the master-interface doesn't exist or where its class is of neither Communication or Data type (as required by the specification) by falling back to "combined-interface" probing.
Note that this still allows for handling union descriptors with switched interfaces.
This specifically makes the Whistler radio scanners TRX series devices work with the driver without adding further quirks to the device-id table.
Reported-by: Daniel Caujolle-Bert f1rmb.daniel@gmail.com Tested-by: Daniel Caujolle-Bert f1rmb.daniel@gmail.com Acked-by: Oliver Neukum oneukum@suse.com Signed-off-by: Johan Hovold johan@kernel.org Link: https://lore.kernel.org/r/20200921135951.24045-3-johan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org
I was surprised to see this picked up by AUTOSEL since I remember adding a stable tag to this patch (to v2, changed my mind since v1) -- and it's there in the lore link above.
Greg, just to make sure this wasn't due to a b4 bug; did you drop the stable tag on purpose when applying?
The tag-order has been reshuffled by b4 too it seems (I know, some people think that's ok) so maybe it fell out in the process.
Hmm. Apparently the Link-tag that I had added to keep a record of the descriptors provided in the original report also fell out. Another b4 bug?
Adding Konstantin just in case.
Johan
On Mon, Oct 19, 2020 at 09:10:34AM +0200, Johan Hovold wrote:
Hmm. Apparently the Link-tag that I had added to keep a record of the descriptors provided in the original report also fell out. Another b4 bug?
Adding Konstantin just in case.
Correct, looks like this was broken in the devel branch -- the latest commit fixes it.
Thanks for the report.
-K
On Mon, Oct 19, 2020 at 09:02:18AM +0200, Johan Hovold wrote:
On Sun, Oct 18, 2020 at 03:17:10PM -0400, Sasha Levin wrote:
From: Johan Hovold johan@kernel.org
[ Upstream commit 960c7339de27c6d6fec13b54880501c3576bb08d ]
Handle broken union functional descriptors where the master-interface doesn't exist or where its class is of neither Communication or Data type (as required by the specification) by falling back to "combined-interface" probing.
Note that this still allows for handling union descriptors with switched interfaces.
This specifically makes the Whistler radio scanners TRX series devices work with the driver without adding further quirks to the device-id table.
Reported-by: Daniel Caujolle-Bert f1rmb.daniel@gmail.com Tested-by: Daniel Caujolle-Bert f1rmb.daniel@gmail.com Acked-by: Oliver Neukum oneukum@suse.com Signed-off-by: Johan Hovold johan@kernel.org Link: https://lore.kernel.org/r/20200921135951.24045-3-johan@kernel.org Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org
I was surprised to see this picked up by AUTOSEL since I remember adding a stable tag to this patch (to v2, changed my mind since v1) -- and it's there in the lore link above.
Greg, just to make sure this wasn't due to a b4 bug; did you drop the stable tag on purpose when applying?
I did not, looks like b4 stripped it off!
The tag-order has been reshuffled by b4 too it seems (I know, some people think that's ok) so maybe it fell out in the process.
I think so, let me verify it's a bug and report it to Konstantin...
thanks,
greg k-h
From: Felix Fietkau nbd@nbd.name
[ Upstream commit 38b04398c532e9bb9aa90fc07846ad0b0845fe94 ]
Fixes a race condition where multiple tx cleanup or sta poll tasks could run in parallel.
Signed-off-by: Felix Fietkau nbd@nbd.name Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/mediatek/mt76/mt7915/dma.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/dma.c b/drivers/net/wireless/mediatek/mt76/mt7915/dma.c index a8832c5e60041..8a1ae08d9572e 100644 --- a/drivers/net/wireless/mediatek/mt76/mt7915/dma.c +++ b/drivers/net/wireless/mediatek/mt76/mt7915/dma.c @@ -95,16 +95,13 @@ static int mt7915_poll_tx(struct napi_struct *napi, int budget) dev = container_of(napi, struct mt7915_dev, mt76.tx_napi);
mt7915_tx_cleanup(dev); - - if (napi_complete_done(napi, 0)) - mt7915_irq_enable(dev, MT_INT_TX_DONE_ALL); - - mt7915_tx_cleanup(dev); - mt7915_mac_sta_poll(dev);
tasklet_schedule(&dev->mt76.tx_tasklet);
+ if (napi_complete_done(napi, 0)) + mt7915_irq_enable(dev, MT_INT_TX_DONE_ALL); + return 0; }
From: Mauro Carvalho Chehab mchehab+huawei@kernel.org
[ Upstream commit b68d9251561f33661e53dd618f1cafe7ec9ec3c2 ]
This binding driver is needed for Hikey 970 to work, as otherwise a Serror is produced:
[ 1.837458] SError Interrupt on CPU0, code 0xbf000002 -- SError [ 1.837462] CPU: 0 PID: 74 Comm: kworker/0:1 Not tainted 5.8.0+ #205 [ 1.837463] Hardware name: HiKey970 (DT) [ 1.837465] Workqueue: events deferred_probe_work_func [ 1.837467] pstate: 20000005 (nzCv daif -PAN -UAO BTYPE=--) [ 1.837468] pc : _raw_spin_unlock_irqrestore+0x18/0x50 [ 1.837469] lr : regmap_unlock_spinlock+0x14/0x20 [ 1.837470] sp : ffff8000124dba60 [ 1.837471] x29: ffff8000124dba60 x28: 0000000000000000 [ 1.837474] x27: ffff0001b7e854c8 x26: ffff80001204ea18 [ 1.837476] x25: 0000000000000005 x24: ffff800011f918f8 [ 1.837479] x23: ffff800011fbb588 x22: ffff0001b7e40e00 [ 1.837481] x21: 0000000000000100 x20: 0000000000000000 [ 1.837483] x19: ffff0001b767ec00 x18: 00000000ff10c000 [ 1.837485] x17: 0000000000000002 x16: 0000b0740fdb9950 [ 1.837488] x15: ffff8000116c1198 x14: ffffffffffffffff [ 1.837490] x13: 0000000000000030 x12: 0101010101010101 [ 1.837493] x11: 0000000000000020 x10: ffff0001bf17d130 [ 1.837495] x9 : 0000000000000000 x8 : ffff0001b6938080 [ 1.837497] x7 : 0000000000000000 x6 : 000000000000003f [ 1.837500] x5 : 0000000000000000 x4 : 0000000000000000 [ 1.837502] x3 : ffff80001096a880 x2 : 0000000000000000 [ 1.837505] x1 : ffff0001b7e40e00 x0 : 0000000100000001 [ 1.837507] Kernel panic - not syncing: Asynchronous SError Interrupt [ 1.837509] CPU: 0 PID: 74 Comm: kworker/0:1 Not tainted 5.8.0+ #205 [ 1.837510] Hardware name: HiKey970 (DT) [ 1.837511] Workqueue: events deferred_probe_work_func [ 1.837513] Call trace: [ 1.837514] dump_backtrace+0x0/0x1e0 [ 1.837515] show_stack+0x18/0x24 [ 1.837516] dump_stack+0xc0/0x11c [ 1.837517] panic+0x15c/0x324 [ 1.837518] nmi_panic+0x8c/0x90 [ 1.837519] arm64_serror_panic+0x78/0x84 [ 1.837520] do_serror+0x158/0x15c [ 1.837521] el1_error+0x84/0x100 [ 1.837522] _raw_spin_unlock_irqrestore+0x18/0x50 [ 1.837523] regmap_write+0x58/0x80 [ 1.837524] hi3660_reset_deassert+0x28/0x34 [ 1.837526] reset_control_deassert+0x50/0x260 [ 1.837527] reset_control_deassert+0xf4/0x260 [ 1.837528] dwc3_probe+0x5dc/0xe6c [ 1.837529] platform_drv_probe+0x54/0xb0 [ 1.837530] really_probe+0xe0/0x490 [ 1.837531] driver_probe_device+0xf4/0x160 [ 1.837532] __device_attach_driver+0x8c/0x114 [ 1.837533] bus_for_each_drv+0x78/0xcc [ 1.837534] __device_attach+0x108/0x1a0 [ 1.837535] device_initial_probe+0x14/0x20 [ 1.837537] bus_probe_device+0x98/0xa0 [ 1.837538] deferred_probe_work_func+0x88/0xe0 [ 1.837539] process_one_work+0x1cc/0x350 [ 1.837540] worker_thread+0x2c0/0x470 [ 1.837541] kthread+0x154/0x160 [ 1.837542] ret_from_fork+0x10/0x30 [ 1.837569] SMP: stopping secondary CPUs [ 1.837570] Kernel Offset: 0x1d0000 from 0xffff800010000000 [ 1.837571] PHYS_OFFSET: 0x0 [ 1.837572] CPU features: 0x240002,20882004 [ 1.837573] Memory Limit: none
Signed-off-by: Mauro Carvalho Chehab mchehab+huawei@kernel.org Signed-off-by: Felipe Balbi balbi@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/dwc3/dwc3-of-simple.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/usb/dwc3/dwc3-of-simple.c b/drivers/usb/dwc3/dwc3-of-simple.c index 7df1150129354..2816e4a9813ad 100644 --- a/drivers/usb/dwc3/dwc3-of-simple.c +++ b/drivers/usb/dwc3/dwc3-of-simple.c @@ -176,6 +176,7 @@ static const struct of_device_id of_dwc3_simple_match[] = { { .compatible = "cavium,octeon-7130-usb-uctl" }, { .compatible = "sprd,sc9860-dwc3" }, { .compatible = "allwinner,sun50i-h6-dwc3" }, + { .compatible = "hisilicon,hi3670-dwc3" }, { /* Sentinel */ } }; MODULE_DEVICE_TABLE(of, of_dwc3_simple_match);
From: Oded Gabbay oded.gabbay@gmail.com
[ Upstream commit f763946aefe67b3ea58696b75a930ba1ed886a83 ]
When shifting a boolean variable by more than 31 bits and putting the result into a u64 variable, we need to cast the boolean into unsigned 64 bits to prevent possible overflow.
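A small stand-alone illustration (not from the patch) of why the cast matters: the result of the logical-not is an int, so without widening first the shift is performed in 32 bits, can never set bits above 31, and is undefined for shift counts >= 32.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        int is_eng_idle = 0;
        unsigned int engine_id = 40;    /* hypothetical engine index >= 32 */
        uint64_t mask = 0;

        /* Broken form: "mask |= !is_eng_idle << engine_id;" shifts an int.
         * Widen to 64 bits before shifting instead:
         */
        mask |= ((uint64_t)!is_eng_idle) << engine_id;

        printf("mask = 0x%llx\n", (unsigned long long)mask);
        return 0;
}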
Reported-by: kernel test robot lkp@intel.com Reported-by: Dan Carpenter dan.carpenter@oracle.com Signed-off-by: Oded Gabbay oded.gabbay@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/misc/habanalabs/gaudi/gaudi.c | 8 +++++--- drivers/misc/habanalabs/goya/goya.c | 8 +++++--- 2 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c index 4009b7df4cafe..2e55890ad6a61 100644 --- a/drivers/misc/habanalabs/gaudi/gaudi.c +++ b/drivers/misc/habanalabs/gaudi/gaudi.c @@ -6099,7 +6099,7 @@ static bool gaudi_is_device_idle(struct hl_device *hdev, u32 *mask, is_idle &= is_eng_idle;
if (mask) - *mask |= !is_eng_idle << + *mask |= ((u64) !is_eng_idle) << (GAUDI_ENGINE_ID_DMA_0 + dma_id); if (s) seq_printf(s, fmt, dma_id, @@ -6122,7 +6122,8 @@ static bool gaudi_is_device_idle(struct hl_device *hdev, u32 *mask, is_idle &= is_eng_idle;
if (mask) - *mask |= !is_eng_idle << (GAUDI_ENGINE_ID_TPC_0 + i); + *mask |= ((u64) !is_eng_idle) << + (GAUDI_ENGINE_ID_TPC_0 + i); if (s) seq_printf(s, fmt, i, is_eng_idle ? "Y" : "N", @@ -6150,7 +6151,8 @@ static bool gaudi_is_device_idle(struct hl_device *hdev, u32 *mask, is_idle &= is_eng_idle;
if (mask) - *mask |= !is_eng_idle << (GAUDI_ENGINE_ID_MME_0 + i); + *mask |= ((u64) !is_eng_idle) << + (GAUDI_ENGINE_ID_MME_0 + i); if (s) { if (!is_slave) seq_printf(s, fmt, i, diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c index 33cd2ae653d23..c09742f440f96 100644 --- a/drivers/misc/habanalabs/goya/goya.c +++ b/drivers/misc/habanalabs/goya/goya.c @@ -5166,7 +5166,8 @@ static bool goya_is_device_idle(struct hl_device *hdev, u32 *mask, is_idle &= is_eng_idle;
if (mask) - *mask |= !is_eng_idle << (GOYA_ENGINE_ID_DMA_0 + i); + *mask |= ((u64) !is_eng_idle) << + (GOYA_ENGINE_ID_DMA_0 + i); if (s) seq_printf(s, dma_fmt, i, is_eng_idle ? "Y" : "N", qm_glbl_sts0, dma_core_sts0); @@ -5189,7 +5190,8 @@ static bool goya_is_device_idle(struct hl_device *hdev, u32 *mask, is_idle &= is_eng_idle;
if (mask) - *mask |= !is_eng_idle << (GOYA_ENGINE_ID_TPC_0 + i); + *mask |= ((u64) !is_eng_idle) << + (GOYA_ENGINE_ID_TPC_0 + i); if (s) seq_printf(s, fmt, i, is_eng_idle ? "Y" : "N", qm_glbl_sts0, cmdq_glbl_sts0, tpc_cfg_sts); @@ -5209,7 +5211,7 @@ static bool goya_is_device_idle(struct hl_device *hdev, u32 *mask, is_idle &= is_eng_idle;
if (mask) - *mask |= !is_eng_idle << GOYA_ENGINE_ID_MME_0; + *mask |= ((u64) !is_eng_idle) << GOYA_ENGINE_ID_MME_0; if (s) { seq_printf(s, fmt, 0, is_eng_idle ? "Y" : "N", qm_glbl_sts0, cmdq_glbl_sts0, mme_arch_sts);
From: Joakim Zhang qiangqing.zhang@nxp.com
[ Upstream commit 9ad02c7f4f279504bdd38ab706fdc97d5f2b2a9c ]
This patch implements error handling and propagates the error value of flexcan_chip_stop(). In an upcoming patch this function will be called from flexcan_suspend() on some SoCs that support LPSR mode.
Add a new function flexcan_chip_stop_disable_on_error() that tries to disable the chip even in case of errors.
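The two wrappers, quoted from the hunk below for quick reference:

static inline int flexcan_chip_stop_disable_on_error(struct net_device *dev)
{
        return __flexcan_chip_stop(dev, true);  /* keep disabling on error */
}

static inline int flexcan_chip_stop(struct net_device *dev)
{
        return __flexcan_chip_stop(dev, false); /* return the first error */
}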
Signed-off-by: Joakim Zhang qiangqing.zhang@nxp.com [mkl: introduce flexcan_chip_stop_disable_on_error() and use it in flexcan_close()] Signed-off-by: Marc Kleine-Budde mkl@pengutronix.de Link: https://lore.kernel.org/r/20200922144429.2613631-11-mkl@pengutronix.de Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/can/flexcan.c | 34 ++++++++++++++++++++++++++++------ 1 file changed, 28 insertions(+), 6 deletions(-)
diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c index 94d10ec954a05..2ac7a667bde35 100644 --- a/drivers/net/can/flexcan.c +++ b/drivers/net/can/flexcan.c @@ -1260,18 +1260,23 @@ static int flexcan_chip_start(struct net_device *dev) return err; }
-/* flexcan_chip_stop +/* __flexcan_chip_stop * - * this functions is entered with clocks enabled + * this function is entered with clocks enabled */ -static void flexcan_chip_stop(struct net_device *dev) +static int __flexcan_chip_stop(struct net_device *dev, bool disable_on_error) { struct flexcan_priv *priv = netdev_priv(dev); struct flexcan_regs __iomem *regs = priv->regs; + int err;
/* freeze + disable module */ - flexcan_chip_freeze(priv); - flexcan_chip_disable(priv); + err = flexcan_chip_freeze(priv); + if (err && !disable_on_error) + return err; + err = flexcan_chip_disable(priv); + if (err && !disable_on_error) + goto out_chip_unfreeze;
/* Disable all interrupts */ priv->write(0, ®s->imask2); @@ -1281,6 +1286,23 @@ static void flexcan_chip_stop(struct net_device *dev)
flexcan_transceiver_disable(priv); priv->can.state = CAN_STATE_STOPPED; + + return 0; + + out_chip_unfreeze: + flexcan_chip_unfreeze(priv); + + return err; +} + +static inline int flexcan_chip_stop_disable_on_error(struct net_device *dev) +{ + return __flexcan_chip_stop(dev, true); +} + +static inline int flexcan_chip_stop(struct net_device *dev) +{ + return __flexcan_chip_stop(dev, false); }
static int flexcan_open(struct net_device *dev) @@ -1362,7 +1384,7 @@ static int flexcan_close(struct net_device *dev)
netif_stop_queue(dev); can_rx_offload_disable(&priv->offload); - flexcan_chip_stop(dev); + flexcan_chip_stop_disable_on_error(dev);
can_rx_offload_del(&priv->offload); free_irq(dev->irq, dev);
From: Mikael Wikström leakim.wikstrom@gmail.com
[ Upstream commit 140958da9ab53a7df9e9ccc7678ea64655279ac1 ]
One more device that needs commit 40d5bb87 to resolve a regression affecting the trackpoint and the three mouse buttons on the type cover of the Lenovo X1 Tablet Gen3.
It is probably also needed for the Lenovo X1 Tablet Gen2 with PID 0x60a3
Signed-off-by: Mikael Wikström leakim.wikstrom@gmail.com Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/hid-ids.h | 1 + drivers/hid/hid-multitouch.c | 6 ++++++ 2 files changed, 7 insertions(+)
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index 74fc1df6e3c27..6a6e2c1b60900 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -727,6 +727,7 @@ #define USB_DEVICE_ID_LENOVO_TP10UBKBD 0x6062 #define USB_DEVICE_ID_LENOVO_TPPRODOCK 0x6067 #define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085 +#define USB_DEVICE_ID_LENOVO_X1_TAB3 0x60b5 #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D 0x608d #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019 0x6019 #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E 0x602e diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c index e3152155c4b85..99f041afd5c0c 100644 --- a/drivers/hid/hid-multitouch.c +++ b/drivers/hid/hid-multitouch.c @@ -1973,6 +1973,12 @@ static const struct hid_device_id mt_devices[] = { HID_DEVICE(BUS_I2C, HID_GROUP_GENERIC, USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_7010) },
+ /* Lenovo X1 TAB Gen 3 */ + { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT, + HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8, + USB_VENDOR_ID_LENOVO, + USB_DEVICE_ID_LENOVO_X1_TAB3) }, + /* MosArt panels */ { .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE, MT_USB_DEVICE(USB_VENDOR_ID_ASUS,
From: Brooke Basile brookebasile@gmail.com
[ Upstream commit 03fb92a432ea5abe5909bca1455b7e44a9380480 ]
Calls to usb_kill_anchored_urbs() after usb_kill_urb() on multiprocessor systems create a race condition in which usb_kill_anchored_urbs() deallocates the URB before the completer callback is called in usb_kill_urb(), resulting in a use-after-free. To fix this, add proper lock protection to usb_kill_urb() calls that can possibly run concurrently with usb_kill_anchored_urbs().
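For readability, the reworked cleanup loop from the hunks below, assembled in one place; the key point is that usb_kill_urb() may sleep, so the tx_lock spinlock is dropped around it while an extra reference pins the URB against a concurrent usb_kill_anchored_urbs():

        spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
        list_for_each_entry_safe(tx_buf, tx_buf_tmp,
                                 &hif_dev->tx.tx_pending, list) {
                usb_get_urb(tx_buf->urb);
                spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
                usb_kill_urb(tx_buf->urb);
                list_del(&tx_buf->list);
                usb_free_urb(tx_buf->urb);
                kfree(tx_buf->buf);
                kfree(tx_buf);
                spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
        }
        spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);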
Reported-by: syzbot+89bd486af9427a9fc605@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=cabffad18eb74197f84871802fd2c5117b61feb... Signed-off-by: Brooke Basile brookebasile@gmail.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/20200911071427.32354-1-brookebasile@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/ath/ath9k/hif_usb.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c index 3f563e02d17da..2ed98aaed6fb5 100644 --- a/drivers/net/wireless/ath/ath9k/hif_usb.c +++ b/drivers/net/wireless/ath/ath9k/hif_usb.c @@ -449,10 +449,19 @@ static void hif_usb_stop(void *hif_handle) spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
/* The pending URBs have to be canceled. */ + spin_lock_irqsave(&hif_dev->tx.tx_lock, flags); list_for_each_entry_safe(tx_buf, tx_buf_tmp, &hif_dev->tx.tx_pending, list) { + usb_get_urb(tx_buf->urb); + spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags); usb_kill_urb(tx_buf->urb); + list_del(&tx_buf->list); + usb_free_urb(tx_buf->urb); + kfree(tx_buf->buf); + kfree(tx_buf); + spin_lock_irqsave(&hif_dev->tx.tx_lock, flags); } + spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
usb_kill_anchored_urbs(&hif_dev->mgmt_submitted); } @@ -762,27 +771,37 @@ static void ath9k_hif_usb_dealloc_tx_urbs(struct hif_device_usb *hif_dev) struct tx_buf *tx_buf = NULL, *tx_buf_tmp = NULL; unsigned long flags;
+ spin_lock_irqsave(&hif_dev->tx.tx_lock, flags); list_for_each_entry_safe(tx_buf, tx_buf_tmp, &hif_dev->tx.tx_buf, list) { + usb_get_urb(tx_buf->urb); + spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags); usb_kill_urb(tx_buf->urb); list_del(&tx_buf->list); usb_free_urb(tx_buf->urb); kfree(tx_buf->buf); kfree(tx_buf); + spin_lock_irqsave(&hif_dev->tx.tx_lock, flags); } + spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
spin_lock_irqsave(&hif_dev->tx.tx_lock, flags); hif_dev->tx.flags |= HIF_USB_TX_FLUSH; spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
+ spin_lock_irqsave(&hif_dev->tx.tx_lock, flags); list_for_each_entry_safe(tx_buf, tx_buf_tmp, &hif_dev->tx.tx_pending, list) { + usb_get_urb(tx_buf->urb); + spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags); usb_kill_urb(tx_buf->urb); list_del(&tx_buf->list); usb_free_urb(tx_buf->urb); kfree(tx_buf->buf); kfree(tx_buf); + spin_lock_irqsave(&hif_dev->tx.tx_lock, flags); } + spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
usb_kill_anchored_urbs(&hif_dev->mgmt_submitted); }
From: Neil Armstrong narmstrong@baylibre.com
[ Upstream commit afcd0c7d3d4c22afc8befcfc906db6ce3058d3ee ]
This adds the required GPU quirks, including the quirk in the PWR registers at the GPU reset time and the IOMMU quirk for shareability issues observed on G52 in Amlogic G12B SoCs.
Signed-off-by: Neil Armstrong narmstrong@baylibre.com Reviewed-by: Steven Price steven.price@arm.com Reviewed-by: Alyssa Rosenzweig alyssa.rosenzweig@collabora.com Signed-off-by: Steven Price steven.price@arm.com Link: https://patchwork.freedesktop.org/patch/msgid/20200916150147.25753-4-narmstr... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/panfrost/panfrost_drv.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index ada51df9a7a32..f6d5d03201fad 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -667,7 +667,18 @@ static const struct panfrost_compatible default_data = { .pm_domain_names = NULL, };
+static const struct panfrost_compatible amlogic_data = { + .num_supplies = ARRAY_SIZE(default_supplies), + .supply_names = default_supplies, + .vendor_quirk = panfrost_gpu_amlogic_quirk, +}; + static const struct of_device_id dt_match[] = { + /* Set first to probe before the generic compatibles */ + { .compatible = "amlogic,meson-gxm-mali", + .data = &amlogic_data, }, + { .compatible = "amlogic,meson-g12a-mali", + .data = &amlogic_data, }, { .compatible = "arm,mali-t604", .data = &default_data, }, { .compatible = "arm,mali-t624", .data = &default_data, }, { .compatible = "arm,mali-t628", .data = &default_data, },
From: Neil Armstrong narmstrong@baylibre.com
[ Upstream commit 110003002291525bb209f47e6dbf121a63249a97 ]
The T820, G31 & G52 GPUs integrated by Amlogic in the respective GXM, G12A/SM1 & G12B SoCs need a quirk in the PWR registers at GPU reset time.
Since Amlogic's integration of the GPU cores with the SoC is not publicly documented, we do not know what these values do, but they permit having a fully functional GPU running with Panfrost.
Signed-off-by: Neil Armstrong narmstrong@baylibre.com [Steven: Fix typo in commit log] Reviewed-by: Steven Price steven.price@arm.com Reviewed-by: Alyssa Rosenzweig alyssa.rosenzweig@collabora.com Signed-off-by: Steven Price steven.price@arm.com Link: https://patchwork.freedesktop.org/patch/msgid/20200916150147.25753-3-narmstr... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/panfrost/panfrost_gpu.c | 11 +++++++++++ drivers/gpu/drm/panfrost/panfrost_gpu.h | 2 ++ drivers/gpu/drm/panfrost/panfrost_regs.h | 4 ++++ 3 files changed, 17 insertions(+)
diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c index f2c1ddc41a9bf..6606643850fc6 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gpu.c +++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c @@ -75,6 +75,17 @@ int panfrost_gpu_soft_reset(struct panfrost_device *pfdev) return 0; }
+void panfrost_gpu_amlogic_quirk(struct panfrost_device *pfdev) +{ + /* + * The Amlogic integrated Mali-T820, Mali-G31 & Mali-G52 needs + * these undocumented bits in GPU_PWR_OVERRIDE1 to be set in order + * to operate correctly. + */ + gpu_write(pfdev, GPU_PWR_KEY, GPU_PWR_KEY_UNLOCK); + gpu_write(pfdev, GPU_PWR_OVERRIDE1, 0xfff | (0x20 << 16)); +} + static void panfrost_gpu_init_quirks(struct panfrost_device *pfdev) { u32 quirks = 0; diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.h b/drivers/gpu/drm/panfrost/panfrost_gpu.h index 4112412087b27..468c51e7e46db 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gpu.h +++ b/drivers/gpu/drm/panfrost/panfrost_gpu.h @@ -16,4 +16,6 @@ int panfrost_gpu_soft_reset(struct panfrost_device *pfdev); void panfrost_gpu_power_on(struct panfrost_device *pfdev); void panfrost_gpu_power_off(struct panfrost_device *pfdev);
+void panfrost_gpu_amlogic_quirk(struct panfrost_device *pfdev); + #endif diff --git a/drivers/gpu/drm/panfrost/panfrost_regs.h b/drivers/gpu/drm/panfrost/panfrost_regs.h index ea38ac60581c6..eddaa62ad8b0e 100644 --- a/drivers/gpu/drm/panfrost/panfrost_regs.h +++ b/drivers/gpu/drm/panfrost/panfrost_regs.h @@ -51,6 +51,10 @@ #define GPU_STATUS 0x34 #define GPU_STATUS_PRFCNT_ACTIVE BIT(2) #define GPU_LATEST_FLUSH_ID 0x38 +#define GPU_PWR_KEY 0x50 /* (WO) Power manager key register */ +#define GPU_PWR_KEY_UNLOCK 0x2968A819 +#define GPU_PWR_OVERRIDE0 0x54 /* (RW) Power manager override settings */ +#define GPU_PWR_OVERRIDE1 0x58 /* (RW) Power manager override settings */ #define GPU_FAULT_STATUS 0x3C #define GPU_FAULT_ADDRESS_LO 0x40 #define GPU_FAULT_ADDRESS_HI 0x44
From: Neil Armstrong narmstrong@baylibre.com
[ Upstream commit 91e89097b86f566636ea5a7329c79d5521be46d2 ]
The T820, G31 & G52 GPUs integrated by Amlogic in the respective GXM, G12A/SM1 & G12B SoCs need a quirk in the PWR registers after each reset.
This adds a callback in the device compatible struct to permit this.
Signed-off-by: Neil Armstrong narmstrong@baylibre.com [Steven: Fix typo in commit log] Reviewed-by: Steven Price steven.price@arm.com Reviewed-by: Alyssa Rosenzweig alyssa.rosenzweig@collabora.com Signed-off-by: Steven Price steven.price@arm.com Link: https://patchwork.freedesktop.org/patch/msgid/20200916150147.25753-2-narmstr... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/panfrost/panfrost_device.h | 3 +++ drivers/gpu/drm/panfrost/panfrost_gpu.c | 4 ++++ 2 files changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h index c30c719a80594..3c4a85213c15f 100644 --- a/drivers/gpu/drm/panfrost/panfrost_device.h +++ b/drivers/gpu/drm/panfrost/panfrost_device.h @@ -69,6 +69,9 @@ struct panfrost_compatible { int num_pm_domains; /* Only required if num_pm_domains > 1. */ const char * const *pm_domain_names; + + /* Vendor implementation quirks callback */ + void (*vendor_quirk)(struct panfrost_device *pfdev); };
struct panfrost_device { diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c index 6606643850fc6..502b1f1f7b204 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gpu.c +++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c @@ -146,6 +146,10 @@ static void panfrost_gpu_init_quirks(struct panfrost_device *pfdev)
if (quirks) gpu_write(pfdev, GPU_JM_CONFIG, quirks); + + /* Here goes platform specific quirks */ + if (pfdev->comp->vendor_quirk) + pfdev->comp->vendor_quirk(pfdev); }
#define MAX_HW_REVS 6
From: Maciej Fijalkowski maciej.fijalkowski@intel.com
[ Upstream commit 7f6e4312e15a5c370e84eaa685879b6bdcc717e4 ]
Protect against potential stack overflow that might happen when bpf2bpf calls get combined with tailcalls. Limit the caller's stack depth for such case down to 256 so that the worst case scenario would result in 8k stack size (32 which is tailcall limit * 256 = 8k).
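The guard, condensed from the hunk below; with the tail call limit of 32 and each caller frame capped at 256 bytes, the worst-case stack is bounded at 32 * 256 = 8k:

        if (idx && subprog[idx].has_tail_call && depth >= 256) {
                verbose(env,
                        "tail_calls are not allowed when call stack of previous frames is %d bytes. Too large\n",
                        depth);
                return -EACCES;
        }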
Suggested-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Maciej Fijalkowski maciej.fijalkowski@intel.com Signed-off-by: Alexei Starovoitov ast@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/bpf_verifier.h | 1 + kernel/bpf/verifier.c | 29 +++++++++++++++++++++++++++++ 2 files changed, 30 insertions(+)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 53c7bd568c5d4..5026b75db9725 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -358,6 +358,7 @@ struct bpf_subprog_info { u32 start; /* insn idx of function entry point */ u32 linfo_idx; /* The idx to the main_prog->aux->linfo */ u16 stack_depth; /* max. stack depth used by this function */ + bool has_tail_call; };
/* single container for all structs diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index fba52d9ec8fc4..cf9172f40ebcd 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -1489,6 +1489,10 @@ static int check_subprogs(struct bpf_verifier_env *env) for (i = 0; i < insn_cnt; i++) { u8 code = insn[i].code;
+ if (code == (BPF_JMP | BPF_CALL) && + insn[i].imm == BPF_FUNC_tail_call && + insn[i].src_reg != BPF_PSEUDO_CALL) + subprog[cur_subprog].has_tail_call = true; if (BPF_CLASS(code) != BPF_JMP && BPF_CLASS(code) != BPF_JMP32) goto next; if (BPF_OP(code) == BPF_EXIT || BPF_OP(code) == BPF_CALL) @@ -2974,6 +2978,31 @@ static int check_max_stack_depth(struct bpf_verifier_env *env) int ret_prog[MAX_CALL_FRAMES];
process_func: + /* protect against potential stack overflow that might happen when + * bpf2bpf calls get combined with tailcalls. Limit the caller's stack + * depth for such case down to 256 so that the worst case scenario + * would result in 8k stack size (32 which is tailcall limit * 256 = + * 8k). + * + * To get the idea what might happen, see an example: + * func1 -> sub rsp, 128 + * subfunc1 -> sub rsp, 256 + * tailcall1 -> add rsp, 256 + * func2 -> sub rsp, 192 (total stack size = 128 + 192 = 320) + * subfunc2 -> sub rsp, 64 + * subfunc22 -> sub rsp, 128 + * tailcall2 -> add rsp, 128 + * func3 -> sub rsp, 32 (total stack size 128 + 192 + 64 + 32 = 416) + * + * tailcall will unwind the current stack frame but it will not get rid + * of caller's stack as shown on the example above. + */ + if (idx && subprog[idx].has_tail_call && depth >= 256) { + verbose(env, + "tail_calls are not allowed when call stack of previous frames is %d bytes. Too large\n", + depth); + return -EACCES; + } /* round up to 32-bytes, since this is granularity * of interpreter stack size */
From: Thomas Tai thomas.tai@oracle.com
[ Upstream commit f959dcd6ddfd29235030e8026471ac1b022ad2b0 ]
When booting the kernel v5.9-rc4 on a VM, the kernel would panic when printing a warning message in swiotlb_map(). The dev->dma_mask must not be a NULL pointer when calling the dma mapping layer. A NULL pointer check can potentially avoid the panic.
Signed-off-by: Thomas Tai thomas.tai@oracle.com Reviewed-by: Konrad Rzeszutek Wilk konrad.wilk@oracle.com Signed-off-by: Christoph Hellwig hch@lst.de Signed-off-by: Sasha Levin sashal@kernel.org --- include/linux/dma-direct.h | 3 --- kernel/dma/mapping.c | 11 +++++++++++ 2 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h index 6e87225600ae3..064870844f06c 100644 --- a/include/linux/dma-direct.h +++ b/include/linux/dma-direct.h @@ -62,9 +62,6 @@ static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size, { dma_addr_t end = addr + size - 1;
- if (!dev->dma_mask) - return false; - if (is_ram && !IS_ENABLED(CONFIG_ARCH_DMA_ADDR_T_64BIT) && min(addr, end) < phys_to_dma(dev, PFN_PHYS(min_low_pfn))) return false; diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 0d129421e75fc..7133d5c6e1a6d 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -144,6 +144,10 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, dma_addr_t addr;
BUG_ON(!valid_dma_direction(dir)); + + if (WARN_ON_ONCE(!dev->dma_mask)) + return DMA_MAPPING_ERROR; + if (dma_map_direct(dev, ops)) addr = dma_direct_map_page(dev, page, offset, size, dir, attrs); else @@ -179,6 +183,10 @@ int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents, int ents;
BUG_ON(!valid_dma_direction(dir)); + + if (WARN_ON_ONCE(!dev->dma_mask)) + return 0; + if (dma_map_direct(dev, ops)) ents = dma_direct_map_sg(dev, sg, nents, dir, attrs); else @@ -213,6 +221,9 @@ dma_addr_t dma_map_resource(struct device *dev, phys_addr_t phys_addr,
BUG_ON(!valid_dma_direction(dir));
+ if (WARN_ON_ONCE(!dev->dma_mask)) + return DMA_MAPPING_ERROR; + /* Don't allow RAM to be mapped */ if (WARN_ON_ONCE(pfn_valid(PHYS_PFN(phys_addr)))) return DMA_MAPPING_ERROR;
From: Keita Suzuki keitasuzuki.park@sslab.ics.keio.ac.jp
[ Upstream commit bc28369c6189009b66d9619dd9f09bd8c684bb98 ]
When mfd_add_devices() fails, pcr->slots should also be freed. However, the current implementation does not free the member, leading to a memory leak.
Fix this by adding a new goto label that frees pcr->slots.
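For background, this follows the usual kernel error-unwind idiom: each acquisition gets a matching label, and a failure jumps to the label that releases everything acquired so far, in reverse order. The sketch below is a minimal, self-contained userspace illustration of the idiom with made-up names; it is not the driver's code.

    #include <stdlib.h>

    struct ctx {
            int *slots;
            int *irq_cookie;
    };

    static int register_children(void)
    {
            return 0; /* pretend the child devices registered fine */
    }

    static int setup(struct ctx *c)
    {
            c->slots = calloc(8, sizeof(*c->slots));
            if (!c->slots)
                    return -1;

            c->irq_cookie = calloc(1, sizeof(*c->irq_cookie));
            if (!c->irq_cookie)
                    goto free_slots;        /* undo the first allocation */

            if (register_children() < 0)
                    goto free_irq;          /* undo everything so far */

            return 0;

    free_irq:
            free(c->irq_cookie);
    free_slots:
            free(c->slots);
            return -1;
    }

    int main(void)
    {
            struct ctx c = { 0 };

            return setup(&c) ? 1 : 0;
    }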
Signed-off-by: Keita Suzuki keitasuzuki.park@sslab.ics.keio.ac.jp Link: https://lore.kernel.org/r/20200909071853.4053-1-keitasuzuki.park@sslab.ics.k... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/misc/cardreader/rtsx_pcr.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/misc/cardreader/rtsx_pcr.c b/drivers/misc/cardreader/rtsx_pcr.c index 37ccc67f4914b..f2b2805942f50 100644 --- a/drivers/misc/cardreader/rtsx_pcr.c +++ b/drivers/misc/cardreader/rtsx_pcr.c @@ -1562,12 +1562,14 @@ static int rtsx_pci_probe(struct pci_dev *pcidev, ret = mfd_add_devices(&pcidev->dev, pcr->id, rtsx_pcr_cells, ARRAY_SIZE(rtsx_pcr_cells), NULL, 0, NULL); if (ret < 0) - goto disable_irq; + goto free_slots;
schedule_delayed_work(&pcr->idle_work, msecs_to_jiffies(200));
return 0;
+free_slots: + kfree(pcr->slots); disable_irq: free_irq(pcr->irq, (void *)pcr); disable_msi:
From: Eric Biggers ebiggers@google.com
[ Upstream commit 8859bf2b1278d064a139e3031451524a49a56bd0 ]
unlock_new_inode() is only meant to be called after a new inode has already been inserted into the hash table. But reiserfs_new_inode() can call it even before it has inserted the inode, triggering the WARNING in unlock_new_inode(). Fix this by only calling unlock_new_inode() if the inode has the I_NEW flag set, indicating that it's in the table.
This addresses the syzbot report "WARNING in unlock_new_inode" (https://syzkaller.appspot.com/bug?extid=187510916eb6a14598f7).
Link: https://lore.kernel.org/r/20200628070057.820213-1-ebiggers@kernel.org Reported-by: syzbot+187510916eb6a14598f7@syzkaller.appspotmail.com Signed-off-by: Eric Biggers ebiggers@google.com Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- fs/reiserfs/inode.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c index 1509775da040a..e3af44c61524d 100644 --- a/fs/reiserfs/inode.c +++ b/fs/reiserfs/inode.c @@ -2163,7 +2163,8 @@ int reiserfs_new_inode(struct reiserfs_transaction_handle *th, out_inserted_sd: clear_nlink(inode); th->t_trans_id = 0; /* so the caller can't use this handle later */ - unlock_new_inode(inode); /* OK to do even if we hadn't locked it */ + if (inode->i_state & I_NEW) + unlock_new_inode(inode); iput(inode); return err; }
From: Viresh Kumar viresh.kumar@linaro.org
[ Upstream commit cb60e9602cce1593eb1e9cdc8ee562815078a354 ]
If dev_pm_opp_attach_genpd() is called multiple times (once for each CPU sharing the table), it would result in unwanted behavior such as memory leaks, attaching the domain multiple times, etc.
Handle that by checking and returning earlier if the domains are already attached. Now that dev_pm_opp_detach_genpd() can get called multiple times as well, we need to protect that too.
Note that the virtual device pointers aren't returned in this case, as they may become unavailable to some callers in the middle of the operation.
Reported-by: Stephan Gerhold stephan@gerhold.net Signed-off-by: Viresh Kumar viresh.kumar@linaro.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/opp/core.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/drivers/opp/core.c b/drivers/opp/core.c index 3ca7543142bf3..1a95ad40795be 100644 --- a/drivers/opp/core.c +++ b/drivers/opp/core.c @@ -1949,6 +1949,9 @@ static void _opp_detach_genpd(struct opp_table *opp_table) { int index;
+ if (!opp_table->genpd_virt_devs) + return; + for (index = 0; index < opp_table->required_opp_count; index++) { if (!opp_table->genpd_virt_devs[index]) continue; @@ -1995,6 +1998,9 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, if (!opp_table) return ERR_PTR(-ENOMEM);
+ if (opp_table->genpd_virt_devs) + return opp_table; + /* * If the genpd's OPP table isn't already initialized, parsing of the * required-opps fail for dev. We should retry this after genpd's OPP
From: "Darrick J. Wong" darrick.wong@oracle.com
[ Upstream commit 2a6ca4baed620303d414934aa1b7b0a8e7bab05f ]
There's an overflow bug in the realtime allocator. If the rt volume is large enough to handle a single allocation request that is larger than the maximum bmap extent length and the rt bitmap ends exactly on a bitmap block boundary, it's possible that the near allocator will try to check the freeness of a range that extends past the end of the bitmap. This fails with a corruption error and shuts down the fs.
Therefore, constrain maxlen so that the range scan cannot run off the end of the rt bitmap.
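For illustration, the clamp added below is plain integer arithmetic: the scan may cover at most the extents between the starting block and the end of the rt volume. A standalone sketch of the same calculation, with made-up numbers (not XFS code):

    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
            unsigned long long rextents = 1000;  /* rt extents in the volume */
            unsigned long long bno = 990;        /* candidate start extent   */
            unsigned long long maxlen = 64;      /* requested maximum length */

            /* same clamp as the patch: never scan past the end of the volume */
            maxlen = MIN(rextents, bno + maxlen) - bno;

            printf("clamped maxlen = %llu\n", maxlen);   /* prints 10 */
            return 0;
    }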
Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Reviewed-by: Christoph Hellwig hch@lst.de Signed-off-by: Sasha Levin sashal@kernel.org --- fs/xfs/xfs_rtalloc.c | 11 +++++++++++ 1 file changed, 11 insertions(+)
diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c index 6209e7b6b895b..86994d7f7cba3 100644 --- a/fs/xfs/xfs_rtalloc.c +++ b/fs/xfs/xfs_rtalloc.c @@ -247,6 +247,9 @@ xfs_rtallocate_extent_block( end = XFS_BLOCKTOBIT(mp, bbno + 1) - 1; i <= end; i++) { + /* Make sure we don't scan off the end of the rt volume. */ + maxlen = min(mp->m_sb.sb_rextents, i + maxlen) - i; + /* * See if there's a free extent of maxlen starting at i. * If it's not so then next will contain the first non-free. @@ -442,6 +445,14 @@ xfs_rtallocate_extent_near( */ if (bno >= mp->m_sb.sb_rextents) bno = mp->m_sb.sb_rextents - 1; + + /* Make sure we don't run off the end of the rt volume. */ + maxlen = min(mp->m_sb.sb_rextents, bno + maxlen) - bno; + if (maxlen < minlen) { + *rtblock = NULLRTBLOCK; + return 0; + } + /* * Try the exact allocation first. */
From: Hamish Martin hamish.martin@alliedtelesis.co.nz
[ Upstream commit b77d2a0a223bc139ee8904991b2922d215d02636 ]
Some integrated OHCI controller hubs do not expose all ports of the hub to pins on the SoC. In some cases the unconnected ports generate spurious over-current events. For example the Broadcom 56060/Ranger 2 SoC contains a nominally 3 port hub but only the first port is wired.
The default behaviour for the ohci-platform driver is to use global over-current protection mode (AKA "ganged"). This leads to the spurious over-current events affecting all ports in the hub.
We now alter the default to use per-port over-current protection.
This patch results in the following configuration changes depending on quirks:

- For quirk OHCI_QUIRK_SUPERIO no changes. These systems remain set up for ganged power switching and no over-current protection.

- For quirk OHCI_QUIRK_AMD756 or OHCI_QUIRK_HUB_POWER power switching remains at none, while over-current protection is now guaranteed to be set to per-port rather than the previous behaviour where it was either none or global over-current protection depending on the value at function entry.
Suggested-by: Alan Stern stern@rowland.harvard.edu Acked-by: Alan Stern stern@rowland.harvard.edu Signed-off-by: Hamish Martin hamish.martin@alliedtelesis.co.nz Link: https://lore.kernel.org/r/20200910212512.16670-1-hamish.martin@alliedtelesis... Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/host/ohci-hcd.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/drivers/usb/host/ohci-hcd.c b/drivers/usb/host/ohci-hcd.c index dd37e77dae001..2845ea328a064 100644 --- a/drivers/usb/host/ohci-hcd.c +++ b/drivers/usb/host/ohci-hcd.c @@ -673,20 +673,24 @@ static int ohci_run (struct ohci_hcd *ohci)
/* handle root hub init quirks ... */ val = roothub_a (ohci); - val &= ~(RH_A_PSM | RH_A_OCPM); + /* Configure for per-port over-current protection by default */ + val &= ~RH_A_NOCP; + val |= RH_A_OCPM; if (ohci->flags & OHCI_QUIRK_SUPERIO) { - /* NSC 87560 and maybe others */ + /* NSC 87560 and maybe others. + * Ganged power switching, no over-current protection. + */ val |= RH_A_NOCP; - val &= ~(RH_A_POTPGT | RH_A_NPS); - ohci_writel (ohci, val, &ohci->regs->roothub.a); + val &= ~(RH_A_POTPGT | RH_A_NPS | RH_A_PSM | RH_A_OCPM); } else if ((ohci->flags & OHCI_QUIRK_AMD756) || (ohci->flags & OHCI_QUIRK_HUB_POWER)) { /* hub power always on; required for AMD-756 and some - * Mac platforms. ganged overcurrent reporting, if any. + * Mac platforms. */ val |= RH_A_NPS; - ohci_writel (ohci, val, &ohci->regs->roothub.a); } + ohci_writel(ohci, val, &ohci->regs->roothub.a); + ohci_writel (ohci, RH_HS_LPSC, &ohci->regs->roothub.status); ohci_writel (ohci, (val & RH_A_NPS) ? 0 : RH_B_PPCM, &ohci->regs->roothub.b);
From: Jia Yang jiayang5@huawei.com
[ Upstream commit da62cb7230f0871c30dc9789071f63229158d261 ]
I got a use-after-free report when doing some fuzz testing:
If ttm_bo_init() fails, the "gbo" and "gbo->bo.base" will be freed by ttm_buffer_object_destroy() in ttm_bo_init(). But then drm_gem_vram_create() and drm_gem_vram_init() will free "gbo" and "gbo->bo.base" again.
BUG: KMSAN: use-after-free in drm_vma_offset_remove+0xb3/0x150 CPU: 0 PID: 24282 Comm: syz-executor.1 Tainted: G B W 5.7.0-rc4-msan #2 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014 Call Trace: __dump_stack dump_stack+0x1c9/0x220 kmsan_report+0xf7/0x1e0 __msan_warning+0x58/0xa0 drm_vma_offset_remove+0xb3/0x150 drm_gem_free_mmap_offset drm_gem_object_release+0x159/0x180 drm_gem_vram_init drm_gem_vram_create+0x7c5/0x990 drm_gem_vram_fill_create_dumb drm_gem_vram_driver_dumb_create+0x238/0x590 drm_mode_create_dumb drm_mode_create_dumb_ioctl+0x41d/0x450 drm_ioctl_kernel+0x5a4/0x710 drm_ioctl+0xc6f/0x1240 vfs_ioctl ksys_ioctl __do_sys_ioctl __se_sys_ioctl+0x2e9/0x410 __x64_sys_ioctl+0x4a/0x70 do_syscall_64+0xb8/0x160 entry_SYSCALL_64_after_hwframe+0x44/0xa9 RIP: 0033:0x4689b9 Code: fd e0 fa ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 cb e0 fa ff c3 66 2e 0f 1f 84 00 00 00 00 RSP: 002b:00007f368fa4dc98 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 000000000076bf00 RCX: 00000000004689b9 RDX: 0000000020000240 RSI: 00000000c02064b2 RDI: 0000000000000003 RBP: 0000000000000004 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 00000000004d17e0 R14: 00007f368fa4e6d4 R15: 000000000076bf0c
Uninit was created at: kmsan_save_stack_with_flags kmsan_internal_poison_shadow+0x66/0xd0 kmsan_slab_free+0x6e/0xb0 slab_free_freelist_hook slab_free kfree+0x571/0x30a0 drm_gem_vram_destroy ttm_buffer_object_destroy+0xc8/0x130 ttm_bo_release kref_put ttm_bo_put+0x117d/0x23e0 ttm_bo_init_reserved+0x11c0/0x11d0 ttm_bo_init+0x289/0x3f0 drm_gem_vram_init drm_gem_vram_create+0x775/0x990 drm_gem_vram_fill_create_dumb drm_gem_vram_driver_dumb_create+0x238/0x590 drm_mode_create_dumb drm_mode_create_dumb_ioctl+0x41d/0x450 drm_ioctl_kernel+0x5a4/0x710 drm_ioctl+0xc6f/0x1240 vfs_ioctl ksys_ioctl __do_sys_ioctl __se_sys_ioctl+0x2e9/0x410 __x64_sys_ioctl+0x4a/0x70 do_syscall_64+0xb8/0x160 entry_SYSCALL_64_after_hwframe+0x44/0xa9
If ttm_bo_init() fails, the "gbo" will be freed by ttm_buffer_object_destroy() in ttm_bo_init(). But then drm_gem_vram_create() and drm_gem_vram_init() will free "gbo" again.
Reported-by: Hulk Robot hulkci@huawei.com Reported-by: butt3rflyh4ck butterflyhuangxx@gmail.com Signed-off-by: Jia Yang jiayang5@huawei.com Signed-off-by: Thomas Zimmermann tzimmermann@suse.de Reviewed-by: Thomas Zimmermann tzimmermann@suse.de Link: https://patchwork.freedesktop.org/patch/msgid/20200714083238.28479-2-tzimmer... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/drm_gem_vram_helper.c | 28 +++++++++++++++------------ 1 file changed, 16 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 3296ed3df3580..8b65ca164bf4b 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -167,6 +167,10 @@ static void drm_gem_vram_placement(struct drm_gem_vram_object *gbo, } }
+/* + * Note that on error, drm_gem_vram_init will free the buffer object. + */ + static int drm_gem_vram_init(struct drm_device *dev, struct drm_gem_vram_object *gbo, size_t size, unsigned long pg_align) @@ -176,15 +180,19 @@ static int drm_gem_vram_init(struct drm_device *dev, int ret; size_t acc_size;
- if (WARN_ONCE(!vmm, "VRAM MM not initialized")) + if (WARN_ONCE(!vmm, "VRAM MM not initialized")) { + kfree(gbo); return -EINVAL; + } bdev = &vmm->bdev;
gbo->bo.base.funcs = &drm_gem_vram_object_funcs;
ret = drm_gem_object_init(dev, &gbo->bo.base, size); - if (ret) + if (ret) { + kfree(gbo); return ret; + }
acc_size = ttm_bo_dma_acc_size(bdev, size, sizeof(*gbo));
@@ -195,13 +203,13 @@ static int drm_gem_vram_init(struct drm_device *dev, &gbo->placement, pg_align, false, acc_size, NULL, NULL, ttm_buffer_object_destroy); if (ret) - goto err_drm_gem_object_release; + /* + * A failing ttm_bo_init will call ttm_buffer_object_destroy + * to release gbo->bo.base and kfree gbo. + */ + return ret;
return 0; - -err_drm_gem_object_release: - drm_gem_object_release(&gbo->bo.base); - return ret; }
/** @@ -235,13 +243,9 @@ struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev,
ret = drm_gem_vram_init(dev, gbo, size, pg_align); if (ret < 0) - goto err_kfree; + return ERR_PTR(ret);
return gbo; - -err_kfree: - kfree(gbo); - return ERR_PTR(ret); } EXPORT_SYMBOL(drm_gem_vram_create);
From: Abhishek Pandit-Subedi abhishekpandit@chromium.org
[ Upstream commit 20ae4089d0afeb24e9ceb026b996bfa55c983cc2 ]
Since l2cap_sock_teardown_cb doesn't acquire the channel lock before setting the socket as zapped, it could potentially race with l2cap_sock_release which frees the socket. Thus, wait until the cleanup is complete before marking the socket as zapped.
This race was reproduced on a JBL GO speaker after the remote device rejected L2CAP connection due to resource unavailability.
Here is a dmesg log with debug logs from a repro of this bug: [ 3465.424086] Bluetooth: hci_core.c:hci_acldata_packet() hci0 len 16 handle 0x0003 flags 0x0002 [ 3465.424090] Bluetooth: hci_conn.c:hci_conn_enter_active_mode() hcon 00000000cfedd07d mode 0 [ 3465.424094] Bluetooth: l2cap_core.c:l2cap_recv_acldata() conn 000000007eae8952 len 16 flags 0x2 [ 3465.424098] Bluetooth: l2cap_core.c:l2cap_recv_frame() len 12, cid 0x0001 [ 3465.424102] Bluetooth: l2cap_core.c:l2cap_raw_recv() conn 000000007eae8952 [ 3465.424175] Bluetooth: l2cap_core.c:l2cap_sig_channel() code 0x03 len 8 id 0x0c [ 3465.424180] Bluetooth: l2cap_core.c:l2cap_connect_create_rsp() dcid 0x0045 scid 0x0000 result 0x02 status 0x00 [ 3465.424189] Bluetooth: l2cap_core.c:l2cap_chan_put() chan 000000006acf9bff orig refcnt 4 [ 3465.424196] Bluetooth: l2cap_core.c:l2cap_chan_del() chan 000000006acf9bff, conn 000000007eae8952, err 111, state BT_CONNECT [ 3465.424203] Bluetooth: l2cap_sock.c:l2cap_sock_teardown_cb() chan 000000006acf9bff state BT_CONNECT [ 3465.424221] Bluetooth: l2cap_core.c:l2cap_chan_put() chan 000000006acf9bff orig refcnt 3 [ 3465.424226] Bluetooth: hci_core.h:hci_conn_drop() hcon 00000000cfedd07d orig refcnt 6 [ 3465.424234] BUG: spinlock bad magic on CPU#2, kworker/u17:0/159 [ 3465.425626] Bluetooth: hci_sock.c:hci_sock_sendmsg() sock 000000002bb0cb64 sk 00000000a7964053 [ 3465.430330] lock: 0xffffff804410aac0, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0 [ 3465.430332] Causing a watchdog bite!
Signed-off-by: Abhishek Pandit-Subedi abhishekpandit@chromium.org Reported-by: Balakrishna Godavarthi bgodavar@codeaurora.org Reviewed-by: Manish Mandlik mmandlik@chromium.org Signed-off-by: Marcel Holtmann marcel@holtmann.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/bluetooth/l2cap_sock.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c index e1a3e66b17540..e7cfe28140c39 100644 --- a/net/bluetooth/l2cap_sock.c +++ b/net/bluetooth/l2cap_sock.c @@ -1521,8 +1521,6 @@ static void l2cap_sock_teardown_cb(struct l2cap_chan *chan, int err)
parent = bt_sk(sk)->parent;
- sock_set_flag(sk, SOCK_ZAPPED); - switch (chan->state) { case BT_OPEN: case BT_BOUND: @@ -1549,8 +1547,11 @@ static void l2cap_sock_teardown_cb(struct l2cap_chan *chan, int err)
break; } - release_sock(sk); + + /* Only zap after cleanup to avoid use after free race */ + sock_set_flag(sk, SOCK_ZAPPED); + }
static void l2cap_sock_state_change_cb(struct l2cap_chan *chan, int state,
From: Zhenzhong Duan zhenzhong.duan@gmail.com
[ Upstream commit 08d3ab4b46339bc6f97e83b54a3fb4f8bf8f4cd9 ]
The code allocates an array of a6xx_gpu_state_obj structures rather than an array of pointers to them, so the element size passed to state_kcalloc() must be sizeof(*a6xx_state->indexed_regs), not sizeof(a6xx_state->indexed_regs).

This patch fixes it.
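For illustration, the difference between the two sizeof expressions is the classic pointer-size versus element-size mistake. The sketch below is a standalone userspace example with a stand-in struct (hypothetical names, not the a6xx code):

    #include <stdio.h>
    #include <stdlib.h>

    struct indexed_reg {            /* stand-in for a6xx_gpu_state_obj */
            unsigned int offset;
            unsigned long long *data;
    };

    int main(void)
    {
            struct indexed_reg *regs;
            size_t count = 16;

            /* Wrong: allocates count * sizeof(a pointer), e.g. 16 * 8 bytes. */
            regs = calloc(count, sizeof(regs));
            printf("per-element size used: %zu\n", sizeof(regs));
            free(regs);

            /* Right: allocates count * sizeof(one element). */
            regs = calloc(count, sizeof(*regs));
            printf("per-element size used: %zu\n", sizeof(*regs));
            free(regs);

            return 0;
    }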
Signed-off-by: Zhenzhong Duan zhenzhong.duan@gmail.com Signed-off-by: Rob Clark robdclark@chromium.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c index b12f5b4a1bea9..e9ede19193b0e 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c @@ -875,7 +875,7 @@ static void a6xx_get_indexed_registers(struct msm_gpu *gpu, int i;
a6xx_state->indexed_regs = state_kcalloc(a6xx_state, count, - sizeof(a6xx_state->indexed_regs)); + sizeof(*a6xx_state->indexed_regs)); if (!a6xx_state->indexed_regs) return;
From: Daniel Vetter daniel.vetter@ffwll.ch
[ Upstream commit 075342ea3d93044d68f821cf91c1a1a7d2fa569e ]
Gets rid of drmm_add_final_kfree, which I want to unexport so that it stops confusing people about this transitional state of rolling out drm managed memory.
This also fixes the missing drm_dev_put in the error path of the probe code.
v2: Drop the misplaced drm_dev_put from zynqmp_dpsub_drm_init (all other paths leaked on error; this should have been in zynqmp_dpsub_probe), now that it is subsumed by the auto-cleanup of devm_drm_dev_alloc.
Reviewed-by: Hyun Kwon hyun.kwon@xilinx.com Signed-off-by: Daniel Vetter daniel.vetter@intel.com Cc: Hyun Kwon hyun.kwon@xilinx.com Cc: Laurent Pinchart laurent.pinchart@ideasonboard.com Cc: Michal Simek michal.simek@xilinx.com Cc: linux-arm-kernel@lists.infradead.org Link: https://patchwork.freedesktop.org/patch/msgid/20200907082225.150837-1-daniel... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/xlnx/zynqmp_dpsub.c | 27 ++++++--------------------- 1 file changed, 6 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c index 26328c76305be..8e69303aad3f7 100644 --- a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c +++ b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c @@ -111,7 +111,7 @@ static int zynqmp_dpsub_drm_init(struct zynqmp_dpsub *dpsub) /* Initialize mode config, vblank and the KMS poll helper. */ ret = drmm_mode_config_init(drm); if (ret < 0) - goto err_dev_put; + return ret;
drm->mode_config.funcs = &zynqmp_dpsub_mode_config_funcs; drm->mode_config.min_width = 0; @@ -121,7 +121,7 @@ static int zynqmp_dpsub_drm_init(struct zynqmp_dpsub *dpsub)
ret = drm_vblank_init(drm, 1); if (ret) - goto err_dev_put; + return ret;
drm->irq_enabled = 1;
@@ -154,8 +154,6 @@ static int zynqmp_dpsub_drm_init(struct zynqmp_dpsub *dpsub)
err_poll_fini: drm_kms_helper_poll_fini(drm); -err_dev_put: - drm_dev_put(drm); return ret; }
@@ -208,27 +206,16 @@ static int zynqmp_dpsub_probe(struct platform_device *pdev) int ret;
/* Allocate private data. */ - dpsub = kzalloc(sizeof(*dpsub), GFP_KERNEL); - if (!dpsub) - return -ENOMEM; + dpsub = devm_drm_dev_alloc(&pdev->dev, &zynqmp_dpsub_drm_driver, + struct zynqmp_dpsub, drm); + if (IS_ERR(dpsub)) + return PTR_ERR(dpsub);
dpsub->dev = &pdev->dev; platform_set_drvdata(pdev, dpsub);
dma_set_mask(dpsub->dev, DMA_BIT_MASK(ZYNQMP_DISP_MAX_DMA_BIT));
- /* - * Initialize the DRM device early, as the DRM core mandates usage of - * the managed memory helpers tied to the DRM device. - */ - ret = drm_dev_init(&dpsub->drm, &zynqmp_dpsub_drm_driver, &pdev->dev); - if (ret < 0) { - kfree(dpsub); - return ret; - } - - drmm_add_final_kfree(&dpsub->drm, dpsub); - /* Try the reserved memory. Proceed if there's none. */ of_reserved_mem_device_init(&pdev->dev);
@@ -286,8 +273,6 @@ static int zynqmp_dpsub_remove(struct platform_device *pdev) clk_disable_unprepare(dpsub->apb_clk); of_reserved_mem_device_release(&pdev->dev);
- drm_dev_put(drm); - return 0; }
From: Qian Cai cai@lca.pw
[ Upstream commit a805c111650cdba6ee880f528abdd03c1af82089 ]
It is trivial for unprivileged users to trigger a WARN_ON_ONCE(1) in iomap_dio_actor(), which would taint the kernel, or worse, panic if panic_on_warn or panic_on_taint is set. Hence, just convert it to pr_warn_ratelimited() to let users know their workloads are racing. Thanks to Dave Chinner for the initial analysis of the racing reproducers.
Signed-off-by: Qian Cai cai@lca.pw Reviewed-by: Christoph Hellwig hch@lst.de Reviewed-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Darrick J. Wong darrick.wong@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- fs/iomap/direct-io.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c index c1aafb2ab9907..9519113ebc352 100644 --- a/fs/iomap/direct-io.c +++ b/fs/iomap/direct-io.c @@ -388,6 +388,16 @@ iomap_dio_actor(struct inode *inode, loff_t pos, loff_t length, return iomap_dio_bio_actor(inode, pos, length, dio, iomap); case IOMAP_INLINE: return iomap_dio_inline_actor(inode, pos, length, dio, iomap); + case IOMAP_DELALLOC: + /* + * DIO is not serialised against mmap() access at all, and so + * if the page_mkwrite occurs between the writeback and the + * iomap_apply() call in the DIO path, then it will see the + * DELALLOC block that the page-mkwrite allocated. + */ + pr_warn_ratelimited("Direct I/O collision with buffered writes! File: %pD4 Comm: %.20s\n", + dio->iocb->ki_filp, current->comm); + return -EIO; default: WARN_ON_ONCE(1); return -EIO;
From: Jing Xiangfeng jingxiangfeng@huawei.com
[ Upstream commit 5e48a084f4e824e1b624d3fd7ddcf53d2ba69e53 ]
Fix to return error code PTR_ERR() from the error handling case instead of 0.
Link: https://lore.kernel.org/r/20200907083949.154251-1-jingxiangfeng@huawei.com Acked-by: Tyrel Datwyler tyreld@linux.ibm.com Signed-off-by: Jing Xiangfeng jingxiangfeng@huawei.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/ibmvscsi/ibmvfc.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c index ea7c8930592dc..70daa0605082d 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.c +++ b/drivers/scsi/ibmvscsi/ibmvfc.c @@ -4928,6 +4928,7 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id) if (IS_ERR(vhost->work_thread)) { dev_err(dev, "Couldn't create kernel thread: %ld\n", PTR_ERR(vhost->work_thread)); + rc = PTR_ERR(vhost->work_thread); goto free_host_mem; }
From: Daniel Wagner dwagner@suse.de
[ Upstream commit c0014f94218ea3a312f6235febea0d626c5f2154 ]
Emit a warning when ->done or ->free is called on an already freed srb. There is a hidden use-after-free bug in the driver, originating from the cleanup callbacks, which corrupts the srb memory pool.

An extensive search didn't shed any light on the real problem. The initial fix was to set both pointers to NULL and try to catch invalid accesses, but with that change the memory corruption was gone and the driver didn't crash. Since not all calling places check for a NULL pointer, explicitly add default handlers. With this we work around the memory corruption and add a debugging aid.
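As an illustration of the idea, releasing an object back to a pool can point its callbacks at loud warning stubs, so that any later, buggy invocation warns instead of jumping through stale pointers. A simplified userspace analogue with hypothetical names (not the qla2xxx code):

    #include <stdio.h>

    struct srb {
            void (*done)(struct srb *sp, int res);
            void (*free)(struct srb *sp);
    };

    static void rel_done_warning(struct srb *sp, int res)
    {
            fprintf(stderr, "done() called on already freed srb %p\n", (void *)sp);
    }

    static void rel_free_warning(struct srb *sp)
    {
            fprintf(stderr, "free() called on already freed srb %p\n", (void *)sp);
    }

    static void release_srb(struct srb *sp)
    {
            /* point the callbacks at loud stubs before returning sp to the pool */
            sp->done = rel_done_warning;
            sp->free = rel_free_warning;
            /* ... return sp to the memory pool here ... */
    }

    int main(void)
    {
            struct srb sp = { 0 };

            release_srb(&sp);
            sp.done(&sp, 0);  /* a buggy late caller now warns instead of corrupting memory */
            return 0;
    }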
Link: https://lore.kernel.org/r/20200908081516.8561-2-dwagner@suse.de Reviewed-by: Martin Wilck mwilck@suse.com Reviewed-by: Arun Easi aeasi@marvell.com Signed-off-by: Daniel Wagner dwagner@suse.de Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/qla2xxx/qla_init.c | 10 ++++++++++ drivers/scsi/qla2xxx/qla_inline.h | 5 +++++ 2 files changed, 15 insertions(+)
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c index 0bd04a62af836..8d4b651e14422 100644 --- a/drivers/scsi/qla2xxx/qla_init.c +++ b/drivers/scsi/qla2xxx/qla_init.c @@ -63,6 +63,16 @@ void qla2x00_sp_free(srb_t *sp) qla2x00_rel_sp(sp); }
+void qla2xxx_rel_done_warning(srb_t *sp, int res) +{ + WARN_ONCE(1, "Calling done() of an already freed srb %p object\n", sp); +} + +void qla2xxx_rel_free_warning(srb_t *sp) +{ + WARN_ONCE(1, "Calling free() of an already freed srb %p object\n", sp); +} + /* Asynchronous Login/Logout Routines -------------------------------------- */
unsigned long diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h index 861dc522723ce..2aa6f81f87c43 100644 --- a/drivers/scsi/qla2xxx/qla_inline.h +++ b/drivers/scsi/qla2xxx/qla_inline.h @@ -207,10 +207,15 @@ qla2xxx_get_qpair_sp(scsi_qla_host_t *vha, struct qla_qpair *qpair, return sp; }
+void qla2xxx_rel_done_warning(srb_t *sp, int res); +void qla2xxx_rel_free_warning(srb_t *sp); + static inline void qla2xxx_rel_qpair_sp(struct qla_qpair *qpair, srb_t *sp) { sp->qpair = NULL; + sp->done = qla2xxx_rel_done_warning; + sp->free = qla2xxx_rel_free_warning; mempool_free(sp, qpair->srb_mempool); QLA_QPAIR_MARK_NOT_BUSY(qpair); }
From: Yonghong Song yhs@fb.com
[ Upstream commit 7fb5eefd76394cfefb380724a87ca40b47d44405 ]
Andrii reported that with the latest clang, when building selftests, we get errors like:

  error: progs/test_sysctl_loop1.c:23:16: in function sysctl_tcp_mem i32 (%struct.bpf_sysctl*): Looks like the BPF stack limit of 512 bytes is exceeded. Please move large on stack variables into BPF per-cpu array map.
The error is triggered by the following LLVM patch: https://reviews.llvm.org/D87134
For example, the following code is from test_sysctl_loop1.c:

  static __always_inline int is_tcp_mem(struct bpf_sysctl *ctx)
  {
          volatile char tcp_mem_name[] = "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string";
          ...
  }

Without the above LLVM patch, the compiler optimized the load of the string (59 bytes long) into 7 64-bit loads, 1 8-bit load and 1 16-bit load, occupying 64 bytes of stack.
With the above LLVM patch, the compiler only uses 8-bit loads, but the subregister is 32-bit. So the stack requirement becomes 4 * 59 = 236 bytes. Together with other stuff on the stack, the total stack size exceeds 512 bytes, hence the compiler complains and quits.
To fix the issue, removing the "volatile" keyword or changing "volatile" to "const"/"static const" does not work: the string is then put in the .rodata.str1.1 section, which libbpf does not process, and it errors out with:

  libbpf: elf: skipping unrecognized data section(6) .rodata.str1.1
  libbpf: prog 'sysctl_tcp_mem': bad map relo against '.L__const.is_tcp_mem.tcp_mem_name' in section '.rodata.str1.1'

Defining the string constant as a global variable fixes the issue, as it puts the string in the '.rodata' section, which is recognized by libbpf. In the future, when libbpf can process '.rodata.str*.*' properly, the global definition can be changed back to a local definition.
Defining tcp_mem_name as a global, however, triggered a verifier failure.

  ./test_progs -n 7/21
  libbpf: load bpf program failed: Permission denied
  libbpf: -- BEGIN DUMP LOG ---
  libbpf: invalid stack off=0 size=1
  verification time 6975 usec
  stack depth 160+64
  processed 889 insns (limit 1000000) max_states_per_insn 4 total_states 14 peak_states 14 mark_read 10
  libbpf: -- END LOG --
  libbpf: failed to load program 'sysctl_tcp_mem'
  libbpf: failed to load object 'test_sysctl_loop2.o'
  test_bpf_verif_scale:FAIL:114
  #7/21 test_sysctl_loop2.o:FAIL

This actually exposed a bpf program bug. In test_sysctl_loop{1,2}, we have code like:

  const char tcp_mem_name[] = "<...long string...>";
  ...
  char name[64];
  ...
  for (i = 0; i < sizeof(tcp_mem_name); ++i)
          if (name[i] != tcp_mem_name[i])
                  return 0;

In the above code, if sizeof(tcp_mem_name) > 64, the name[i] access may be out of bounds. sizeof(tcp_mem_name) is 59 for test_sysctl_loop1.c and 79 for test_sysctl_loop2.c.
Without the promotion-to-global change, the old compiler generates code where the overflowed stack access is actually filled with a valid value, thus hiding the bpf program bug. With the promotion-to-global change, the code is different; more specifically, the previous loading of constants onto the stack is gone, "name" occupies stack[-64:0], and the overflowing access triggers a verifier error. To fix the issue, adjust the "name" buffer size properly.
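For illustration, sizing the local buffer from the constant itself removes any possibility of the loop running past the buffer. A standalone userspace sketch of the fixed pattern (simplified; not the selftest itself):

    #include <stdio.h>
    #include <string.h>

    const char tcp_mem_name[] =
            "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string_to_stress_byte_loop";

    int main(void)
    {
            /* char name[64] would be too small: sizeof(tcp_mem_name) is 79
             * here (counting the terminating NUL), so name[i] would run off
             * the end of the buffer in the loop below.
             */
            char name[sizeof(tcp_mem_name)];

            memset(name, 0, sizeof(name));
            memcpy(name, tcp_mem_name, sizeof(tcp_mem_name));

            for (size_t i = 0; i < sizeof(tcp_mem_name); ++i)
                    if (name[i] != tcp_mem_name[i])
                            return 1;

            printf("compared %zu bytes safely\n", sizeof(tcp_mem_name));
            return 0;
    }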
Reported-by: Andrii Nakryiko andriin@fb.com Signed-off-by: Yonghong Song yhs@fb.com Signed-off-by: Alexei Starovoitov ast@kernel.org Acked-by: Andrii Nakryiko andriin@fb.com Link: https://lore.kernel.org/bpf/20200909171542.3673449-1-yhs@fb.com Signed-off-by: Sasha Levin sashal@kernel.org --- tools/testing/selftests/bpf/progs/test_sysctl_loop1.c | 4 ++-- tools/testing/selftests/bpf/progs/test_sysctl_loop2.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/test_sysctl_loop1.c b/tools/testing/selftests/bpf/progs/test_sysctl_loop1.c index 458b0d69133e4..553a282d816ab 100644 --- a/tools/testing/selftests/bpf/progs/test_sysctl_loop1.c +++ b/tools/testing/selftests/bpf/progs/test_sysctl_loop1.c @@ -18,11 +18,11 @@ #define MAX_ULONG_STR_LEN 7 #define MAX_VALUE_STR_LEN (TCP_MEM_LOOPS * MAX_ULONG_STR_LEN)
+const char tcp_mem_name[] = "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string"; static __always_inline int is_tcp_mem(struct bpf_sysctl *ctx) { - volatile char tcp_mem_name[] = "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string"; unsigned char i; - char name[64]; + char name[sizeof(tcp_mem_name)]; int ret;
memset(name, 0, sizeof(name)); diff --git a/tools/testing/selftests/bpf/progs/test_sysctl_loop2.c b/tools/testing/selftests/bpf/progs/test_sysctl_loop2.c index b2e6f9b0894d8..2b64bc563a12e 100644 --- a/tools/testing/selftests/bpf/progs/test_sysctl_loop2.c +++ b/tools/testing/selftests/bpf/progs/test_sysctl_loop2.c @@ -18,11 +18,11 @@ #define MAX_ULONG_STR_LEN 7 #define MAX_VALUE_STR_LEN (TCP_MEM_LOOPS * MAX_ULONG_STR_LEN)
+const char tcp_mem_name[] = "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string_to_stress_byte_loop"; static __attribute__((noinline)) int is_tcp_mem(struct bpf_sysctl *ctx) { - volatile char tcp_mem_name[] = "net/ipv4/tcp_mem/very_very_very_very_long_pointless_string_to_stress_byte_loop"; unsigned char i; - char name[64]; + char name[sizeof(tcp_mem_name)]; int ret;
memset(name, 0, sizeof(name));
From: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com
[ Upstream commit d2068da5c85697b5880483dd7beaba98e0b62e02 ]
In system suspend stress cases, the SOF CI reports timeouts. The root cause is that an alert is generated while the system suspends. The interrupt handling generates transactions on the bus that will never be handled because the interrupts are disabled in parallel.
As a result, the transaction never completes and times out on resume. This error doesn't seem too problematic since it happens in a work queue, and the system recovers without issues.
Nevertheless, this race condition should not happen. When doing a system suspend, or when disabling interrupts, we should make sure the current transaction can complete, and prevent new work from being queued.
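The shape of the fix is a common one: the interrupt handler only queues status work while a software flag says interrupts are logically enabled, and the disable path clears the flag and flushes any queued work before touching the hardware masks. A minimal kernel-style sketch of that pattern, with hypothetical names (not the cadence_master code):

    #include <linux/interrupt.h>
    #include <linux/workqueue.h>

    struct my_ctrl {
            struct work_struct work;
            bool interrupt_enabled;
    };

    static irqreturn_t my_irq_handler(int irq, void *dev_id)
    {
            struct my_ctrl *ctrl = dev_id;

            /* don't queue new status work if we are being disabled */
            if (ctrl->interrupt_enabled)
                    schedule_work(&ctrl->work);

            return IRQ_HANDLED;
    }

    static void my_disable_interrupts(struct my_ctrl *ctrl)
    {
            ctrl->interrupt_enabled = false;

            /* wait for, and discard, any status work already queued */
            cancel_work_sync(&ctrl->work);

            /* ... now it is safe to clear the hardware interrupt masks ... */
    }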
BugLink: https://github.com/thesofproject/linux/issues/2344 Signed-off-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Reviewed-by: Ranjani Sridharan ranjani.sridharan@linux.intel.com Reviewed-by: Rander Wang rander.wang@linux.intel.com Signed-off-by: Bard Liao yung-chuan.liao@linux.intel.com Acked-by: Jaroslav Kysela perex@perex.cz Link: https://lore.kernel.org/r/20200817222340.18042-1-yung-chuan.liao@linux.intel... Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soundwire/cadence_master.c | 24 +++++++++++++++++++++++- drivers/soundwire/cadence_master.h | 1 + 2 files changed, 24 insertions(+), 1 deletion(-)
diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c index 24eafe0aa1c3e..1330ffc475960 100644 --- a/drivers/soundwire/cadence_master.c +++ b/drivers/soundwire/cadence_master.c @@ -791,7 +791,16 @@ irqreturn_t sdw_cdns_irq(int irq, void *dev_id) CDNS_MCP_INT_SLAVE_MASK, 0);
int_status &= ~CDNS_MCP_INT_SLAVE_MASK; - schedule_work(&cdns->work); + + /* + * Deal with possible race condition between interrupt + * handling and disabling interrupts on suspend. + * + * If the master is in the process of disabling + * interrupts, don't schedule a workqueue + */ + if (cdns->interrupt_enabled) + schedule_work(&cdns->work); }
cdns_writel(cdns, CDNS_MCP_INTSTAT, int_status); @@ -924,6 +933,19 @@ int sdw_cdns_enable_interrupt(struct sdw_cdns *cdns, bool state) slave_state = cdns_readl(cdns, CDNS_MCP_SLAVE_INTSTAT1); cdns_writel(cdns, CDNS_MCP_SLAVE_INTSTAT1, slave_state); } + cdns->interrupt_enabled = state; + + /* + * Complete any on-going status updates before updating masks, + * and cancel queued status updates. + * + * There could be a race with a new interrupt thrown before + * the 3 mask updates below are complete, so in the interrupt + * we use the 'interrupt_enabled' status to prevent new work + * from being queued. + */ + if (!state) + cancel_work_sync(&cdns->work);
cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK0, slave_intmask0); cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK1, slave_intmask1); diff --git a/drivers/soundwire/cadence_master.h b/drivers/soundwire/cadence_master.h index 7638858397df9..15b0834030866 100644 --- a/drivers/soundwire/cadence_master.h +++ b/drivers/soundwire/cadence_master.h @@ -129,6 +129,7 @@ struct sdw_cdns {
bool link_up; unsigned int msg_count; + bool interrupt_enabled;
struct work_struct work;
From: Keita Suzuki keitasuzuki.park@sslab.ics.keio.ac.jp
[ Upstream commit f4443293d741d1776b86ed1dd8c4e4285d0775fc ]
When wlc_phy_txpwr_srom_read_lcnphy fails in wlc_phy_attach_lcnphy, the allocated pi->u.pi_lcnphy is leaked, since struct brcms_phy will be freed in the caller function.
Fix this by calling wlc_phy_detach_lcnphy in the error handler of wlc_phy_txpwr_srom_read_lcnphy before returning.
Signed-off-by: Keita Suzuki keitasuzuki.park@sslab.ics.keio.ac.jp Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/20200908121743.23108-1-keitasuzuki.park@sslab.ics.... Signed-off-by: Sasha Levin sashal@kernel.org --- .../net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c index 7ef36234a25dc..66797dc5e90d5 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_lcn.c @@ -5065,8 +5065,10 @@ bool wlc_phy_attach_lcnphy(struct brcms_phy *pi) pi->pi_fptr.radioloftget = wlc_lcnphy_get_radio_loft; pi->pi_fptr.detach = wlc_phy_detach_lcnphy;
- if (!wlc_phy_txpwr_srom_read_lcnphy(pi)) + if (!wlc_phy_txpwr_srom_read_lcnphy(pi)) { + kfree(pi->u.pi_lcnphy); return false; + }
if (LCNREV_IS(pi->pubpi.phy_rev, 1)) { if (pi_lcn->lcnphy_tempsense_option == 3) {
From: Chris Chiu chiu@endlessm.com
[ Upstream commit 86279456a4d47782398d3cb8193f78f672e36cac ]
Free the skb if usb_submit_urb fails on rx_urb. And in rtl8xxxu_submit_int_urb, free the urb whether usb_submit_urb succeeds or not.
Signed-off-by: Chris Chiu chiu@endlessm.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/20200906040424.22022-1-chiu@endlessm.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c index 19efae462a242..5cd7ef3625c5e 100644 --- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c +++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c @@ -5795,7 +5795,6 @@ static int rtl8xxxu_submit_int_urb(struct ieee80211_hw *hw) ret = usb_submit_urb(urb, GFP_KERNEL); if (ret) { usb_unanchor_urb(urb); - usb_free_urb(urb); goto error; }
@@ -5804,6 +5803,7 @@ static int rtl8xxxu_submit_int_urb(struct ieee80211_hw *hw) rtl8xxxu_write32(priv, REG_USB_HIMR, val32);
error: + usb_free_urb(urb); return ret; }
@@ -6318,6 +6318,7 @@ static int rtl8xxxu_start(struct ieee80211_hw *hw) struct rtl8xxxu_priv *priv = hw->priv; struct rtl8xxxu_rx_urb *rx_urb; struct rtl8xxxu_tx_urb *tx_urb; + struct sk_buff *skb; unsigned long flags; int ret, i;
@@ -6368,6 +6369,13 @@ static int rtl8xxxu_start(struct ieee80211_hw *hw) rx_urb->hw = hw;
ret = rtl8xxxu_submit_rx_urb(priv, rx_urb); + if (ret) { + if (ret != -ENOMEM) { + skb = (struct sk_buff *)rx_urb->urb.context; + dev_kfree_skb(skb); + } + rtl8xxxu_queue_rx_urb(priv, rx_urb); + } }
schedule_delayed_work(&priv->ra_watchdog, 2 * HZ);
From: Doug Horn doughorn@google.com
[ Upstream commit e219688fc5c3d0d9136f8d29d7e0498388f01440 ]
If a response to virtio_gpu_cmd_get_capset_info takes longer than five seconds to return, the callback will access freed kernel memory in vg->capsets.
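The fix below follows the usual pattern for this: free and NULL the array while holding the same lock the late callback takes, and have the callback re-check the pointer before dereferencing it. A simplified userspace analogue using a pthread mutex (hypothetical names, not the virtio-gpu code):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct dev {
            pthread_mutex_t lock;
            int *capsets;          /* stand-in for the capset array */
    };

    /* timeout path: drop the array, but only while holding the lock */
    static void timeout_free_capsets(struct dev *d)
    {
            pthread_mutex_lock(&d->lock);
            free(d->capsets);
            d->capsets = NULL;
            pthread_mutex_unlock(&d->lock);
    }

    /* late response callback: re-check the pointer under the same lock */
    static void capset_info_cb(struct dev *d, int idx, int id)
    {
            pthread_mutex_lock(&d->lock);
            if (d->capsets)
                    d->capsets[idx] = id;
            else
                    fprintf(stderr, "capset response after free, ignored\n");
            pthread_mutex_unlock(&d->lock);
    }

    int main(void)
    {
            struct dev d = { PTHREAD_MUTEX_INITIALIZER, calloc(4, sizeof(int)) };

            timeout_free_capsets(&d);
            capset_info_cb(&d, 0, 42);  /* would be a use-after-free without the check */
            return 0;
    }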
Signed-off-by: Doug Horn doughorn@google.com Link: http://patchwork.freedesktop.org/patch/msgid/20200902210847.2689-2-gurchetan... Signed-off-by: Gerd Hoffmann kraxel@redhat.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/virtio/virtgpu_kms.c | 2 ++ drivers/gpu/drm/virtio/virtgpu_vq.c | 10 +++++++--- 2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c index 4d944a0dff3e9..fdd7671a7b126 100644 --- a/drivers/gpu/drm/virtio/virtgpu_kms.c +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c @@ -80,8 +80,10 @@ static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev, vgdev->capsets[i].id > 0, 5 * HZ); if (ret == 0) { DRM_ERROR("timed out waiting for cap set %d\n", i); + spin_lock(&vgdev->display_info_lock); kfree(vgdev->capsets); vgdev->capsets = NULL; + spin_unlock(&vgdev->display_info_lock); return; } DRM_INFO("cap set %d: id %d, max-version %d, max-size %d\n", diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index 53af60d484a44..9d2abdbd865a7 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -684,9 +684,13 @@ static void virtio_gpu_cmd_get_capset_info_cb(struct virtio_gpu_device *vgdev, int i = le32_to_cpu(cmd->capset_index);
spin_lock(&vgdev->display_info_lock); - vgdev->capsets[i].id = le32_to_cpu(resp->capset_id); - vgdev->capsets[i].max_version = le32_to_cpu(resp->capset_max_version); - vgdev->capsets[i].max_size = le32_to_cpu(resp->capset_max_size); + if (vgdev->capsets) { + vgdev->capsets[i].id = le32_to_cpu(resp->capset_id); + vgdev->capsets[i].max_version = le32_to_cpu(resp->capset_max_version); + vgdev->capsets[i].max_size = le32_to_cpu(resp->capset_max_size); + } else { + DRM_ERROR("invalid capset memory."); + } spin_unlock(&vgdev->display_info_lock); wake_up(&vgdev->resp_wq); }
From: Hans de Goede hdegoede@redhat.com
[ Upstream commit 5bf2f2f331ad812c9b7eea6e14a3ea328acbffc0 ]
The Acer One S1003 2-in-1 keyboard dock uses a Synaptics S910xx touchpad which is connected to an ITE 8910 USB keyboard controller chip.
This keyboard has the same quirk for its rfkill / airplane mode hotkey as other keyboards with ITE keyboard chips: it only sends a single release event when the key is pressed and released, and it never sends a press event.
This commit adds this keyboard's USB id to the hid-ite id-table, fixing the rfkill key not working on this keyboard. Note that, as for the Acer Aspire Switch 10 (SW5-012), the id-table entry matches on the HID_GROUP_GENERIC group, so that hid-ite only binds to the keyboard interface and the mouse/touchpad interface is left untouched for hid-multitouch to bind to.
Signed-off-by: Hans de Goede hdegoede@redhat.com Signed-off-by: Jiri Kosina jkosina@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/hid/hid-ids.h | 1 + drivers/hid/hid-ite.c | 4 ++++ 2 files changed, 5 insertions(+)
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index 6a6e2c1b60900..79495e218b7fc 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -1124,6 +1124,7 @@ #define USB_DEVICE_ID_SYNAPTICS_DELL_K12A 0x2819 #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012 0x2968 #define USB_DEVICE_ID_SYNAPTICS_TP_V103 0x5710 +#define USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1003 0x73f5 #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5 0x81a7
#define USB_VENDOR_ID_TEXAS_INSTRUMENTS 0x2047 diff --git a/drivers/hid/hid-ite.c b/drivers/hid/hid-ite.c index 6c55682c59740..044a93f3c1178 100644 --- a/drivers/hid/hid-ite.c +++ b/drivers/hid/hid-ite.c @@ -44,6 +44,10 @@ static const struct hid_device_id ite_devices[] = { { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012) }, + /* ITE8910 USB kbd ctlr, with Synaptics touchpad connected to it. */ + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, + USB_VENDOR_ID_SYNAPTICS, + USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1003) }, { } }; MODULE_DEVICE_TABLE(hid, ite_devices);
From: Saurav Kashyap skashyap@marvell.com
[ Upstream commit 10aff62fab263ad7661780816551420cea956ebb ]
If SUCCESS is not returned, error handling will escalate. Return SUCCESS, as the other conditions in this function do.
Link: https://lore.kernel.org/r/20200907121443.5150-6-jhasan@marvell.com Signed-off-by: Saurav Kashyap skashyap@marvell.com Signed-off-by: Javed Hasan jhasan@marvell.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/qedf/qedf_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c index 5ca424df355c1..bc30e3e039dd2 100644 --- a/drivers/scsi/qedf/qedf_main.c +++ b/drivers/scsi/qedf/qedf_main.c @@ -726,7 +726,7 @@ static int qedf_eh_abort(struct scsi_cmnd *sc_cmd) rdata = fcport->rdata; if (!rdata || !kref_get_unless_zero(&rdata->kref)) { QEDF_ERR(&qedf->dbg_ctx, "stale rport, sc_cmd=%p\n", sc_cmd); - rc = 1; + rc = SUCCESS; goto out; }
From: Nilesh Javali njavali@marvell.com
[ Upstream commit 4118879be3755b38171063dfd4a57611d4b20a83 ]
For short-duration cable pulls, the in-flight I/O to the firmware is never cleaned up, resulting in stale I/O completions causing list_del corruption and a soft lockup of the system.
On link down event, mark all the connections for recovery, causing cleanup of all the in-flight I/O immediately.
Link: https://lore.kernel.org/r/20200908095657.26821-7-mrangankar@marvell.com Signed-off-by: Nilesh Javali njavali@marvell.com Signed-off-by: Manish Rangankar mrangankar@marvell.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/qedi/qedi_main.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c index 6f038ae5efcaf..dfe24b505b402 100644 --- a/drivers/scsi/qedi/qedi_main.c +++ b/drivers/scsi/qedi/qedi_main.c @@ -1127,6 +1127,15 @@ static void qedi_schedule_recovery_handler(void *dev) schedule_delayed_work(&qedi->recovery_work, 0); }
+static void qedi_set_conn_recovery(struct iscsi_cls_session *cls_session) +{ + struct iscsi_session *session = cls_session->dd_data; + struct iscsi_conn *conn = session->leadconn; + struct qedi_conn *qedi_conn = conn->dd_data; + + qedi_start_conn_recovery(qedi_conn->qedi, qedi_conn); +} + static void qedi_link_update(void *dev, struct qed_link_output *link) { struct qedi_ctx *qedi = (struct qedi_ctx *)dev; @@ -1138,6 +1147,7 @@ static void qedi_link_update(void *dev, struct qed_link_output *link) QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO, "Link Down event.\n"); atomic_set(&qedi->link_state, QEDI_LINK_DOWN); + iscsi_host_for_each_session(qedi->shost, qedi_set_conn_recovery); } }
From: Nilesh Javali njavali@marvell.com
[ Upstream commit c0650e28448d606c84f76c34333dba30f61de993 ]
Protect the active command list for non-I/O commands like login response, logout response, text response, and recovery cleanup of the active list, to avoid list corruption.
Link: https://lore.kernel.org/r/20200908095657.26821-5-mrangankar@marvell.com Signed-off-by: Nilesh Javali njavali@marvell.com Signed-off-by: Manish Rangankar mrangankar@marvell.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/qedi/qedi_fw.c | 8 ++++++++ drivers/scsi/qedi/qedi_iscsi.c | 2 ++ 2 files changed, 10 insertions(+)
diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c index 6ed74583b1b9b..b9f9f764808f9 100644 --- a/drivers/scsi/qedi/qedi_fw.c +++ b/drivers/scsi/qedi/qedi_fw.c @@ -59,6 +59,7 @@ static void qedi_process_logout_resp(struct qedi_ctx *qedi, "Freeing tid=0x%x for cid=0x%x\n", cmd->task_id, qedi_conn->iscsi_conn_id);
+ spin_lock(&qedi_conn->list_lock); if (likely(cmd->io_cmd_in_list)) { cmd->io_cmd_in_list = false; list_del_init(&cmd->io_cmd); @@ -69,6 +70,7 @@ static void qedi_process_logout_resp(struct qedi_ctx *qedi, cmd->task_id, qedi_conn->iscsi_conn_id, &cmd->io_cmd); } + spin_unlock(&qedi_conn->list_lock);
cmd->state = RESPONSE_RECEIVED; qedi_clear_task_idx(qedi, cmd->task_id); @@ -122,6 +124,7 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi, "Freeing tid=0x%x for cid=0x%x\n", cmd->task_id, qedi_conn->iscsi_conn_id);
+ spin_lock(&qedi_conn->list_lock); if (likely(cmd->io_cmd_in_list)) { cmd->io_cmd_in_list = false; list_del_init(&cmd->io_cmd); @@ -132,6 +135,7 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi, cmd->task_id, qedi_conn->iscsi_conn_id, &cmd->io_cmd); } + spin_unlock(&qedi_conn->list_lock);
cmd->state = RESPONSE_RECEIVED; qedi_clear_task_idx(qedi, cmd->task_id); @@ -222,11 +226,13 @@ static void qedi_process_tmf_resp(struct qedi_ctx *qedi,
tmf_hdr = (struct iscsi_tm *)qedi_cmd->task->hdr;
+ spin_lock(&qedi_conn->list_lock); if (likely(qedi_cmd->io_cmd_in_list)) { qedi_cmd->io_cmd_in_list = false; list_del_init(&qedi_cmd->io_cmd); qedi_conn->active_cmd_count--; } + spin_unlock(&qedi_conn->list_lock);
if (((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) == ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) || @@ -288,11 +294,13 @@ static void qedi_process_login_resp(struct qedi_ctx *qedi, ISCSI_LOGIN_RESPONSE_HDR_DATA_SEG_LEN_MASK; qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf + pld_len;
+ spin_lock(&qedi_conn->list_lock); if (likely(cmd->io_cmd_in_list)) { cmd->io_cmd_in_list = false; list_del_init(&cmd->io_cmd); qedi_conn->active_cmd_count--; } + spin_unlock(&qedi_conn->list_lock);
memset(task_ctx, '\0', sizeof(*task_ctx));
diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c index c14ac7882afac..10b9a986a41dc 100644 --- a/drivers/scsi/qedi/qedi_iscsi.c +++ b/drivers/scsi/qedi/qedi_iscsi.c @@ -975,11 +975,13 @@ static void qedi_cleanup_active_cmd_list(struct qedi_conn *qedi_conn) { struct qedi_cmd *cmd, *cmd_tmp;
+ spin_lock(&qedi_conn->list_lock); list_for_each_entry_safe(cmd, cmd_tmp, &qedi_conn->active_cmd_list, io_cmd) { list_del_init(&cmd->io_cmd); qedi_conn->active_cmd_count--; } + spin_unlock(&qedi_conn->list_lock); }
static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
From: Nilesh Javali njavali@marvell.com
[ Upstream commit 28b35d17f9f8573d4646dd8df08917a4076a6b63 ]
While aborting the I/O, the firmware cleanup task timed out and the driver deleted the I/O from the active command list. Some time later the firmware sent the cleanup task response and the driver again deleted the I/O from the active command list, causing the firmware to send a completion for a non-existent I/O and list_del corruption of the active command list.
Add a fix to check whether the I/O is present before deleting it from the active command list, to ensure the firmware sends a valid I/O completion and to protect against list_del corruption.
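The guard is essentially a small state machine: the command carries an io_cmd_in_list flag, and every path that removes it from the active list takes the list lock, checks the flag, and only then unlinks the command and drops the count, so a second (late) removal becomes a no-op. A simplified userspace analogue with hypothetical names (the real code uses list_del_init() on an intrusive list):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct conn {
            pthread_mutex_t list_lock;
            int active_cmd_count;
    };

    struct cmd {
            bool io_cmd_in_list;  /* true while the command sits on the active list */
    };

    /* Both the timeout path and the cleanup-response path can call this;
     * only the first caller actually removes the command and drops the
     * count, so a late duplicate completion cannot corrupt the list.
     */
    static void remove_from_active_list(struct conn *c, struct cmd *cmd)
    {
            pthread_mutex_lock(&c->list_lock);
            if (cmd->io_cmd_in_list) {
                    cmd->io_cmd_in_list = false;
                    /* list_del_init(&cmd->io_cmd) would go here */
                    c->active_cmd_count--;
            }
            pthread_mutex_unlock(&c->list_lock);
    }

    int main(void)
    {
            struct conn c = { PTHREAD_MUTEX_INITIALIZER, 1 };
            struct cmd cmd = { .io_cmd_in_list = true };

            remove_from_active_list(&c, &cmd);  /* timeout path */
            remove_from_active_list(&c, &cmd);  /* late firmware response: no-op */
            printf("active_cmd_count = %d\n", c.active_cmd_count);  /* prints 0 */
            return 0;
    }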
Link: https://lore.kernel.org/r/20200908095657.26821-4-mrangankar@marvell.com Signed-off-by: Nilesh Javali njavali@marvell.com Signed-off-by: Manish Rangankar mrangankar@marvell.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/qedi/qedi_fw.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c index b9f9f764808f9..f158fde0a43c1 100644 --- a/drivers/scsi/qedi/qedi_fw.c +++ b/drivers/scsi/qedi/qedi_fw.c @@ -824,8 +824,11 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi, qedi_clear_task_idx(qedi_conn->qedi, rtid);
spin_lock(&qedi_conn->list_lock); - list_del_init(&dbg_cmd->io_cmd); - qedi_conn->active_cmd_count--; + if (likely(dbg_cmd->io_cmd_in_list)) { + dbg_cmd->io_cmd_in_list = false; + list_del_init(&dbg_cmd->io_cmd); + qedi_conn->active_cmd_count--; + } spin_unlock(&qedi_conn->list_lock); qedi_cmd->state = CLEANUP_RECV; wake_up_interruptible(&qedi_conn->wait_queue); @@ -1243,6 +1246,7 @@ int qedi_cleanup_all_io(struct qedi_ctx *qedi, struct qedi_conn *qedi_conn, qedi_conn->cmd_cleanup_req++; qedi_iscsi_cleanup_task(ctask, true);
+ cmd->io_cmd_in_list = false; list_del_init(&cmd->io_cmd); qedi_conn->active_cmd_count--; QEDI_WARN(&qedi->dbg_ctx, @@ -1454,8 +1458,11 @@ static void qedi_tmf_work(struct work_struct *work) spin_unlock_bh(&qedi_conn->tmf_work_lock);
spin_lock(&qedi_conn->list_lock); - list_del_init(&cmd->io_cmd); - qedi_conn->active_cmd_count--; + if (likely(cmd->io_cmd_in_list)) { + cmd->io_cmd_in_list = false; + list_del_init(&cmd->io_cmd); + qedi_conn->active_cmd_count--; + } spin_unlock(&qedi_conn->list_lock);
clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);
From: George Kennedy george.kennedy@oracle.com
[ Upstream commit a49145acfb975d921464b84fe00279f99827d816 ]
An fb_ioctl() FBIOPUT_VSCREENINFO call with an invalid xres or yres setting in struct fb_var_screeninfo will result in a KASAN vmalloc-out-of-bounds failure in bitfill_aligned() as the margins are being cleared. The margins are cleared in chunks, and if the xres or yres setting is anywhere from zero up to the chunk size, the failure will occur.
Add a margin check to validate xres and yres settings.
Signed-off-by: George Kennedy george.kennedy@oracle.com Reported-by: syzbot+e5fd3e65515b48c02a30@syzkaller.appspotmail.com Reviewed-by: Dan Carpenter dan.carpenter@oracle.com Cc: Dhaval Giani dhaval.giani@oracle.com Signed-off-by: Bartlomiej Zolnierkiewicz b.zolnierkie@samsung.com Link: https://patchwork.freedesktop.org/patch/msgid/1594149963-13801-1-git-send-em... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/video/fbdev/core/fbmem.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c index 6815bfb7f5724..e33bf1c386926 100644 --- a/drivers/video/fbdev/core/fbmem.c +++ b/drivers/video/fbdev/core/fbmem.c @@ -1006,6 +1006,10 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var) return 0; }
+ /* bitfill_aligned() assumes that it's at least 8x8 */ + if (var->xres < 8 || var->yres < 8) + return -EINVAL; + ret = info->fbops->fb_check_var(var, info);
if (ret)
From: Tong Zhang ztong0001@gmail.com
[ Upstream commit db332356222d9429731ab9395c89cca403828460 ]
ipwireless_send_packet() can only return 0 on success or -ENOMEM on error; the callers should check for a non-zero return value to detect the error condition.
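For illustration, checking ret == -1 misses other negative error codes such as -ENOMEM, while ret < 0 catches them all. A tiny standalone sketch with a stand-in for the send function (hypothetical, not the driver code):

    #include <errno.h>
    #include <stdio.h>

    /* stand-in for ipwireless_send_packet(): returns 0 or -ENOMEM */
    static int send_packet(int fail)
    {
            return fail ? -ENOMEM : 0;
    }

    int main(void)
    {
            int ret = send_packet(1);

            if (ret == -1)
                    printf("never taken: -ENOMEM is %d, not -1\n", ret);

            if (ret < 0)
                    printf("error handled: ret = %d\n", ret);  /* -12 on Linux */

            return 0;
    }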
Signed-off-by: Tong Zhang ztong0001@gmail.com Acked-by: David Sterba dsterba@suse.com Link: https://lore.kernel.org/r/20200821161942.36589-1-ztong0001@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/tty/ipwireless/network.c | 4 ++-- drivers/tty/ipwireless/tty.c | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/tty/ipwireless/network.c b/drivers/tty/ipwireless/network.c index cf20616340a1a..fe569f6294a24 100644 --- a/drivers/tty/ipwireless/network.c +++ b/drivers/tty/ipwireless/network.c @@ -117,7 +117,7 @@ static int ipwireless_ppp_start_xmit(struct ppp_channel *ppp_channel, skb->len, notify_packet_sent, network); - if (ret == -1) { + if (ret < 0) { skb_pull(skb, 2); return 0; } @@ -134,7 +134,7 @@ static int ipwireless_ppp_start_xmit(struct ppp_channel *ppp_channel, notify_packet_sent, network); kfree(buf); - if (ret == -1) + if (ret < 0) return 0; } kfree_skb(skb); diff --git a/drivers/tty/ipwireless/tty.c b/drivers/tty/ipwireless/tty.c index fad3401e604d9..23584769fc292 100644 --- a/drivers/tty/ipwireless/tty.c +++ b/drivers/tty/ipwireless/tty.c @@ -218,7 +218,7 @@ static int ipw_write(struct tty_struct *linux_tty, ret = ipwireless_send_packet(tty->hardware, IPW_CHANNEL_RAS, buf, count, ipw_write_packet_sent_callback, tty); - if (ret == -1) { + if (ret < 0) { mutex_unlock(&tty->ipw_tty_mutex); return 0; }
From: xinhui pan xinhui.pan@amd.com
[ Upstream commit 1545fbf97eafc1dbdc2923e58b4186b16a834784 ]
Remove the private obj from the internal list before we free aconnector.
[ 56.925828] BUG: unable to handle page fault for address: ffff8f84a870a560 [ 56.933272] #PF: supervisor read access in kernel mode [ 56.938801] #PF: error_code(0x0000) - not-present page [ 56.944376] PGD 18e605067 P4D 18e605067 PUD 86a614067 PMD 86a4d0067 PTE 800ffff8578f5060 [ 56.953260] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC NOPTI [ 56.958815] CPU: 6 PID: 1407 Comm: bash Tainted: G O 5.9.0-rc2+ #46 [ 56.967092] Hardware name: System manufacturer System Product Name/PRIME Z390-A, BIOS 1401 11/26/2019 [ 56.977162] RIP: 0010:__list_del_entry_valid+0x31/0xa0 [ 56.982768] Code: 00 ad de 55 48 8b 17 4c 8b 47 08 48 89 e5 48 39 c2 74 27 48 b8 22 01 00 00 00 00 ad de 49 39 c0 74 2d 49 8b 30 48 39 fe 75 3d <48> 8b 52 08 48 39 f2 75 4c b8 01 00 00 00 5d c3 48 89 7 [ 57.003327] RSP: 0018:ffffb40c81687c90 EFLAGS: 00010246 [ 57.009048] RAX: dead000000000122 RBX: ffff8f84ea41f4f0 RCX: 0000000000000006 [ 57.016871] RDX: ffff8f84a870a558 RSI: ffff8f84ea41f4f0 RDI: ffff8f84ea41f4f0 [ 57.024672] RBP: ffffb40c81687c90 R08: ffff8f84ea400998 R09: 0000000000000001 [ 57.032490] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000006 [ 57.040287] R13: ffff8f84ea422a90 R14: ffff8f84b4129a20 R15: fffffffffffffff2 [ 57.048105] FS: 00007f550d885740(0000) GS:ffff8f8509600000(0000) knlGS:0000000000000000 [ 57.056979] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 57.063260] CR2: ffff8f84a870a560 CR3: 00000007e5144001 CR4: 00000000003706e0 [ 57.071053] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 57.078849] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 57.086684] Call Trace: [ 57.089381] drm_atomic_private_obj_fini+0x29/0x82 [drm] [ 57.095247] amdgpu_dm_fini+0x83/0x170 [amdgpu] [ 57.100264] dm_hw_fini+0x23/0x30 [amdgpu] [ 57.104814] amdgpu_device_fini+0x1df/0x4fe [amdgpu] [ 57.110271] amdgpu_driver_unload_kms+0x43/0x70 [amdgpu] [ 57.116136] amdgpu_pci_remove+0x3b/0x60 [amdgpu] [ 57.121291] pci_device_remove+0x3e/0xb0 [ 57.125583] device_release_driver_internal+0xff/0x1d0 [ 57.131223] device_release_driver+0x12/0x20 [ 57.135903] pci_stop_bus_device+0x70/0xa0 [ 57.140401] pci_stop_and_remove_bus_device_locked+0x1b/0x30 [ 57.146571] remove_store+0x7b/0x90 [ 57.150429] dev_attr_store+0x17/0x30 [ 57.154441] sysfs_kf_write+0x4b/0x60 [ 57.158479] kernfs_fop_write+0xe8/0x1d0 [ 57.162788] vfs_write+0xf5/0x230 [ 57.166426] ksys_write+0x70/0xf0 [ 57.170087] __x64_sys_write+0x1a/0x20 [ 57.174219] do_syscall_64+0x38/0x90 [ 57.178145] entry_SYSCALL_64_after_hwframe+0x44/0xa9
Signed-off-by: xinhui pan xinhui.pan@amd.com Acked-by: Feifei Xu <Feifei Xu@amd.com> Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index a717a4904268e..57ad6450e20b2 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -4882,6 +4882,7 @@ static void amdgpu_dm_connector_destroy(struct drm_connector *connector) struct amdgpu_device *adev = connector->dev->dev_private; struct amdgpu_display_manager *dm = &adev->dm;
+ drm_atomic_private_obj_fini(&aconnector->mst_mgr.base); #if defined(CONFIG_BACKLIGHT_CLASS_DEVICE) ||\ defined(CONFIG_BACKLIGHT_CLASS_DEVICE_MODULE)
On Sun, Oct 18, 2020 at 3:19 PM Sasha Levin sashal@kernel.org wrote:
This ended up getting reverted. Please drop this one.
Alex
On Mon, Oct 19, 2020 at 08:40:36AM -0400, Alex Deucher wrote:
On Sun, Oct 18, 2020 at 3:19 PM Sasha Levin sashal@kernel.org wrote:
This ended up getting reverted. Please drop this one.
Dropped, thanks!
From: Dinghao Liu dinghao.liu@zju.edu.cn
[ Upstream commit d33fe77bdf75806d785dabf90d21d962122e5296 ]
When the kmalloc() of buf fails, the urb should be freed, just as it is when the kmalloc() of dr fails.
Signed-off-by: Dinghao Liu dinghao.liu@zju.edu.cn Signed-off-by: Marcel Holtmann marcel@holtmann.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/bluetooth/btusb.c | 1 + 1 file changed, 1 insertion(+)
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index 8d2608ddfd087..f88968bcdd6a8 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -2896,6 +2896,7 @@ static int btusb_mtk_submit_wmt_recv_urb(struct hci_dev *hdev) buf = kmalloc(size, GFP_KERNEL); if (!buf) { kfree(dr); + usb_free_urb(urb); return -ENOMEM; }
From: Peilin Ye yepeilin.cs@gmail.com
[ Upstream commit c5a8a8498eed1c164afc94f50a939c1a10abf8ad ]
do_ip_vs_set_ctl() references an uninitialized stack value when `len` is zero. Fix it.
Reported-by: syzbot+23b5f9e7caf61d9a3898@syzkaller.appspotmail.com Link: https://syzkaller.appspot.com/bug?id=46ebfb92a8a812621a001ef04d90dfa459520fe... Suggested-by: Julian Anastasov ja@ssi.bg Signed-off-by: Peilin Ye yepeilin.cs@gmail.com Acked-by: Julian Anastasov ja@ssi.bg Reviewed-by: Simon Horman horms@verge.net.au Signed-off-by: Pablo Neira Ayuso pablo@netfilter.org Signed-off-by: Sasha Levin sashal@kernel.org --- net/netfilter/ipvs/ip_vs_ctl.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c index 678c5b14841c1..8dbfd84322a88 100644 --- a/net/netfilter/ipvs/ip_vs_ctl.c +++ b/net/netfilter/ipvs/ip_vs_ctl.c @@ -2508,6 +2508,10 @@ do_ip_vs_set_ctl(struct sock *sk, int cmd, sockptr_t ptr, unsigned int len) /* Set timeout values for (tcp tcpfin udp) */ ret = ip_vs_set_timeout(ipvs, (struct ip_vs_timeout_user *)arg); goto out_unlock; + } else if (!len) { + /* No more commands with len == 0 below */ + ret = -EINVAL; + goto out_unlock; }
usvc_compat = (struct ip_vs_service_user *)arg; @@ -2584,9 +2588,6 @@ do_ip_vs_set_ctl(struct sock *sk, int cmd, sockptr_t ptr, unsigned int len) break; case IP_VS_SO_SET_DELDEST: ret = ip_vs_del_dest(svc, &udest); - break; - default: - ret = -EINVAL; }
out_unlock:
From: Jan Kara jack@suse.cz
[ Upstream commit e9d4709fcc26353df12070566970f080e651f0c9 ]
When a usrjquota or grpjquota mount option is used multiple times, we will leak memory allocated for the file name. Make sure the last setting is used and all the previous ones are properly freed.
Reported-by: syzbot+c9e294bbe0333a6b7640@syzkaller.appspotmail.com Signed-off-by: Jan Kara jack@suse.cz Signed-off-by: Sasha Levin sashal@kernel.org --- fs/reiserfs/super.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c index a6bce5b1fb1dc..1b9c7a387dc71 100644 --- a/fs/reiserfs/super.c +++ b/fs/reiserfs/super.c @@ -1258,6 +1258,10 @@ static int reiserfs_parse_options(struct super_block *s, "turned on."); return 0; } + if (qf_names[qtype] != + REISERFS_SB(s)->s_qf_names[qtype]) + kfree(qf_names[qtype]); + qf_names[qtype] = NULL; if (*arg) { /* Some filename specified? */ if (REISERFS_SB(s)->s_qf_names[qtype] && strcmp(REISERFS_SB(s)->s_qf_names[qtype], @@ -1287,10 +1291,6 @@ static int reiserfs_parse_options(struct super_block *s, else *mount_options |= 1 << REISERFS_GRPQUOTA; } else { - if (qf_names[qtype] != - REISERFS_SB(s)->s_qf_names[qtype]) - kfree(qf_names[qtype]); - qf_names[qtype] = NULL; if (qtype == USRQUOTA) *mount_options &= ~(1 << REISERFS_USRQUOTA); else
From: Julian Wiedmann jwi@linux.ibm.com
[ Upstream commit 9d6a569a4cbab5a8b4c959d4e312daeecb7c9f09 ]
The current code for bridge address events has two shortcomings in its control sequence:
1. after disabling address events via PNSO, we don't flush the remaining events from the event_wq. So if the feature is re-enabled fast enough, stale events could leak over.
2. PNSO and the events' arrival via the READ ccw device are unordered. So even if we flushed the workqueue, it's difficult to say whether the READ device might produce more events onto the workqueue afterwards.
Fix this by:
1. explicitly fencing off the events when we no longer care, in the READ device's event handler. This ensures that once we flush the workqueue, it doesn't get additional address events.
2. flushing the workqueue after disabling the events & fencing them off. As the code that triggers the flush will typically hold the sbp_lock, we need to rework the worker code to avoid a deadlock here in case of a 'notifications-stopped' event. In case of lock contention, requeue such an event with a delay. We'll eventually acquire the lock, or spot that the feature has been disabled and the event can thus be discarded.
This leaves the theoretical race that a stale event could arrive _after_ we re-enabled ourselves to receive events again. Such an event would be impossible to distinguish from a 'good' event, so there is nothing we can do about it.
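As an illustration only, the requeue-on-contention handling described above boils down to roughly the following shape; the names (my_event, my_card, cfg_lock, MY_PNSO_NONE) are hypothetical stand-ins, the real code is qeth_addr_change_event_worker() in the diff below:

#include <linux/workqueue.h>
#include <linux/mutex.h>
#include <linux/jiffies.h>
#include <linux/slab.h>

enum my_pnso_mode { MY_PNSO_NONE, MY_PNSO_BRIDGEPORT };

struct my_card {
    enum my_pnso_mode pnso_mode;       /* fenced off when events are disabled */
    struct mutex cfg_lock;             /* stands in for sbp_lock */
    struct workqueue_struct *event_wq;
};

struct my_event {
    struct delayed_work dwork;
    struct my_card *card;
    /* ... event payload ... */
};

static void my_event_worker(struct work_struct *work)
{
    struct delayed_work *dwork = to_delayed_work(work);
    struct my_event *ev = container_of(dwork, struct my_event, dwork);
    struct my_card *card = ev->card;

    /* Feature already disabled: the queued event is stale, drop it. */
    if (READ_ONCE(card->pnso_mode) == MY_PNSO_NONE)
        goto free;

    /* A re-configuration may hold the lock; retry a bit later. */
    if (!mutex_trylock(&card->cfg_lock)) {
        queue_delayed_work(card->event_wq, dwork, msecs_to_jiffies(100));
        return;
    }

    /* ... process the event under cfg_lock ... */
    mutex_unlock(&card->cfg_lock);
free:
    kfree(ev);
}

The handler therefore always either finishes under the lock, discards a stale event, or reschedules itself, so it can never block the thread that holds sbp_lock while it drains the workqueue.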
Signed-off-by: Julian Wiedmann jwi@linux.ibm.com Reviewed-by: Alexandra Winter wintera@linux.ibm.com Signed-off-by: David S. Miller davem@davemloft.net Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/s390/net/qeth_core.h | 6 ++++ drivers/s390/net/qeth_l2_main.c | 53 ++++++++++++++++++++++++++++----- drivers/s390/net/qeth_l2_sys.c | 1 + 3 files changed, 52 insertions(+), 8 deletions(-)
diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h index ecfd6d152e862..6b5cf9ba03e5b 100644 --- a/drivers/s390/net/qeth_core.h +++ b/drivers/s390/net/qeth_core.h @@ -680,6 +680,11 @@ struct qeth_card_blkt { int inter_packet_jumbo; };
+enum qeth_pnso_mode { + QETH_PNSO_NONE, + QETH_PNSO_BRIDGEPORT, +}; + #define QETH_BROADCAST_WITH_ECHO 0x01 #define QETH_BROADCAST_WITHOUT_ECHO 0x02 struct qeth_card_info { @@ -696,6 +701,7 @@ struct qeth_card_info { /* no bitfield, we take a pointer on these two: */ u8 has_lp2lp_cso_v6; u8 has_lp2lp_cso_v4; + enum qeth_pnso_mode pnso_mode; enum qeth_card_types type; enum qeth_link_types link_type; int broadcast_capable; diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c index 6384f7adba660..9866d01b40fe7 100644 --- a/drivers/s390/net/qeth_l2_main.c +++ b/drivers/s390/net/qeth_l2_main.c @@ -273,6 +273,17 @@ static int qeth_l2_vlan_rx_kill_vid(struct net_device *dev, return qeth_l2_send_setdelvlan(card, vid, IPA_CMD_DELVLAN); }
+static void qeth_l2_set_pnso_mode(struct qeth_card *card, + enum qeth_pnso_mode mode) +{ + spin_lock_irq(get_ccwdev_lock(CARD_RDEV(card))); + WRITE_ONCE(card->info.pnso_mode, mode); + spin_unlock_irq(get_ccwdev_lock(CARD_RDEV(card))); + + if (mode == QETH_PNSO_NONE) + drain_workqueue(card->event_wq); +} + static void qeth_l2_stop_card(struct qeth_card *card) { QETH_CARD_TEXT(card, 2, "stopcard"); @@ -290,7 +301,7 @@ static void qeth_l2_stop_card(struct qeth_card *card) qeth_qdio_clear_card(card, 0); qeth_drain_output_queues(card); qeth_clear_working_pool_list(card); - flush_workqueue(card->event_wq); + qeth_l2_set_pnso_mode(card, QETH_PNSO_NONE); qeth_flush_local_addrs(card); card->info.promisc_mode = 0; } @@ -1163,19 +1174,34 @@ static void qeth_bridge_state_change(struct qeth_card *card, }
struct qeth_addr_change_data { - struct work_struct worker; + struct delayed_work dwork; struct qeth_card *card; struct qeth_ipacmd_addr_change ac_event; };
static void qeth_addr_change_event_worker(struct work_struct *work) { - struct qeth_addr_change_data *data = - container_of(work, struct qeth_addr_change_data, worker); + struct delayed_work *dwork = to_delayed_work(work); + struct qeth_addr_change_data *data; + struct qeth_card *card; int i;
+ data = container_of(dwork, struct qeth_addr_change_data, dwork); + card = data->card; + QETH_CARD_TEXT(data->card, 4, "adrchgew"); + + if (READ_ONCE(card->info.pnso_mode) == QETH_PNSO_NONE) + goto free; + if (data->ac_event.lost_event_mask) { + /* Potential re-config in progress, try again later: */ + if (!mutex_trylock(&card->sbp_lock)) { + queue_delayed_work(card->event_wq, dwork, + msecs_to_jiffies(100)); + return; + } + dev_info(&data->card->gdev->dev, "Address change notification stopped on %s (%s)\n", data->card->dev->name, @@ -1184,8 +1210,9 @@ static void qeth_addr_change_event_worker(struct work_struct *work) : (data->ac_event.lost_event_mask == 0x02) ? "Bridge port state change" : "Unknown reason"); - mutex_lock(&data->card->sbp_lock); + data->card->options.sbp.hostnotification = 0; + card->info.pnso_mode = QETH_PNSO_NONE; mutex_unlock(&data->card->sbp_lock); qeth_bridge_emit_host_event(data->card, anev_abort, 0, NULL, NULL); @@ -1199,6 +1226,8 @@ static void qeth_addr_change_event_worker(struct work_struct *work) &entry->token, &entry->addr_lnid); } + +free: kfree(data); }
@@ -1210,6 +1239,9 @@ static void qeth_addr_change_event(struct qeth_card *card, struct qeth_addr_change_data *data; int extrasize;
+ if (card->info.pnso_mode == QETH_PNSO_NONE) + return; + QETH_CARD_TEXT(card, 4, "adrchgev"); if (cmd->hdr.return_code != 0x0000) { if (cmd->hdr.return_code == 0x0010) { @@ -1229,11 +1261,11 @@ static void qeth_addr_change_event(struct qeth_card *card, QETH_CARD_TEXT(card, 2, "ACNalloc"); return; } - INIT_WORK(&data->worker, qeth_addr_change_event_worker); + INIT_DELAYED_WORK(&data->dwork, qeth_addr_change_event_worker); data->card = card; memcpy(&data->ac_event, hostevs, sizeof(struct qeth_ipacmd_addr_change) + extrasize); - queue_work(card->event_wq, &data->worker); + queue_delayed_work(card->event_wq, &data->dwork, 0); }
/* SETBRIDGEPORT support; sending commands */ @@ -1554,9 +1586,14 @@ int qeth_bridgeport_an_set(struct qeth_card *card, int enable)
if (enable) { qeth_bridge_emit_host_event(card, anev_reset, 0, NULL, NULL); + qeth_l2_set_pnso_mode(card, QETH_PNSO_BRIDGEPORT); rc = qeth_l2_pnso(card, 1, qeth_bridgeport_an_set_cb, card); - } else + if (rc) + qeth_l2_set_pnso_mode(card, QETH_PNSO_NONE); + } else { rc = qeth_l2_pnso(card, 0, NULL, NULL); + qeth_l2_set_pnso_mode(card, QETH_PNSO_NONE); + } return rc; }
diff --git a/drivers/s390/net/qeth_l2_sys.c b/drivers/s390/net/qeth_l2_sys.c index 86bcae992f725..4695d25e54f24 100644 --- a/drivers/s390/net/qeth_l2_sys.c +++ b/drivers/s390/net/qeth_l2_sys.c @@ -157,6 +157,7 @@ static ssize_t qeth_bridgeport_hostnotification_store(struct device *dev, rc = -EBUSY; else if (qeth_card_hw_is_reachable(card)) { rc = qeth_bridgeport_an_set(card, enable); + /* sbp_lock ensures ordering vs notifications-stopped events */ if (!rc) card->options.sbp.hostnotification = enable; } else
From: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp
[ Upstream commit 621a3a8b1c0ecf16e1e5667ea5756a76a082b738 ]
syzbot is reporting that del_timer_sync() is called from mwifiex_usb_cleanup_tx_aggr() from mwifiex_unregister_dev() without checking whether timer_setup() from mwifiex_usb_tx_init() was called [1].
Ganapathi Bhat proposed a possibly cleaner fix, but it seems that fix was forgotten [2].
"grep -FrB1 'del_timer' drivers/ | grep -FA1 '.function)'" says that currently there are 28 locations which call del_timer[_sync]() only if that timer's function field was initialized (because timer_setup() sets that timer's function field). Therefore, let's use the same approach here.
[1] https://syzkaller.appspot.com/bug?id=26525f643f454dd7be0078423e3cdb0d5774495... [2] https://lkml.kernel.org/r/CA+ASDXMHt2gq9Hy+iP_BYkWXsSreWdp3_bAfMkNcuqJ3K+-jb...
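For illustration, the guard used at those call sites has roughly this shape (my_ctx and hold_timer are hypothetical names, not the mwifiex structures):

#include <linux/timer.h>

struct my_ctx {
    struct timer_list hold_timer;   /* initialized lazily via timer_setup() */
};

static void my_ctx_cleanup(struct my_ctx *ctx)
{
    /*
     * timer_setup() fills in hold_timer.function, so a non-NULL function
     * field is the sign that the timer was actually initialized and may
     * safely be deleted.
     */
    if (ctx->hold_timer.function)
        del_timer_sync(&ctx->hold_timer);
}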
Reported-by: syzbot syzbot+dc4127f950da51639216@syzkaller.appspotmail.com Cc: Ganapathi Bhat ganapathi.bhat@nxp.com Cc: Brian Norris briannorris@chromium.org Signed-off-by: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp Reviewed-by: Brian Norris briannorris@chromium.org Acked-by: Ganapathi Bhat ganapathi.bhat@nxp.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/20200821082720.7716-1-penguin-kernel@I-love.SAKURA... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/marvell/mwifiex/usb.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/wireless/marvell/mwifiex/usb.c b/drivers/net/wireless/marvell/mwifiex/usb.c index 6f3cfde4654cc..426e39d4ccf0f 100644 --- a/drivers/net/wireless/marvell/mwifiex/usb.c +++ b/drivers/net/wireless/marvell/mwifiex/usb.c @@ -1353,7 +1353,8 @@ static void mwifiex_usb_cleanup_tx_aggr(struct mwifiex_adapter *adapter) skb_dequeue(&port->tx_aggr.aggr_list))) mwifiex_write_data_complete(adapter, skb_tmp, 0, -1); - del_timer_sync(&port->tx_aggr.timer_cnxt.hold_timer); + if (port->tx_aggr.timer_cnxt.hold_timer.function) + del_timer_sync(&port->tx_aggr.timer_cnxt.hold_timer); port->tx_aggr.timer_cnxt.is_hold_timer_set = false; port->tx_aggr.timer_cnxt.hold_tmo_msecs = 0; }
From: Connor McAdams conmanx360@gmail.com
[ Upstream commit ed93f9750c6c2ed371347d0aac3dcd31cb9cf256 ]
Add AE-7 quirk data for setting up the microphone. The AE-7 has no front-panel connector, so only the rear mic/line-in get new commands.
Signed-off-by: Connor McAdams conmanx360@gmail.com Link: https://lore.kernel.org/r/20200825201040.30339-19-conmanx360@gmail.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/pci/hda/patch_ca0132.c | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c index b7dbf2e7f77af..c68669911de0a 100644 --- a/sound/pci/hda/patch_ca0132.c +++ b/sound/pci/hda/patch_ca0132.c @@ -4675,6 +4675,15 @@ static int ca0132_alt_select_in(struct hda_codec *codec) ca0113_mmio_command_set(codec, 0x30, 0x28, 0x00); tmp = FLOAT_THREE; break; + case QUIRK_AE7: + ca0113_mmio_command_set(codec, 0x30, 0x28, 0x00); + tmp = FLOAT_THREE; + chipio_set_conn_rate(codec, MEM_CONNID_MICIN2, + SR_96_000); + chipio_set_conn_rate(codec, MEM_CONNID_MICOUT2, + SR_96_000); + dspio_set_uint_param(codec, 0x80, 0x01, FLOAT_ZERO); + break; default: tmp = FLOAT_ONE; break; @@ -4720,6 +4729,14 @@ static int ca0132_alt_select_in(struct hda_codec *codec) case QUIRK_AE5: ca0113_mmio_command_set(codec, 0x30, 0x28, 0x00); break; + case QUIRK_AE7: + ca0113_mmio_command_set(codec, 0x30, 0x28, 0x3f); + chipio_set_conn_rate(codec, MEM_CONNID_MICIN2, + SR_96_000); + chipio_set_conn_rate(codec, MEM_CONNID_MICOUT2, + SR_96_000); + dspio_set_uint_param(codec, 0x80, 0x01, FLOAT_ZERO); + break; default: break; } @@ -4729,7 +4746,10 @@ static int ca0132_alt_select_in(struct hda_codec *codec) if (ca0132_quirk(spec) == QUIRK_R3DI) chipio_set_conn_rate(codec, 0x0F, SR_96_000);
- tmp = FLOAT_ZERO; + if (ca0132_quirk(spec) == QUIRK_AE7) + tmp = FLOAT_THREE; + else + tmp = FLOAT_ZERO; dspio_set_uint_param(codec, 0x80, 0x00, tmp);
switch (ca0132_quirk(spec)) {
From: Connor McAdams conmanx360@gmail.com
[ Upstream commit 620f08eea6d6961b789af3fa3ea86725c8c93ece ]
Add a new PCI subsystem ID for the SoundBlaster AE-7 card.
Signed-off-by: Connor McAdams conmanx360@gmail.com Link: https://lore.kernel.org/r/20200825201040.30339-11-conmanx360@gmail.com Signed-off-by: Takashi Iwai tiwai@suse.de Signed-off-by: Sasha Levin sashal@kernel.org --- sound/pci/hda/patch_ca0132.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c index c68669911de0a..a3eecdf9185e8 100644 --- a/sound/pci/hda/patch_ca0132.c +++ b/sound/pci/hda/patch_ca0132.c @@ -1065,6 +1065,7 @@ enum { QUIRK_R3DI, QUIRK_R3D, QUIRK_AE5, + QUIRK_AE7, };
#ifdef CONFIG_PCI @@ -1184,6 +1185,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = { SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D), SND_PCI_QUIRK(0x1102, 0x0018, "Recon3D", QUIRK_R3D), SND_PCI_QUIRK(0x1102, 0x0051, "Sound Blaster AE-5", QUIRK_AE5), + SND_PCI_QUIRK(0x1102, 0x0081, "Sound Blaster AE-7", QUIRK_AE7), {} };
From: Sathyanarayana Nujella sathyanarayana.nujella@intel.com
[ Upstream commit 5253a73d567dcd75e62834ff5f502ea9470e5722 ]
Add topology filename override based on system DMI data matching, typically to account for a different hardware layout.
In ACPI based systems, the tplg_filename is pre-defined in an ACPI machine table. When a DMI quirk is detected, the sof_pdata->tplg_filename is not set with the hard-coded ACPI value, and instead is set with the DMI-specific filename.
Reviewed-by: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com Signed-off-by: Sathyanarayana Nujella sathyanarayana.nujella@intel.com Signed-off-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Link: https://lore.kernel.org/r/20200821195603.215535-14-pierre-louis.bossart@linu... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/sof/intel/hda.c | 8 +++++++- sound/soc/sof/sof-pci-dev.c | 24 ++++++++++++++++++++++++ 2 files changed, 31 insertions(+), 1 deletion(-)
diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c index 63ca920c8e6e0..7152e6d1cf673 100644 --- a/sound/soc/sof/intel/hda.c +++ b/sound/soc/sof/intel/hda.c @@ -1179,7 +1179,13 @@ void hda_machine_select(struct snd_sof_dev *sdev)
mach = snd_soc_acpi_find_machine(desc->machines); if (mach) { - sof_pdata->tplg_filename = mach->sof_tplg_filename; + /* + * If tplg file name is overridden, use it instead of + * the one set in mach table + */ + if (!sof_pdata->tplg_filename) + sof_pdata->tplg_filename = mach->sof_tplg_filename; + sof_pdata->machine = mach;
if (mach->link_mask) { diff --git a/sound/soc/sof/sof-pci-dev.c b/sound/soc/sof/sof-pci-dev.c index aa3532ba14349..f3a8140773db5 100644 --- a/sound/soc/sof/sof-pci-dev.c +++ b/sound/soc/sof/sof-pci-dev.c @@ -35,8 +35,28 @@ static int sof_pci_debug; module_param_named(sof_pci_debug, sof_pci_debug, int, 0444); MODULE_PARM_DESC(sof_pci_debug, "SOF PCI debug options (0x0 all off)");
+static const char *sof_override_tplg_name; + #define SOF_PCI_DISABLE_PM_RUNTIME BIT(0)
+static int sof_tplg_cb(const struct dmi_system_id *id) +{ + sof_override_tplg_name = id->driver_data; + return 1; +} + +static const struct dmi_system_id sof_tplg_table[] = { + { + .callback = sof_tplg_cb, + .matches = { + DMI_MATCH(DMI_PRODUCT_FAMILY, "Google_Volteer"), + DMI_MATCH(DMI_PRODUCT_NAME, "Terrador"), + }, + .driver_data = "sof-tgl-rt5682-ssp0-max98373-ssp2.tplg", + }, + {} +}; + static const struct dmi_system_id community_key_platforms[] = { { .ident = "Up Squared", @@ -347,6 +367,10 @@ static int sof_pci_probe(struct pci_dev *pci, sof_pdata->tplg_filename_prefix = sof_pdata->desc->default_tplg_path;
+ dmi_check_system(sof_tplg_table); + if (sof_override_tplg_name) + sof_pdata->tplg_filename = sof_override_tplg_name; + #if IS_ENABLED(CONFIG_SND_SOC_SOF_PROBE_WORK_QUEUE) /* set callback to enable runtime_pm */ sof_pdata->sof_probe_complete = sof_pci_probe_complete;
From: Sathyanarayana Nujella sathyanarayana.nujella@intel.com
[ Upstream commit 3e1734b64ce786c54dc98adcfe67941e6011d735 ]
A Chrome system based on tgl_max98373_rt5682 has different SSP interface configurations. Use the DMI data of this variant DUT to override the quirk data.
Reviewed-by: Guennadi Liakhovetski guennadi.liakhovetski@linux.intel.com Signed-off-by: Sathyanarayana Nujella sathyanarayana.nujella@intel.com Signed-off-by: Mac Chiang mac.chiang@intel.com Signed-off-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Link: https://lore.kernel.org/r/20200821195603.215535-13-pierre-louis.bossart@linu... Signed-off-by: Mark Brown broonie@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- sound/soc/intel/boards/sof_rt5682.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c index 0129d23694ed5..9a6f10ede427e 100644 --- a/sound/soc/intel/boards/sof_rt5682.c +++ b/sound/soc/intel/boards/sof_rt5682.c @@ -119,6 +119,19 @@ static const struct dmi_system_id sof_rt5682_quirk_table[] = { .driver_data = (void *)(SOF_RT5682_MCLK_EN | SOF_RT5682_SSP_CODEC(0)), }, + { + .callback = sof_rt5682_quirk_cb, + .matches = { + DMI_MATCH(DMI_PRODUCT_FAMILY, "Google_Volteer"), + DMI_MATCH(DMI_PRODUCT_NAME, "Terrador"), + }, + .driver_data = (void *)(SOF_RT5682_MCLK_EN | + SOF_RT5682_SSP_CODEC(0) | + SOF_SPEAKER_AMP_PRESENT | + SOF_MAX98373_SPEAKER_AMP_PRESENT | + SOF_RT5682_SSP_AMP(2) | + SOF_RT5682_NUM_HDMIDEV(4)), + }, {} };
From: Kevin Barnett kevin.barnett@microsemi.com
[ Upstream commit 9e68cccc8ef7206f0bccd590378d0dca8f9b4f57 ]
Eliminate kernel panics when getting invalid responses from the controller; take the controller offline instead of panicking.
Link: https://lore.kernel.org/r/159622929306.30579.16523318707596752828.stgit@brun... Reviewed-by: Scott Teel scott.teel@microsemi.com Reviewed-by: Scott Benesh scott.benesh@microsemi.com Reviewed-by: Prasad Munirathnam Prasad.Munirathnam@microsemi.com Reviewed-by: Martin Wilck mwilck@suse.com Signed-off-by: Kevin Barnett kevin.barnett@microsemi.com Signed-off-by: Don Brace don.brace@microsemi.com Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/smartpqi/smartpqi.h | 2 +- drivers/scsi/smartpqi/smartpqi_init.c | 101 +++++++++++++++++--------- 2 files changed, 68 insertions(+), 35 deletions(-)
diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h index 1129fe7a27edd..ee069a8b442a7 100644 --- a/drivers/scsi/smartpqi/smartpqi.h +++ b/drivers/scsi/smartpqi/smartpqi.h @@ -359,7 +359,7 @@ struct pqi_event_response { struct pqi_iu_header header; u8 event_type; u8 reserved2 : 7; - u8 request_acknowlege : 1; + u8 request_acknowledge : 1; __le16 event_id; __le32 additional_event_id; union { diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c index ca1e6cf6a38ef..714a3d38fc431 100644 --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c @@ -542,8 +542,7 @@ static int pqi_build_raid_path_request(struct pqi_ctrl_info *ctrl_info, put_unaligned_be16(cdb_length, &cdb[7]); break; default: - dev_err(&ctrl_info->pci_dev->dev, "unknown command 0x%c\n", - cmd); + dev_err(&ctrl_info->pci_dev->dev, "unknown command 0x%c\n", cmd); break; }
@@ -2462,7 +2461,6 @@ static int pqi_raid_bypass_submit_scsi_cmd(struct pqi_ctrl_info *ctrl_info, offload_to_mirror = (offload_to_mirror >= layout_map_count - 1) ? 0 : offload_to_mirror + 1; - WARN_ON(offload_to_mirror >= layout_map_count); device->offload_to_mirror = offload_to_mirror; /* * Avoid direct use of device->offload_to_mirror within this @@ -2915,10 +2913,14 @@ static int pqi_interpret_task_management_response( return rc; }
-static unsigned int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info, - struct pqi_queue_group *queue_group) +static inline void pqi_invalid_response(struct pqi_ctrl_info *ctrl_info) +{ + pqi_take_ctrl_offline(ctrl_info); +} + +static int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info, struct pqi_queue_group *queue_group) { - unsigned int num_responses; + int num_responses; pqi_index_t oq_pi; pqi_index_t oq_ci; struct pqi_io_request *io_request; @@ -2930,6 +2932,13 @@ static unsigned int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info,
while (1) { oq_pi = readl(queue_group->oq_pi); + if (oq_pi >= ctrl_info->num_elements_per_oq) { + pqi_invalid_response(ctrl_info); + dev_err(&ctrl_info->pci_dev->dev, + "I/O interrupt: producer index (%u) out of range (0-%u): consumer index: %u\n", + oq_pi, ctrl_info->num_elements_per_oq - 1, oq_ci); + return -1; + } if (oq_pi == oq_ci) break;
@@ -2938,10 +2947,22 @@ static unsigned int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info, (oq_ci * PQI_OPERATIONAL_OQ_ELEMENT_LENGTH);
request_id = get_unaligned_le16(&response->request_id); - WARN_ON(request_id >= ctrl_info->max_io_slots); + if (request_id >= ctrl_info->max_io_slots) { + pqi_invalid_response(ctrl_info); + dev_err(&ctrl_info->pci_dev->dev, + "request ID in response (%u) out of range (0-%u): producer index: %u consumer index: %u\n", + request_id, ctrl_info->max_io_slots - 1, oq_pi, oq_ci); + return -1; + }
io_request = &ctrl_info->io_request_pool[request_id]; - WARN_ON(atomic_read(&io_request->refcount) == 0); + if (atomic_read(&io_request->refcount) == 0) { + pqi_invalid_response(ctrl_info); + dev_err(&ctrl_info->pci_dev->dev, + "request ID in response (%u) does not match an outstanding I/O request: producer index: %u consumer index: %u\n", + request_id, oq_pi, oq_ci); + return -1; + }
switch (response->header.iu_type) { case PQI_RESPONSE_IU_RAID_PATH_IO_SUCCESS: @@ -2971,24 +2992,22 @@ static unsigned int pqi_process_io_intr(struct pqi_ctrl_info *ctrl_info, io_request->error_info = ctrl_info->error_buffer + (get_unaligned_le16(&response->error_index) * PQI_ERROR_BUFFER_ELEMENT_LENGTH); - pqi_process_io_error(response->header.iu_type, - io_request); + pqi_process_io_error(response->header.iu_type, io_request); break; default: + pqi_invalid_response(ctrl_info); dev_err(&ctrl_info->pci_dev->dev, - "unexpected IU type: 0x%x\n", - response->header.iu_type); - break; + "unexpected IU type: 0x%x: producer index: %u consumer index: %u\n", + response->header.iu_type, oq_pi, oq_ci); + return -1; }
- io_request->io_complete_callback(io_request, - io_request->context); + io_request->io_complete_callback(io_request, io_request->context);
/* * Note that the I/O request structure CANNOT BE TOUCHED after * returning from the I/O completion callback! */ - oq_ci = (oq_ci + 1) % ctrl_info->num_elements_per_oq; }
@@ -3300,9 +3319,9 @@ static void pqi_ofa_capture_event_payload(struct pqi_event *event, } }
-static unsigned int pqi_process_event_intr(struct pqi_ctrl_info *ctrl_info) +static int pqi_process_event_intr(struct pqi_ctrl_info *ctrl_info) { - unsigned int num_events; + int num_events; pqi_index_t oq_pi; pqi_index_t oq_ci; struct pqi_event_queue *event_queue; @@ -3316,26 +3335,31 @@ static unsigned int pqi_process_event_intr(struct pqi_ctrl_info *ctrl_info)
while (1) { oq_pi = readl(event_queue->oq_pi); + if (oq_pi >= PQI_NUM_EVENT_QUEUE_ELEMENTS) { + pqi_invalid_response(ctrl_info); + dev_err(&ctrl_info->pci_dev->dev, + "event interrupt: producer index (%u) out of range (0-%u): consumer index: %u\n", + oq_pi, PQI_NUM_EVENT_QUEUE_ELEMENTS - 1, oq_ci); + return -1; + } + if (oq_pi == oq_ci) break;
num_events++; - response = event_queue->oq_element_array + - (oq_ci * PQI_EVENT_OQ_ELEMENT_LENGTH); + response = event_queue->oq_element_array + (oq_ci * PQI_EVENT_OQ_ELEMENT_LENGTH);
event_index = pqi_event_type_to_event_index(response->event_type);
- if (event_index >= 0) { - if (response->request_acknowlege) { - event = &ctrl_info->events[event_index]; - event->pending = true; - event->event_type = response->event_type; - event->event_id = response->event_id; - event->additional_event_id = - response->additional_event_id; + if (event_index >= 0 && response->request_acknowledge) { + event = &ctrl_info->events[event_index]; + event->pending = true; + event->event_type = response->event_type; + event->event_id = response->event_id; + event->additional_event_id = response->additional_event_id; + if (event->event_type == PQI_EVENT_TYPE_OFA) pqi_ofa_capture_event_payload(event, response); - } }
oq_ci = (oq_ci + 1) % PQI_NUM_EVENT_QUEUE_ELEMENTS; @@ -3450,7 +3474,8 @@ static irqreturn_t pqi_irq_handler(int irq, void *data) { struct pqi_ctrl_info *ctrl_info; struct pqi_queue_group *queue_group; - unsigned int num_responses_handled; + int num_io_responses_handled; + int num_events_handled;
queue_group = data; ctrl_info = queue_group->ctrl_info; @@ -3458,17 +3483,25 @@ static irqreturn_t pqi_irq_handler(int irq, void *data) if (!pqi_is_valid_irq(ctrl_info)) return IRQ_NONE;
- num_responses_handled = pqi_process_io_intr(ctrl_info, queue_group); + num_io_responses_handled = pqi_process_io_intr(ctrl_info, queue_group); + if (num_io_responses_handled < 0) + goto out;
- if (irq == ctrl_info->event_irq) - num_responses_handled += pqi_process_event_intr(ctrl_info); + if (irq == ctrl_info->event_irq) { + num_events_handled = pqi_process_event_intr(ctrl_info); + if (num_events_handled < 0) + goto out; + } else { + num_events_handled = 0; + }
- if (num_responses_handled) + if (num_io_responses_handled + num_events_handled > 0) atomic_inc(&ctrl_info->num_interrupts);
pqi_start_io(ctrl_info, queue_group, RAID_PATH, NULL); pqi_start_io(ctrl_info, queue_group, AIO_PATH, NULL);
+out: return IRQ_HANDLED; }
From: Wang Yufen wangyufen@huawei.com
[ Upstream commit 6c151410d5b57e6bb0d91a735ac511459539a7bf ]
When brcmf_proto_msgbuf_attach() fails and msgbuf->txflow_wq != NULL, we should destroy the workqueue.
Reported-by: Hulk Robot hulkci@huawei.com Signed-off-by: Wang Yufen wangyufen@huawei.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/1595237765-66238-1-git-send-email-wangyufen@huawei... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c index f1a20db8daab9..bfddb851e386e 100644 --- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c +++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c @@ -1620,6 +1620,8 @@ int brcmf_proto_msgbuf_attach(struct brcmf_pub *drvr) BRCMF_TX_IOCTL_MAX_MSG_SIZE, msgbuf->ioctbuf, msgbuf->ioctbuf_handle); + if (msgbuf->txflow_wq) + destroy_workqueue(msgbuf->txflow_wq); kfree(msgbuf); } return -ENOMEM;
From: Eli Billauer eli.billauer@gmail.com
[ Upstream commit fbc299437c06648afcc7891e6e2e6638dd48d4df ]
usb_kill_anchored_urbs() is commonly used to cancel all URBs on an anchor just before releasing resources which the URBs rely on. By doing so, users of this function rely on the fact that no completer callbacks will take place from any URB on the anchor after it returns.
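For context, a rough sketch of that usage pattern (the driver and its private structure are hypothetical):

#include <linux/usb.h>
#include <linux/slab.h>

struct my_dev {
    struct usb_anchor submitted;   /* URBs currently in flight */
    u8 *bounce_buf;                /* touched by the completion handler */
};

static void my_disconnect(struct usb_interface *intf)
{
    struct my_dev *dev = usb_get_intfdata(intf);

    /*
     * After this returns, no completion callback may run any more;
     * otherwise freeing bounce_buf below would be a use-after-free in
     * interrupt context -- exactly the race described below.
     */
    usb_kill_anchored_urbs(&dev->submitted);

    kfree(dev->bounce_buf);
    kfree(dev);
}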
However if this function is called in parallel with __usb_hcd_giveback_urb processing a URB on the anchor, the latter may call the completer callback after usb_kill_anchored_urbs() returns. This can lead to a kernel panic due to use after release of memory in interrupt context.
The race condition is that __usb_hcd_giveback_urb() first unanchors the URB and then makes the completer callback. Such URB is hence invisible to usb_kill_anchored_urbs(), allowing it to return before the completer has been called, since the anchor's urb_list is empty.
Even worse, if the racing completer callback resubmits the URB, it may remain in the system long after usb_kill_anchored_urbs() returns.
Hence list_empty(&anchor->urb_list), which is used in the existing while-loop, doesn't reliably ensure that all URBs of the anchor are gone.
A similar problem exists with usb_poison_anchored_urbs() and usb_scuttle_anchored_urbs().
This patch adds an outer do-while loop, which ensures that all URBs are indeed handled before these three functions return. This change has no effect at all unless the race condition occurs, in which case the loop will busy-wait until the racing completer callback has finished. This is a rare condition, so the CPU waste of this spinning is negligible.
The additional do-while loop relies on usb_anchor_check_wakeup(), which returns true iff the anchor list is empty, and there is no __usb_hcd_giveback_urb() in the system that is in the middle of the unanchor-before-complete phase. The @suspend_wakeups member of struct usb_anchor is used for this purpose, which was introduced to solve another problem which the same race condition causes, in commit 6ec4147e7bdb ("usb-anchor: Delay usb_wait_anchor_empty_timeout wake up till completion is done").
The surely_empty variable is necessary, because usb_anchor_check_wakeup() must be called with the lock held to prevent races. However the spinlock must be released and reacquired if the outer loop spins with an empty URB list while waiting for the unanchor-before-complete passage to finish: The completer callback may very well attempt to take the very same lock.
To summarize, using usb_anchor_check_wakeup() means that the patched functions can return only when the anchor's list is empty, and there is no invisible URB being processed. Since the inner while loop finishes on the empty list condition, the new do-while loop will terminate as well, except for when the said race condition occurs.
Signed-off-by: Eli Billauer eli.billauer@gmail.com Acked-by: Oliver Neukum oneukum@suse.com Acked-by: Alan Stern stern@rowland.harvard.edu Link: https://lore.kernel.org/r/20200731054650.30644-1-eli.billauer@gmail.com Signed-off-by: Greg Kroah-Hartman gregkh@linuxfoundation.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/usb/core/urb.c | 89 +++++++++++++++++++++++++----------------- 1 file changed, 54 insertions(+), 35 deletions(-)
diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c index 7bc23469f4e4e..27e83e55a5901 100644 --- a/drivers/usb/core/urb.c +++ b/drivers/usb/core/urb.c @@ -772,11 +772,12 @@ void usb_block_urb(struct urb *urb) EXPORT_SYMBOL_GPL(usb_block_urb);
/** - * usb_kill_anchored_urbs - cancel transfer requests en masse + * usb_kill_anchored_urbs - kill all URBs associated with an anchor * @anchor: anchor the requests are bound to * - * this allows all outstanding URBs to be killed starting - * from the back of the queue + * This kills all outstanding URBs starting from the back of the queue, + * with guarantee that no completer callbacks will take place from the + * anchor after this function returns. * * This routine should not be called by a driver after its disconnect * method has returned. @@ -784,20 +785,26 @@ EXPORT_SYMBOL_GPL(usb_block_urb); void usb_kill_anchored_urbs(struct usb_anchor *anchor) { struct urb *victim; + int surely_empty;
- spin_lock_irq(&anchor->lock); - while (!list_empty(&anchor->urb_list)) { - victim = list_entry(anchor->urb_list.prev, struct urb, - anchor_list); - /* we must make sure the URB isn't freed before we kill it*/ - usb_get_urb(victim); - spin_unlock_irq(&anchor->lock); - /* this will unanchor the URB */ - usb_kill_urb(victim); - usb_put_urb(victim); + do { spin_lock_irq(&anchor->lock); - } - spin_unlock_irq(&anchor->lock); + while (!list_empty(&anchor->urb_list)) { + victim = list_entry(anchor->urb_list.prev, + struct urb, anchor_list); + /* make sure the URB isn't freed before we kill it */ + usb_get_urb(victim); + spin_unlock_irq(&anchor->lock); + /* this will unanchor the URB */ + usb_kill_urb(victim); + usb_put_urb(victim); + spin_lock_irq(&anchor->lock); + } + surely_empty = usb_anchor_check_wakeup(anchor); + + spin_unlock_irq(&anchor->lock); + cpu_relax(); + } while (!surely_empty); } EXPORT_SYMBOL_GPL(usb_kill_anchored_urbs);
@@ -816,21 +823,27 @@ EXPORT_SYMBOL_GPL(usb_kill_anchored_urbs); void usb_poison_anchored_urbs(struct usb_anchor *anchor) { struct urb *victim; + int surely_empty;
- spin_lock_irq(&anchor->lock); - anchor->poisoned = 1; - while (!list_empty(&anchor->urb_list)) { - victim = list_entry(anchor->urb_list.prev, struct urb, - anchor_list); - /* we must make sure the URB isn't freed before we kill it*/ - usb_get_urb(victim); - spin_unlock_irq(&anchor->lock); - /* this will unanchor the URB */ - usb_poison_urb(victim); - usb_put_urb(victim); + do { spin_lock_irq(&anchor->lock); - } - spin_unlock_irq(&anchor->lock); + anchor->poisoned = 1; + while (!list_empty(&anchor->urb_list)) { + victim = list_entry(anchor->urb_list.prev, + struct urb, anchor_list); + /* make sure the URB isn't freed before we kill it */ + usb_get_urb(victim); + spin_unlock_irq(&anchor->lock); + /* this will unanchor the URB */ + usb_poison_urb(victim); + usb_put_urb(victim); + spin_lock_irq(&anchor->lock); + } + surely_empty = usb_anchor_check_wakeup(anchor); + + spin_unlock_irq(&anchor->lock); + cpu_relax(); + } while (!surely_empty); } EXPORT_SYMBOL_GPL(usb_poison_anchored_urbs);
@@ -970,14 +983,20 @@ void usb_scuttle_anchored_urbs(struct usb_anchor *anchor) { struct urb *victim; unsigned long flags; + int surely_empty; + + do { + spin_lock_irqsave(&anchor->lock, flags); + while (!list_empty(&anchor->urb_list)) { + victim = list_entry(anchor->urb_list.prev, + struct urb, anchor_list); + __usb_unanchor_urb(victim, anchor); + } + surely_empty = usb_anchor_check_wakeup(anchor);
- spin_lock_irqsave(&anchor->lock, flags); - while (!list_empty(&anchor->urb_list)) { - victim = list_entry(anchor->urb_list.prev, struct urb, - anchor_list); - __usb_unanchor_urb(victim, anchor); - } - spin_unlock_irqrestore(&anchor->lock, flags); + spin_unlock_irqrestore(&anchor->lock, flags); + cpu_relax(); + } while (!surely_empty); }
EXPORT_SYMBOL_GPL(usb_scuttle_anchored_urbs);
From: Bard Liao yung-chuan.liao@linux.intel.com
[ Upstream commit a5a0239c27fe1125826c5cad4dec9cd1fd960d4a ]
The .prepare() callback is invoked for normal streaming, underflows or during the system resume transition. In the latter case, the context for the ALH PDIs is lost, and the DSP is not initialized properly either, but the bus parameters don't need to be recomputed.
Conversely, when doing a regular .prepare() during an underflow, the ALH/SHIM registers shall not be changed as the hardware cannot be reprogrammed after the DMA started (hardware spec requirement).
This patch adds storage of PDI and hw_params in the DAI dma context, and the difference between the types of .prepare() usages is handled via a simple boolean, updated when suspending, and tested for in the .prepare() case.
Signed-off-by: Bard Liao yung-chuan.liao@linux.intel.com Signed-off-by: Pierre-Louis Bossart pierre-louis.bossart@linux.intel.com Link: https://lore.kernel.org/r/20200817152923.3259-6-yung-chuan.liao@linux.intel.... Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/soundwire/cadence_master.h | 4 ++ drivers/soundwire/intel.c | 71 +++++++++++++++++++++++++++++- 2 files changed, 74 insertions(+), 1 deletion(-)
diff --git a/drivers/soundwire/cadence_master.h b/drivers/soundwire/cadence_master.h index 15b0834030866..4d1aab5b5ec2d 100644 --- a/drivers/soundwire/cadence_master.h +++ b/drivers/soundwire/cadence_master.h @@ -84,6 +84,8 @@ struct sdw_cdns_stream_config { * @bus: Bus handle * @stream_type: Stream type * @link_id: Master link id + * @hw_params: hw_params to be applied in .prepare step + * @suspended: status set when suspended, to be used in .prepare */ struct sdw_cdns_dma_data { char *name; @@ -92,6 +94,8 @@ struct sdw_cdns_dma_data { struct sdw_bus *bus; enum sdw_stream_type stream_type; int link_id; + struct snd_pcm_hw_params *hw_params; + bool suspended; };
/** diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c index a283670659a92..11d90fd4cc0d2 100644 --- a/drivers/soundwire/intel.c +++ b/drivers/soundwire/intel.c @@ -856,6 +856,10 @@ static int intel_hw_params(struct snd_pcm_substream *substream, intel_pdi_alh_configure(sdw, pdi); sdw_cdns_config_stream(cdns, ch, dir, pdi);
+ /* store pdi and hw_params, may be needed in prepare step */ + dma->suspended = false; + dma->pdi = pdi; + dma->hw_params = params;
/* Inform DSP about PDI stream number */ ret = intel_params_stream(sdw, substream, dai, params, @@ -899,7 +903,11 @@ static int intel_hw_params(struct snd_pcm_substream *substream, static int intel_prepare(struct snd_pcm_substream *substream, struct snd_soc_dai *dai) { + struct sdw_cdns *cdns = snd_soc_dai_get_drvdata(dai); + struct sdw_intel *sdw = cdns_to_intel(cdns); struct sdw_cdns_dma_data *dma; + int ch, dir; + int ret;
dma = snd_soc_dai_get_dma_data(dai, substream); if (!dma) { @@ -908,7 +916,41 @@ static int intel_prepare(struct snd_pcm_substream *substream, return -EIO; }
- return sdw_prepare_stream(dma->stream); + if (dma->suspended) { + dma->suspended = false; + + /* + * .prepare() is called after system resume, where we + * need to reinitialize the SHIM/ALH/Cadence IP. + * .prepare() is also called to deal with underflows, + * but in those cases we cannot touch ALH/SHIM + * registers + */ + + /* configure stream */ + ch = params_channels(dma->hw_params); + if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) + dir = SDW_DATA_DIR_RX; + else + dir = SDW_DATA_DIR_TX; + + intel_pdi_shim_configure(sdw, dma->pdi); + intel_pdi_alh_configure(sdw, dma->pdi); + sdw_cdns_config_stream(cdns, ch, dir, dma->pdi); + + /* Inform DSP about PDI stream number */ + ret = intel_params_stream(sdw, substream, dai, + dma->hw_params, + sdw->instance, + dma->pdi->intel_alh_id); + if (ret) + goto err; + } + + ret = sdw_prepare_stream(dma->stream); + +err: + return ret; }
static int intel_trigger(struct snd_pcm_substream *substream, int cmd, @@ -979,6 +1021,9 @@ intel_hw_free(struct snd_pcm_substream *substream, struct snd_soc_dai *dai) return ret; }
+ dma->hw_params = NULL; + dma->pdi = NULL; + return 0; }
@@ -988,6 +1033,29 @@ static void intel_shutdown(struct snd_pcm_substream *substream,
}
+static int intel_component_dais_suspend(struct snd_soc_component *component) +{ + struct sdw_cdns_dma_data *dma; + struct snd_soc_dai *dai; + + for_each_component_dais(component, dai) { + /* + * we don't have a .suspend dai_ops, and we don't have access + * to the substream, so let's mark both capture and playback + * DMA contexts as suspended + */ + dma = dai->playback_dma_data; + if (dma) + dma->suspended = true; + + dma = dai->capture_dma_data; + if (dma) + dma->suspended = true; + } + + return 0; +} + static int intel_pcm_set_sdw_stream(struct snd_soc_dai *dai, void *stream, int direction) { @@ -1040,6 +1108,7 @@ static const struct snd_soc_dai_ops intel_pdm_dai_ops = {
static const struct snd_soc_component_driver dai_component = { .name = "soundwire", + .suspend = intel_component_dais_suspend };
static int intel_create_dai(struct sdw_cdns *cdns,
From: Can Guo cang@codeaurora.org
[ Upstream commit 89dd87acd40a44de8ff3358138aedf8f73f4efc6 ]
If ufs_qcom_dump_dbg_regs() calls ufs_qcom_testbus_config() from ufshcd_suspend/resume and/or clk gate/ungate context, pm_runtime_get_sync() and ufshcd_hold() will cause a race condition. Fix this by removing the unnecessary calls of pm_runtime_get_sync() and ufshcd_hold().
Link: https://lore.kernel.org/r/1596975355-39813-3-git-send-email-cang@codeaurora.... Reviewed-by: Hongwu Su hongwus@codeaurora.org Reviewed-by: Avri Altman avri.altman@wdc.com Reviewed-by: Bean Huo beanhuo@micron.com Reviewed-by: Asutosh Das asutoshd@codeaurora.org Signed-off-by: Can Guo cang@codeaurora.org Signed-off-by: Martin K. Petersen martin.petersen@oracle.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/scsi/ufs/ufs-qcom.c | 5 ----- 1 file changed, 5 deletions(-)
diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c index d0d75527830e9..823eccfdd00af 100644 --- a/drivers/scsi/ufs/ufs-qcom.c +++ b/drivers/scsi/ufs/ufs-qcom.c @@ -1614,9 +1614,6 @@ int ufs_qcom_testbus_config(struct ufs_qcom_host *host) */ } mask <<= offset; - - pm_runtime_get_sync(host->hba->dev); - ufshcd_hold(host->hba, false); ufshcd_rmwl(host->hba, TEST_BUS_SEL, (u32)host->testbus.select_major << 19, REG_UFS_CFG1); @@ -1629,8 +1626,6 @@ int ufs_qcom_testbus_config(struct ufs_qcom_host *host) * committed before returning. */ mb(); - ufshcd_release(host->hba); - pm_runtime_put_sync(host->hba->dev);
return 0; }
From: Qingqing Zhuo qingqing.zhuo@amd.com
[ Upstream commit ce271b40a91f781af3dee985c39e841ac5148766 ]
[why] The current pipe merge and split logic only supports cases where a new dc_state is allocated, and it relies on dc->current_state to gather information from the previous dc_state.
Calls to validate_bandwidth on UPDATE_TYPE_MED would cause an issue because there is no new dc_state allocated, and data in dc->current_state would be overwritten during pipe merge.
[how] Only allow validate_bandwidth when new dc_state space is created.
Signed-off-by: Qingqing Zhuo qingqing.zhuo@amd.com Reviewed-by: Nicholas Kazlauskas Nicholas.Kazlauskas@amd.com Acked-by: Rodrigo Siqueira Rodrigo.Siqueira@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/core/dc.c | 2 +- drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c | 3 +++ drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c | 3 +++ 3 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c index 92eb1ca1634fc..22cbfda6ab338 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c @@ -2621,7 +2621,7 @@ void dc_commit_updates_for_stream(struct dc *dc,
copy_stream_update_to_stream(dc, context, stream, stream_update);
- if (update_type > UPDATE_TYPE_FAST) { + if (update_type >= UPDATE_TYPE_FULL) { if (!dc->res_pool->funcs->validate_bandwidth(dc, context, false)) { DC_ERROR("Mode validation failed for stream update!\n"); dc_release_state(context); diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c index f31f48dd0da29..aaf9a99f9f045 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c @@ -3209,6 +3209,9 @@ static noinline bool dcn20_validate_bandwidth_fp(struct dc *dc, context->bw_ctx.dml.soc.allow_dram_clock_one_display_vactive = dc->debug.enable_dram_clock_change_one_display_vactive;
+ /*Unsafe due to current pipe merge and split logic*/ + ASSERT(context != dc->current_state); + if (fast_validate) { return dcn20_validate_bandwidth_internal(dc, context, true); } diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c index 88d41a385add8..a4f37d83d5cc9 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c @@ -1184,6 +1184,9 @@ bool dcn21_validate_bandwidth(struct dc *dc, struct dc_state *context,
BW_VAL_TRACE_COUNT();
+ /*Unsafe due to current pipe merge and split logic*/ + ASSERT(context != dc->current_state); + out = dcn20_fast_validate_bw(dc, context, pipes, &pipe_cnt, pipe_split_from, &vlevel);
if (pipe_cnt == 0)
From: Serge Semin Sergey.Semin@baikalelectronics.ru
[ Upstream commit e8ee6c8cb61b676f1a2d6b942329e98224bd8ee9 ]
The DW DMA IP-core provides a way to synthesize the DMA controller with channels having different parameters like maximum burst length, multi-block support, maximum data width, etc. Those parameters both explicitly and implicitly affect the channels' performance. Since DMA slave devices might be very demanding with respect to DMA performance, let's provide a way for slaves to be assigned DW DMA channels whose performance, according to the platform engineer, fulfills their requirements. After this patch is applied, that can be done by passing the mask of suitable DMA channels either directly in the dw_dma_slave structure instance or as a fifth cell of the DMA DT property. If the mask is zero or not provided, there is no limitation on channel allocation.
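As an illustration of the new field (the request lines, masters and mask value below are made up; only struct dw_dma_slave and its channels member come from this patch), a platform could restrict a demanding slave to a subset of channels like this:

#include <linux/bits.h>
#include <linux/platform_data/dma-dw.h>

/*
 * Hypothetical platform data: bit n of the mask corresponds to channel n,
 * so this peripheral may only be given channels 2..7. A zero mask keeps
 * the old "any channel" behaviour. In real code .dma_dev must also point
 * at the DW DMAC device so that the filter matches the right controller.
 */
static struct dw_dma_slave my_periph_dma = {
    .src_id   = 4,
    .dst_id   = 5,
    .m_master = 0,
    .p_master = 1,
    .channels = GENMASK(7, 2),
};

In a device tree, the same mask is passed as the optional extra cell parsed by dw_dma_of_xlate() in the diff below.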
For instance, the Baikal-T1 SoC is equipped with a DW DMAC engine whose first two channels are synthesized with a max burst length of 16, while the rest of the channels have been created with max-burst-len=4. It would seem that the first two channels must be faster than the others and should be preferable for time-critical DMA slave devices. In practice it turned out that the situation is quite the opposite. The channels with max-burst-len=4 demonstrated better performance than the channels with max-burst-len=16, even when both had been initialized with the same settings. The performance drop of the first two DMA channels made them unsuitable for the DW APB SSI slave device. No matter what settings they are configured with, full-duplex SPI transfers occasionally experience Rx FIFO overflow. It means that the DMA engine doesn't keep up with the incoming data pace even though the SPI bus runs at 25MHz while the DW DMA controller is clocked with a 50MHz signal. No such problem has been noticed for the channels synthesized with max-burst-len=4.
Signed-off-by: Serge Semin Sergey.Semin@baikalelectronics.ru Link: https://lore.kernel.org/r/20200731200826.9292-6-Sergey.Semin@baikalelectroni... Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/dma/dw/core.c | 4 ++++ drivers/dma/dw/of.c | 7 +++++-- include/linux/platform_data/dma-dw.h | 2 ++ 3 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c index 4700f2e87a627..d9333ee14527e 100644 --- a/drivers/dma/dw/core.c +++ b/drivers/dma/dw/core.c @@ -772,6 +772,10 @@ bool dw_dma_filter(struct dma_chan *chan, void *param) if (dws->dma_dev != chan->device->dev) return false;
+ /* permit channels in accordance with the channels mask */ + if (dws->channels && !(dws->channels & dwc->mask)) + return false; + /* We have to copy data since dws can be temporary storage */ memcpy(&dwc->dws, dws, sizeof(struct dw_dma_slave));
diff --git a/drivers/dma/dw/of.c b/drivers/dma/dw/of.c index 1474b3817ef4f..c1cf7675b9d10 100644 --- a/drivers/dma/dw/of.c +++ b/drivers/dma/dw/of.c @@ -22,18 +22,21 @@ static struct dma_chan *dw_dma_of_xlate(struct of_phandle_args *dma_spec, }; dma_cap_mask_t cap;
- if (dma_spec->args_count != 3) + if (dma_spec->args_count < 3 || dma_spec->args_count > 4) return NULL;
slave.src_id = dma_spec->args[0]; slave.dst_id = dma_spec->args[0]; slave.m_master = dma_spec->args[1]; slave.p_master = dma_spec->args[2]; + if (dma_spec->args_count >= 4) + slave.channels = dma_spec->args[3];
if (WARN_ON(slave.src_id >= DW_DMA_MAX_NR_REQUESTS || slave.dst_id >= DW_DMA_MAX_NR_REQUESTS || slave.m_master >= dw->pdata->nr_masters || - slave.p_master >= dw->pdata->nr_masters)) + slave.p_master >= dw->pdata->nr_masters || + slave.channels >= BIT(dw->pdata->nr_channels))) return NULL;
dma_cap_zero(cap); diff --git a/include/linux/platform_data/dma-dw.h b/include/linux/platform_data/dma-dw.h index fbbeb2f6189b8..b34a094b2258d 100644 --- a/include/linux/platform_data/dma-dw.h +++ b/include/linux/platform_data/dma-dw.h @@ -26,6 +26,7 @@ struct device; * @dst_id: dst request line * @m_master: memory master for transfers on allocated channel * @p_master: peripheral master for transfers on allocated channel + * @channels: mask of the channels permitted for allocation (zero value means any) * @hs_polarity:set active low polarity of handshake interface */ struct dw_dma_slave { @@ -34,6 +35,7 @@ struct dw_dma_slave { u8 dst_id; u8 m_master; u8 p_master; + u8 channels; bool hs_polarity; };
From: Serge Semin Sergey.Semin@baikalelectronics.ru
[ Upstream commit 6d9459d04081c796fc67c2bb771f4e4ebb5744c4 ]
The CFGx.FIFO_MODE field controls the DMA controller's "FIFO readiness" criterion. In other words, it determines when to start pushing data out of a DW DMAC channel FIFO to a destination peripheral, or from a source peripheral into the DW DMAC channel FIFO. Currently FIFO-mode is set to one for all DW DMAC channels. This means they are tuned to flush data out of the FIFO (to a memory peripheral or by accepting burst transaction requests) when the FIFO is at least half-full (except at the end of the block transfer, when FIFO-flush mode is activated), and are configured to fetch data into the FIFO when it's at least half-empty.
Such a configuration is a good choice when there is no slave device involved in the DMA transfers. In that case the number of bursts per block is smaller than with CFGx.FIFO_MODE = 0 and, hence, bus utilization improves. But the latency of DMA transfers may increase when CFGx.FIFO_MODE = 1, since the DW DMAC will wait for the channel FIFO contents to be either half-full or half-empty, depending on whether it is handling destination or source transfers. Such latencies might be dangerous when the DMA transfers are expected to be performed from/to a slave device. Since peripheral devices normally keep data in internal FIFOs, any latency at a critical moment may cause one of them to overflow and consequently lose data. This especially concerns the case where either the peripheral device is relatively fast or the DW DMAC engine is relatively slow with respect to the incoming data pace.
In order to solve the problems which might be caused by the latencies described above, let's enable the FIFO half-full/half-empty "FIFO readiness" criterion only for DMA transfers with no slave device involved. Thanks to commit 99ba8b9b0d97 ("dmaengine: dw: Initialize channel before each transfer") we can freely do that in the generic dw_dma_initialize_chan() method.
Signed-off-by: Serge Semin Sergey.Semin@baikalelectronics.ru Reviewed-by: Andy Shevchenko andriy.shevchenko@linux.intel.com Link: https://lore.kernel.org/r/20200731200826.9292-3-Sergey.Semin@baikalelectroni... Signed-off-by: Vinod Koul vkoul@kernel.org Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/dma/dw/dw.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dma/dw/dw.c b/drivers/dma/dw/dw.c index 7a085b3c1854c..d9810980920a1 100644 --- a/drivers/dma/dw/dw.c +++ b/drivers/dma/dw/dw.c @@ -14,7 +14,7 @@ static void dw_dma_initialize_chan(struct dw_dma_chan *dwc) { struct dw_dma *dw = to_dw_dma(dwc->chan.device); - u32 cfghi = DWC_CFGH_FIFO_MODE; + u32 cfghi = is_slave_direction(dwc->direction) ? 0 : DWC_CFGH_FIFO_MODE; u32 cfglo = DWC_CFGL_CH_PRIOR(dwc->priority); bool hs_polarity = dwc->dws.hs_polarity;
From: Tian Tao tiantao6@hisilicon.com
[ Upstream commit 13b0d4a9ae0c2d650993c48be797992eaf621332 ]
The memory used to be allocated with devres helpers and released automatically. In rare circumstances, the memory's release could have happened before the DRM device got released, which would have caused memory corruption of some kind. Now we're embedding the data structures in struct hibmc_drm_private. The whole release problem has been resolved, because struct hibmc_drm_private is allocated with drmm_kzalloc and always released with the DRM device.
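A minimal sketch of the lifetime rule this change relies on (simplified and abridged, not taken verbatim from the driver; the example_ names are placeholders): because the private structure is allocated with a DRM-managed helper, the plane and CRTC embedded in it live exactly as long as the DRM device and cannot be freed early.

#include <linux/err.h>
#include <linux/gfp.h>
#include <drm/drm_crtc.h>
#include <drm/drm_device.h>
#include <drm/drm_managed.h>
#include <drm/drm_plane.h>

/* Abridged stand-in for hibmc_drm_private, just to show the embedding. */
struct example_private {
	struct drm_device *dev;
	struct drm_plane primary_plane;	/* embedded, so it shares the device lifetime */
	struct drm_crtc crtc;
};

static struct example_private *example_private_create(struct drm_device *dev)
{
	struct example_private *priv;

	/* drmm_kzalloc() ties the allocation to the DRM device's lifetime. */
	priv = drmm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return ERR_PTR(-ENOMEM);

	priv->dev = dev;
	return priv;
}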
Signed-off-by: Tian Tao tiantao6@hisilicon.com Reviewed-by: Thomas Zimmermann tzimmermann@suse.de Signed-off-by: Thomas Zimmermann tzimmermann@suse.de Link: https://patchwork.freedesktop.org/patch/msgid/1597218179-3938-3-git-send-ema... Signed-off-by: Sasha Levin sashal@kernel.org --- .../gpu/drm/hisilicon/hibmc/hibmc_drm_de.c | 55 +++++-------------- .../gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h | 2 + 2 files changed, 15 insertions(+), 42 deletions(-)
diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c index cc70e836522f0..8758958e16893 100644 --- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c +++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c @@ -160,37 +160,6 @@ static const struct drm_plane_helper_funcs hibmc_plane_helper_funcs = { .atomic_update = hibmc_plane_atomic_update, };
-static struct drm_plane *hibmc_plane_init(struct hibmc_drm_private *priv) -{ - struct drm_device *dev = priv->dev; - struct drm_plane *plane; - int ret = 0; - - plane = devm_kzalloc(dev->dev, sizeof(*plane), GFP_KERNEL); - if (!plane) { - DRM_ERROR("failed to alloc memory when init plane\n"); - return ERR_PTR(-ENOMEM); - } - /* - * plane init - * TODO: Now only support primary plane, overlay planes - * need to do. - */ - ret = drm_universal_plane_init(dev, plane, 1, &hibmc_plane_funcs, - channel_formats1, - ARRAY_SIZE(channel_formats1), - NULL, - DRM_PLANE_TYPE_PRIMARY, - NULL); - if (ret) { - DRM_ERROR("failed to init plane: %d\n", ret); - return ERR_PTR(ret); - } - - drm_plane_helper_add(plane, &hibmc_plane_helper_funcs); - return plane; -} - static void hibmc_crtc_dpms(struct drm_crtc *crtc, int dpms) { struct hibmc_drm_private *priv = crtc->dev->dev_private; @@ -537,22 +506,24 @@ static const struct drm_crtc_helper_funcs hibmc_crtc_helper_funcs = { int hibmc_de_init(struct hibmc_drm_private *priv) { struct drm_device *dev = priv->dev; - struct drm_crtc *crtc; - struct drm_plane *plane; + struct drm_crtc *crtc = &priv->crtc; + struct drm_plane *plane = &priv->primary_plane; int ret;
- plane = hibmc_plane_init(priv); - if (IS_ERR(plane)) { - DRM_ERROR("failed to create plane: %ld\n", PTR_ERR(plane)); - return PTR_ERR(plane); - } + ret = drm_universal_plane_init(dev, plane, 1, &hibmc_plane_funcs, + channel_formats1, + ARRAY_SIZE(channel_formats1), + NULL, + DRM_PLANE_TYPE_PRIMARY, + NULL);
- crtc = devm_kzalloc(dev->dev, sizeof(*crtc), GFP_KERNEL); - if (!crtc) { - DRM_ERROR("failed to alloc memory when init crtc\n"); - return -ENOMEM; + if (ret) { + DRM_ERROR("failed to init plane: %d\n", ret); + return ret; }
+ drm_plane_helper_add(plane, &hibmc_plane_helper_funcs); + ret = drm_crtc_init_with_planes(dev, crtc, plane, NULL, &hibmc_crtc_funcs, NULL); if (ret) { diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h index 609768748de65..0a74ba220cac5 100644 --- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h +++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.h @@ -29,6 +29,8 @@ struct hibmc_drm_private {
/* drm */ struct drm_device *dev; + struct drm_plane primary_plane; + struct drm_crtc crtc; struct drm_encoder encoder; struct drm_connector connector; bool mode_config_initialized;
From: Alvin Lee alvin.lee2@amd.com
[ Upstream commit 81b437f57e35a6caa3a4304e6fff0eba0a9f3266 ]
[Why] When changing pixel formats for HDR (e.g. ARGB -> FP16) there are configurations that change from 2 pipes to 1 pipe. In these cases, it seems that disconnecting the MPCC and doing a surface update at the same time (after unlocking) causes some registers to be updated slightly faster than others after unlocking (e.g. if the pixel format is updated to FP16 before the new surface address is programmed, we get corruption on the screen because the pixel formats don't match). We separate disconnecting the MPCC from the rest of the pipe programming sequence to prevent this.
[How] Move the MPCC disconnect into a separate operation from the rest of the pipe programming.
Signed-off-by: Alvin Lee alvin.lee2@amd.com Reviewed-by: Aric Cyr Aric.Cyr@amd.com Acked-by: Qingqing Zhuo qingqing.zhuo@amd.com Signed-off-by: Alex Deucher alexander.deucher@amd.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/amd/display/dc/core/dc.c | 10 ++ .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 146 ++++++++++++++++++ .../amd/display/dc/dcn10/dcn10_hw_sequencer.h | 6 + .../gpu/drm/amd/display/dc/dcn10/dcn10_init.c | 2 + .../gpu/drm/amd/display/dc/dcn20/dcn20_init.c | 2 + .../gpu/drm/amd/display/dc/dcn21/dcn21_init.c | 2 + .../gpu/drm/amd/display/dc/dcn30/dcn30_init.c | 2 + .../gpu/drm/amd/display/dc/inc/hw_sequencer.h | 4 + 8 files changed, 174 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c index 22cbfda6ab338..95ec8ae5a7739 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc.c @@ -2295,6 +2295,7 @@ static void commit_planes_for_stream(struct dc *dc, enum surface_update_type update_type, struct dc_state *context) { + bool mpcc_disconnected = false; int i, j; struct pipe_ctx *top_pipe_to_program = NULL;
@@ -2325,6 +2326,15 @@ static void commit_planes_for_stream(struct dc *dc, context_clock_trace(dc, context); }
+ if (update_type != UPDATE_TYPE_FAST && dc->hwss.interdependent_update_lock && + dc->hwss.disconnect_pipes && dc->hwss.wait_for_pending_cleared){ + dc->hwss.interdependent_update_lock(dc, context, true); + mpcc_disconnected = dc->hwss.disconnect_pipes(dc, context); + dc->hwss.interdependent_update_lock(dc, context, false); + if (mpcc_disconnected) + dc->hwss.wait_for_pending_cleared(dc, context); + } + for (j = 0; j < dc->res_pool->pipe_count; j++) { struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[j];
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c index fa643ec5a8760..4bbfd8a26a606 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c @@ -2769,6 +2769,152 @@ static struct pipe_ctx *dcn10_find_top_pipe_for_stream( return NULL; }
+bool dcn10_disconnect_pipes( + struct dc *dc, + struct dc_state *context) +{ + bool found_stream = false; + int i, j; + struct dce_hwseq *hws = dc->hwseq; + struct dc_state *old_ctx = dc->current_state; + bool mpcc_disconnected = false; + struct pipe_ctx *old_pipe; + struct pipe_ctx *new_pipe; + DC_LOGGER_INIT(dc->ctx->logger); + + /* Set pipe update flags and lock pipes */ + for (i = 0; i < dc->res_pool->pipe_count; i++) { + old_pipe = &dc->current_state->res_ctx.pipe_ctx[i]; + new_pipe = &context->res_ctx.pipe_ctx[i]; + new_pipe->update_flags.raw = 0; + + if (!old_pipe->plane_state && !new_pipe->plane_state) + continue; + + if (old_pipe->plane_state && !new_pipe->plane_state) + new_pipe->update_flags.bits.disable = 1; + + /* Check for scl update */ + if (memcmp(&old_pipe->plane_res.scl_data, &new_pipe->plane_res.scl_data, sizeof(struct scaler_data))) + new_pipe->update_flags.bits.scaler = 1; + + /* Check for vp update */ + if (memcmp(&old_pipe->plane_res.scl_data.viewport, &new_pipe->plane_res.scl_data.viewport, sizeof(struct rect)) + || memcmp(&old_pipe->plane_res.scl_data.viewport_c, + &new_pipe->plane_res.scl_data.viewport_c, sizeof(struct rect))) + new_pipe->update_flags.bits.viewport = 1; + + } + + if (!IS_DIAG_DC(dc->ctx->dce_environment)) { + /* Disconnect mpcc here only if losing pipe split*/ + for (i = 0; i < dc->res_pool->pipe_count; i++) { + if (context->res_ctx.pipe_ctx[i].update_flags.bits.disable && + old_ctx->res_ctx.pipe_ctx[i].top_pipe) { + + /* Find the top pipe in the new ctx for the bottom pipe that we + * want to remove by comparing the streams. If both pipes are being + * disabled then do it in the regular pipe programming sequence + */ + for (j = 0; j < dc->res_pool->pipe_count; j++) { + if (old_ctx->res_ctx.pipe_ctx[i].top_pipe->stream == context->res_ctx.pipe_ctx[j].stream && + !context->res_ctx.pipe_ctx[j].top_pipe && + !context->res_ctx.pipe_ctx[j].update_flags.bits.disable) { + found_stream = true; + break; + } + } + + // Disconnect if the top pipe lost it's pipe split + if (found_stream && !context->res_ctx.pipe_ctx[j].bottom_pipe) { + hws->funcs.plane_atomic_disconnect(dc, &dc->current_state->res_ctx.pipe_ctx[i]); + DC_LOG_DC("Reset mpcc for pipe %d\n", dc->current_state->res_ctx.pipe_ctx[i].pipe_idx); + mpcc_disconnected = true; + } + } + found_stream = false; + } + } + + if (mpcc_disconnected) { + for (i = 0; i < dc->res_pool->pipe_count; i++) { + struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i]; + struct pipe_ctx *old_pipe = &dc->current_state->res_ctx.pipe_ctx[i]; + struct dc_plane_state *plane_state = pipe_ctx->plane_state; + struct hubp *hubp = pipe_ctx->plane_res.hubp; + + if (!pipe_ctx || !plane_state || !pipe_ctx->stream) + continue; + + // Only update scaler and viewport here if we lose a pipe split. + // This is to prevent half the screen from being black when we + // unlock after disconnecting MPCC. 
+ if (!(old_pipe && !pipe_ctx->top_pipe && + !pipe_ctx->bottom_pipe && old_pipe->bottom_pipe)) + continue; + + if (pipe_ctx->update_flags.raw || pipe_ctx->plane_state->update_flags.raw || pipe_ctx->stream->update_flags.raw) { + if (pipe_ctx->update_flags.bits.scaler || + plane_state->update_flags.bits.scaling_change || + plane_state->update_flags.bits.position_change || + plane_state->update_flags.bits.per_pixel_alpha_change || + pipe_ctx->stream->update_flags.bits.scaling) { + + pipe_ctx->plane_res.scl_data.lb_params.alpha_en = pipe_ctx->plane_state->per_pixel_alpha; + ASSERT(pipe_ctx->plane_res.scl_data.lb_params.depth == LB_PIXEL_DEPTH_30BPP); + /* scaler configuration */ + pipe_ctx->plane_res.dpp->funcs->dpp_set_scaler( + pipe_ctx->plane_res.dpp, &pipe_ctx->plane_res.scl_data); + } + + if (pipe_ctx->update_flags.bits.viewport || + (context == dc->current_state && plane_state->update_flags.bits.position_change) || + (context == dc->current_state && plane_state->update_flags.bits.scaling_change) || + (context == dc->current_state && pipe_ctx->stream->update_flags.bits.scaling)) { + + hubp->funcs->mem_program_viewport( + hubp, + &pipe_ctx->plane_res.scl_data.viewport, + &pipe_ctx->plane_res.scl_data.viewport_c); + } + } + } + } + return mpcc_disconnected; +} + +void dcn10_wait_for_pending_cleared(struct dc *dc, + struct dc_state *context) +{ + struct pipe_ctx *pipe_ctx; + struct timing_generator *tg; + int i; + + for (i = 0; i < dc->res_pool->pipe_count; i++) { + pipe_ctx = &context->res_ctx.pipe_ctx[i]; + tg = pipe_ctx->stream_res.tg; + + /* + * Only wait for top pipe's tg penindg bit + * Also skip if pipe is disabled. + */ + if (pipe_ctx->top_pipe || + !pipe_ctx->stream || !pipe_ctx->plane_state || + !tg->funcs->is_tg_enabled(tg)) + continue; + + /* + * Wait for VBLANK then VACTIVE to ensure we get VUPDATE. + * For some reason waiting for OTG_UPDATE_PENDING cleared + * seems to not trigger the update right away, and if we + * lock again before VUPDATE then we don't get a separated + * operation. + */ + pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VBLANK); + pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VACTIVE); + } +} + void dcn10_apply_ctx_for_surface( struct dc *dc, const struct dc_stream_state *stream, diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h index 6d891166da8a4..e5691e4990231 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h @@ -194,6 +194,12 @@ void dcn10_get_surface_visual_confirm_color( void dcn10_get_hdr_visual_confirm_color( struct pipe_ctx *pipe_ctx, struct tg_color *color); +bool dcn10_disconnect_pipes( + struct dc *dc, + struct dc_state *context); + +void dcn10_wait_for_pending_cleared(struct dc *dc, + struct dc_state *context); void dcn10_set_hdr_multiplier(struct pipe_ctx *pipe_ctx); void dcn10_verify_allow_pstate_change_high(struct dc *dc);
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c index 5c98b71c1d47a..a1d1559bb5d73 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c +++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c @@ -34,6 +34,8 @@ static const struct hw_sequencer_funcs dcn10_funcs = { .apply_ctx_to_hw = dce110_apply_ctx_to_hw, .apply_ctx_for_surface = dcn10_apply_ctx_for_surface, .post_unlock_program_front_end = dcn10_post_unlock_program_front_end, + .disconnect_pipes = dcn10_disconnect_pipes, + .wait_for_pending_cleared = dcn10_wait_for_pending_cleared, .update_plane_addr = dcn10_update_plane_addr, .update_dchub = dcn10_update_dchub, .update_pending_status = dcn10_update_pending_status, diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c index 3dde6f26de474..966e1790b9bfd 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c +++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c @@ -34,6 +34,8 @@ static const struct hw_sequencer_funcs dcn20_funcs = { .apply_ctx_to_hw = dce110_apply_ctx_to_hw, .apply_ctx_for_surface = NULL, .program_front_end_for_ctx = dcn20_program_front_end_for_ctx, + .disconnect_pipes = dcn10_disconnect_pipes, + .wait_for_pending_cleared = dcn10_wait_for_pending_cleared, .post_unlock_program_front_end = dcn20_post_unlock_program_front_end, .update_plane_addr = dcn20_update_plane_addr, .update_dchub = dcn10_update_dchub, diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c index b187f71afa652..2ba880c3943c3 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c +++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c @@ -35,6 +35,8 @@ static const struct hw_sequencer_funcs dcn21_funcs = { .apply_ctx_to_hw = dce110_apply_ctx_to_hw, .apply_ctx_for_surface = NULL, .program_front_end_for_ctx = dcn20_program_front_end_for_ctx, + .disconnect_pipes = dcn10_disconnect_pipes, + .wait_for_pending_cleared = dcn10_wait_for_pending_cleared, .post_unlock_program_front_end = dcn20_post_unlock_program_front_end, .update_plane_addr = dcn20_update_plane_addr, .update_dchub = dcn10_update_dchub, diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_init.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_init.c index 9afee71604902..19daa456e3bfe 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_init.c +++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_init.c @@ -35,6 +35,8 @@ static const struct hw_sequencer_funcs dcn30_funcs = { .apply_ctx_to_hw = dce110_apply_ctx_to_hw, .apply_ctx_for_surface = NULL, .program_front_end_for_ctx = dcn20_program_front_end_for_ctx, + .disconnect_pipes = dcn10_disconnect_pipes, + .wait_for_pending_cleared = dcn10_wait_for_pending_cleared, .post_unlock_program_front_end = dcn20_post_unlock_program_front_end, .update_plane_addr = dcn20_update_plane_addr, .update_dchub = dcn10_update_dchub, diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h index 3c986717dcd56..64c1be818b0e8 100644 --- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h +++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h @@ -67,6 +67,10 @@ struct hw_sequencer_funcs { int num_planes, struct dc_state *context); void (*program_front_end_for_ctx)(struct dc *dc, struct dc_state *context); + bool (*disconnect_pipes)(struct dc *dc, + struct dc_state *context); + void (*wait_for_pending_cleared)(struct dc *dc, + struct dc_state *context); 
void (*post_unlock_program_front_end)(struct dc *dc, struct dc_state *context); void (*update_plane_addr)(const struct dc *dc,
From: Navid Emamdoost navid.emamdoost@gmail.com
[ Upstream commit 9df0e0c1889677175037445d5ad1654d54176369 ]
In panfrost_perfcnt_enable_locked(), pm_runtime_get_sync() is called, which increments the usage counter even in case of failure, leading to an incorrect ref count. In case of failure, decrement the ref count before returning.
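The general pattern the fix follows, sketched below with made-up helper names: pm_runtime_get_sync() increments the usage counter even when it returns an error, so every exit path taken after the call must drop that reference.

#include <linux/device.h>
#include <linux/pm_runtime.h>

/* Hypothetical stand-in for the work done while the device is powered up. */
static int example_do_work(struct device *dev)
{
	return 0;
}

static int example_enable(struct device *dev)
{
	int ret;

	/* pm_runtime_get_sync() bumps the usage count even when it fails... */
	ret = pm_runtime_get_sync(dev);
	if (ret < 0)
		goto err_put_pm;

	ret = example_do_work(dev);
	if (ret)
		goto err_put_pm;

	return 0;

err_put_pm:
	/* ...so every error path taken after the call has to drop that reference. */
	pm_runtime_put(dev);
	return ret;
}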
Acked-by: Alyssa Rosenzweig alyssa.rosenzweig@collabora.com Signed-off-by: Navid Emamdoost navid.emamdoost@gmail.com Signed-off-by: Rob Herring robh@kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20200614063619.44944-1-navid.e... Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c index ec4695cf3caf3..fdbc8d9491356 100644 --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c @@ -83,11 +83,13 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
ret = pm_runtime_get_sync(pfdev->dev); if (ret < 0) - return ret; + goto err_put_pm;
bo = drm_gem_shmem_create(pfdev->ddev, perfcnt->bosize); - if (IS_ERR(bo)) - return PTR_ERR(bo); + if (IS_ERR(bo)) { + ret = PTR_ERR(bo); + goto err_put_pm; + }
/* Map the perfcnt buf in the address space attached to file_priv. */ ret = panfrost_gem_open(&bo->base, file_priv); @@ -168,6 +170,8 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, panfrost_gem_close(&bo->base, file_priv); err_put_bo: drm_gem_object_put(&bo->base); +err_put_pm: + pm_runtime_put(pfdev->dev); return ret; }
From: Zekun Shen bruceshenzk@gmail.com
[ Upstream commit bad60b8d1a7194df38fd7fe4b22f3f4dcf775099 ]
The idx in the __ath10k_htt_rx_ring_fill_n() function lives in a consistent DMA region writable by the device. A malfunctioning or malicious device could manipulate such an idx to cause an OOB write, either via htt->rx_ring.netbufs_ring[idx] = skb; or via ath10k_htt_set_paddrs_ring(htt, paddr, idx);
The idx can also be negative as it's signed, giving a large memory space to write to.
This is possibly exploitable by corrupting a legitimate pointer with an skb pointer and then filling the skb with a payload that acts as a rogue object.
Part of the log is shown below. Sometimes it appears as a UAF when the write happens to land in freed memory.
[ 15.594376] BUG: unable to handle page fault for address: ffff887f5c1804f0 [ 15.595483] #PF: supervisor write access in kernel mode [ 15.596250] #PF: error_code(0x0002) - not-present page [ 15.597013] PGD 0 P4D 0 [ 15.597395] Oops: 0002 [#1] SMP KASAN PTI [ 15.597967] CPU: 0 PID: 82 Comm: kworker/u2:2 Not tainted 5.6.0 #69 [ 15.598843] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014 [ 15.600438] Workqueue: ath10k_wq ath10k_core_register_work [ath10k_core] [ 15.601389] RIP: 0010:__ath10k_htt_rx_ring_fill_n (linux/drivers/net/wireless/ath/ath10k/htt_rx.c:173) ath10k_core
Signed-off-by: Zekun Shen bruceshenzk@gmail.com Signed-off-by: Kalle Valo kvalo@codeaurora.org Link: https://lore.kernel.org/r/20200623221105.3486-1-bruceshenzk@gmail.com Signed-off-by: Sasha Levin sashal@kernel.org --- drivers/net/wireless/ath/ath10k/htt_rx.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c index d787cbead56ab..215ade6faf328 100644 --- a/drivers/net/wireless/ath/ath10k/htt_rx.c +++ b/drivers/net/wireless/ath/ath10k/htt_rx.c @@ -142,6 +142,14 @@ static int __ath10k_htt_rx_ring_fill_n(struct ath10k_htt *htt, int num) BUILD_BUG_ON(HTT_RX_RING_FILL_LEVEL >= HTT_RX_RING_SIZE / 2);
idx = __le32_to_cpu(*htt->rx_ring.alloc_idx.vaddr); + + if (idx < 0 || idx >= htt->rx_ring.size) { + ath10k_err(htt->ar, "rx ring index is not valid, firmware malfunctioning?\n"); + idx &= htt->rx_ring.size_mask; + ret = -ENOMEM; + goto fail; + } + while (num > 0) { skb = dev_alloc_skb(HTT_RX_BUF_SIZE + HTT_RX_DESC_ALIGN); if (!skb) {