`ni6527_intr_insn_config()` processes `INSN_CONFIG` comedi instructions
for the "interrupt" subdevice. When `data[0]` is
`INSN_CONFIG_DIGITAL_TRIG` it is configuring the digital trigger. When
`data[2]` is `COMEDI_DIGITAL_TRIG_ENABLE_EDGES` it is configuring rising
and falling edge detection for the digital trigger, using a base channel
number (or shift amount) in `data[3]`, a rising edge bitmask in
`data[4]` and falling edge bitmask in `data[5]`.
If the base channel number (shift amount) is greater than or equal to
the number of channels (24) of the digital input subdevice, there are no
changes to the rising and falling edges, so the mask of channels to be
changed can be set to 0. Otherwise, the mask of channels to be changed
and the rising and falling edge bitmasks are shifted left by the base
channel number before `ni6527_set_edge_detection()` is called to update
the appropriate registers. Unfortunately, the code compares the base
channel (shift amount) to the interrupt subdevice's number of channels
(1) instead of the digital input subdevice's number of channels (24).
Fix it by comparing to 32 because all shift amounts for an `unsigned
int` must be less than that and everything from bit 24 upwards is
ignored by `ni6527_set_edge_detection()` anyway.
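For reference, here is a minimal sketch of the edge-detection handling
with the fix applied (illustrative only, not the driver source verbatim;
the 0xffffffff channel mask and the exact variable names are
assumptions):

        shift = data[3];
        if (shift >= 32) {
                /* Shifting an unsigned int by >= 32 is undefined, so treat
                 * out-of-range base channels as "change nothing".
                 */
                mask = 0;
                rising = 0;
                falling = 0;
        } else {
                mask = 0xffffffffu << shift;
                rising = data[4] << shift;
                falling = data[5] << shift;
        }
        /* Bits 24 and above are dropped when the registers are written. */
        ni6527_set_edge_detection(dev, mask, rising, falling);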
Fixes: 110f9e687c1a8 ("staging: comedi: ni_6527: support INSN_CONFIG_DIGITAL_TRIG")
Cc: <stable(a)vger.kernel.org> # 3.17+
Signed-off-by: Ian Abbott <abbotti(a)mev.co.uk>
---
drivers/staging/comedi/drivers/ni_6527.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/comedi/drivers/ni_6527.c b/drivers/staging/comedi/drivers/ni_6527.c
index 4d1eccb5041d..4518c2680b7c 100644
--- a/drivers/staging/comedi/drivers/ni_6527.c
+++ b/drivers/staging/comedi/drivers/ni_6527.c
@@ -332,7 +332,7 @@ static int ni6527_intr_insn_config(struct comedi_device *dev,
case COMEDI_DIGITAL_TRIG_ENABLE_EDGES:
/* check shift amount */
shift = data[3];
- if (shift >= s->n_chan) {
+ if (shift >= 32) {
mask = 0;
rising = 0;
falling = 0;
--
2.27.0
task_h_load() can return 0 in some situations, like running stress-ng
mmapfork, which forks thousands of threads, in a sched group on a
224-core system. The load balancer doesn't handle this correctly:
env->imbalance never decreases, so it stops pulling tasks only after
reaching loop_max, which can be equal to the number of running tasks of
the cfs runqueue. Make sure that the imbalance is decreased by at least 1.
Misfit task handling is the other feature that doesn't handle such a
situation correctly, although the problem is probably harder to hit there
because of the smaller number of CPUs and running tasks on heterogeneous
systems.
We can't simply ensure that task_h_load() returns at least one, because
that would imply handling underflow in other places.
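As a condensed view of the migrate_load case in detach_tasks() with this
fix applied (names from kernel/sched/fair.c, surrounding checks omitted,
purely illustrative):

        /* task_h_load() can legitimately be 0 for a task deep in the
         * cgroup hierarchy on a large machine; clamping it to 1 makes
         * env->imbalance shrink for every detached task, so the loop
         * converges before hitting loop_max.
         */
        load = max_t(unsigned long, task_h_load(p), 1);
        /* ... LB_MIN and over-imbalance checks omitted ... */
        env->imbalance -= load;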
Signed-off-by: Vincent Guittot <vincent.guittot(a)linaro.org>
Reviewed-by: Valentin Schneider <valentin.schneider(a)arm.com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann(a)arm.com>
Tested-by: Dietmar Eggemann <dietmar.eggemann(a)arm.com>
Cc: <stable(a)vger.kernel.org> # v4.4+
---
Changes v3:
- Fix warning about cast reported by lkp(a)intel.com
kernel/sched/fair.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b9b9f19e80c1..71a372e3707a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4049,7 +4049,11 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
return;
}
- rq->misfit_task_load = task_h_load(p);
+ /*
+ * Make sure that misfit_task_load will not be null even if
+ * task_h_load() returns 0.
+ */
+ rq->misfit_task_load = max_t(unsigned long, task_h_load(p), 1);
}
#else /* CONFIG_SMP */
@@ -7648,7 +7652,14 @@ static int detach_tasks(struct lb_env *env)
switch (env->migration_type) {
case migrate_load:
- load = task_h_load(p);
+ /*
+ * Depending of the number of CPUs and tasks and the
+ * cgroup hierarchy, task_h_load() can return a null
+ * value. Make sure that env->imbalance decreases
+ * otherwise detach_tasks() will stop only after
+ * detaching up to loop_max tasks.
+ */
+ load = max_t(unsigned long, task_h_load(p), 1);
if (sched_feat(LB_MIN) &&
load < 16 && !env->sd->nr_balance_failed)
--
2.17.1
This is a note to let you know that I've just added the patch titled
staging: rtl8712: handle firmware load failure
to my staging git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
in the staging-testing branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will be merged to the staging-next branch sometime soon,
after it passes testing, and the merge window is open.
If you have any questions about this process, please let me know.
From b4383c971bc5263efe2b0915ba67ebf2bf3f1ee5 Mon Sep 17 00:00:00 2001
From: Rustam Kovhaev <rkovhaev(a)gmail.com>
Date: Thu, 16 Jul 2020 08:13:26 -0700
Subject: staging: rtl8712: handle firmware load failure
When the firmware fails to load, we should not call unregister_netdev().
This patch fixes a race condition between rtl871x_load_fw_cb() and
r871xu_dev_remove(), and fixes the bug reported by syzbot.
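The core of the change is in the firmware callback: the completion is
only signalled once the adapter state has settled, so the remove path
cannot observe a half-initialised device. Condensed from the diff below
(simplified, some cleanup omitted):

        if (!firmware) {
                dev_err(&udev->dev, "r8712u: Firmware request failed\n");
                usb_set_intfdata(usb_intf, NULL);      /* marks the failure */
                complete(&adapter->rtl8712_fw_ready);  /* now after cleanup */
                return;
        }
        adapter->fw = firmware;
        register_netdev(adapter->pnetdev);
        complete(&adapter->rtl8712_fw_ready);          /* now after register */

r871xu_dev_remove() then waits for the completion and treats a NULL
intfdata as "firmware load failed, netdev never registered", skipping
unregister_netdev() in that case.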
Reported-by: syzbot+80899a8a8efe8968cde7(a)syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?extid=80899a8a8efe8968cde7
Signed-off-by: Rustam Kovhaev <rkovhaev(a)gmail.com>
Cc: stable <stable(a)vger.kernel.org>
Link: https://lore.kernel.org/r/20200716151324.1036204-1-rkovhaev@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/staging/rtl8712/hal_init.c | 3 ++-
drivers/staging/rtl8712/usb_intf.c | 11 ++++++++---
2 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/staging/rtl8712/hal_init.c b/drivers/staging/rtl8712/hal_init.c
index ed51023b85a0..715f1fe8b472 100644
--- a/drivers/staging/rtl8712/hal_init.c
+++ b/drivers/staging/rtl8712/hal_init.c
@@ -33,7 +33,6 @@ static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context)
{
struct _adapter *adapter = context;
- complete(&adapter->rtl8712_fw_ready);
if (!firmware) {
struct usb_device *udev = adapter->dvobjpriv.pusbdev;
struct usb_interface *usb_intf = adapter->pusb_intf;
@@ -41,11 +40,13 @@ static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context)
dev_err(&udev->dev, "r8712u: Firmware request failed\n");
usb_put_dev(udev);
usb_set_intfdata(usb_intf, NULL);
+ complete(&adapter->rtl8712_fw_ready);
return;
}
adapter->fw = firmware;
/* firmware available - start netdev */
register_netdev(adapter->pnetdev);
+ complete(&adapter->rtl8712_fw_ready);
}
static const char firmware_file[] = "rtlwifi/rtl8712u.bin";
diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
index a87562f632a7..2fcd65260f4c 100644
--- a/drivers/staging/rtl8712/usb_intf.c
+++ b/drivers/staging/rtl8712/usb_intf.c
@@ -595,13 +595,17 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
if (pnetdev) {
struct _adapter *padapter = netdev_priv(pnetdev);
- usb_set_intfdata(pusb_intf, NULL);
- release_firmware(padapter->fw);
/* never exit with a firmware callback pending */
wait_for_completion(&padapter->rtl8712_fw_ready);
+ pnetdev = usb_get_intfdata(pusb_intf);
+ usb_set_intfdata(pusb_intf, NULL);
+ if (!pnetdev)
+ goto firmware_load_fail;
+ release_firmware(padapter->fw);
if (drvpriv.drv_registered)
padapter->surprise_removed = true;
- unregister_netdev(pnetdev); /* will call netdev_close() */
+ if (pnetdev->reg_state != NETREG_UNINITIALIZED)
+ unregister_netdev(pnetdev); /* will call netdev_close() */
flush_scheduled_work();
udelay(1);
/* Stop driver mlme relation timer */
@@ -614,6 +618,7 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
*/
usb_put_dev(udev);
}
+firmware_load_fail:
/* If we didn't unplug usb dongle and remove/insert module, driver
* fails on sitesurvey for the first time when device is up.
* Reset usb port for sitesurvey fail issue.
--
2.27.0
This is a backport of the series that recently went into 5.8. Note
that the first patch is more of a complete rewrite than a backport, as
the vdso implementation in 5.4 doesn't have much in common with
mainline. This affects the 32bit arch code in a benign way.
It has seen very little testing, as I don't have the HW that triggers
this issue. I have run it in VMs by faking the CPU MIDR, and nothing
caught fire. Famous last words.
The original cover letter follows.
M.
The relatively recent introduction of the compat vdso on arm64 has
overlooked its interactions with some of the interesting errata
workarounds, such as ARM64_ERRATUM_1418040 (and its older 1188873
incarnation).
This erratum requires the 64bit kernel to trap 32bit accesses to the
virtual counter and emulate it. When the workaround was introduced,
the compat vdso simply wasn't a thing. Now that the patches have
landed in mainline, we trap the CNTVCT accesses from the vdso.
This can end up in a nasty loop in the vdso, where the sequence number
changes on each trap, never stabilising, leaving userspace in a bit of a
funny state (which is why we disable the vdso in most similar cases).
The erratum mentioned above is a bit special in the sense that it only
requires trapping AArch32 accesses, and 64bit tasks can be left alone.
Consequently, the vdso is never disabled and AArch32 tasks are affected.
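For illustration, the vdso clock read is built around a seqcount-style
retry loop roughly like the sketch below (loosely following the generic
lib/vdso code; field names and the scaling arithmetic are approximate,
not the exact 5.4 sources). If the counter read traps and the emulation
ends up changing the sequence count, the retry never succeeds:

        do {
                seq = vdso_read_begin(vd);              /* snapshot seqcount */
                cycles = __arch_get_hw_counter(vd->clock_mode); /* CNTVCT: traps */
                ns = vd->nsec + (cycles - vd->cycle_last) * vd->mult;
        } while (unlikely(vdso_read_retry(vd, seq)));   /* spins forever if
                                                         * seq keeps moving */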
Obviously, we really want to retain the 64bit vdso in this case. To
that effect, this series offers a way to disable the 32bit view of the
vdso without impacting its 64bit counterpart, by providing a
"no-compat" vdso clock_mode, and plugging this feature into the
1418040 detection code.
Marc Zyngier (3):
arm64: Introduce a way to disable the 32bit vdso
arm64: arch_timer: Allow an workaround descriptor to disable compat
vdso
arm64: arch_timer: Disable the compat vdso for cores affected by
ARM64_WORKAROUND_1418040
arch/arm/include/asm/clocksource.h | 11 ++++++++++-
arch/arm/kernel/vdso.c | 2 +-
arch/arm64/include/asm/arch_timer.h | 1 +
arch/arm64/include/asm/clocksource.h | 5 ++++-
arch/arm64/include/asm/vdso/clocksource.h | 14 ++++++++++++++
.../include/asm/vdso/compat_gettimeofday.h | 5 +++--
arch/arm64/include/asm/vdso/gettimeofday.h | 6 ++++--
arch/arm64/include/asm/vdso/vsyscall.h | 4 +---
drivers/clocksource/arm_arch_timer.c | 19 +++++++++++++++----
9 files changed, 53 insertions(+), 14 deletions(-)
create mode 100644 arch/arm64/include/asm/vdso/clocksource.h
--
2.27.0
From: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Currently, excluding the CMA area from page allocation is implemented by
using current_gfp_context(). However, there are two problems with this
implementation.
First, it doesn't work for the allocation fastpath. In the fastpath, the
original gfp_mask is used, since current_gfp_context() was introduced to
control reclaim and is only applied on the slowpath.
Second, clearing __GFP_MOVABLE has the side effect of excluding
ZONE_MOVABLE memory as an allocation target.
To fix these problems, this patch changes how the CMA area is excluded
from page allocation. The main point of this change is to use
alloc_flags: alloc_flags is mainly used to control allocation, so it is
a good fit for excluding the CMA area from allocation.
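As a sketch of where the decision now lives (condensed from the diff
below, most code omitted): ALLOC_CMA is derived once per allocation
attempt, covering both the fast and slow paths, from the caller's
gfp_mask and the task's PF_MEMALLOC_NOCMA flag, and __rmqueue() keys off
that flag instead of the migratetype:

        current_alloc_flags(gfp_mask, &alloc_flags);    /* in get_page_from_freelist() */

        if (alloc_flags & ALLOC_CMA)                    /* in __rmqueue() */
                page = __rmqueue_cma_fallback(zone, order);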
Fixes: d7fefcc (mm/cma: add PF flag to force non cma alloc)
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
---
include/linux/sched/mm.h | 4 ----
mm/page_alloc.c | 27 +++++++++++++++------------
2 files changed, 15 insertions(+), 16 deletions(-)
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 44ad5b7..a73847a 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -191,10 +191,6 @@ static inline gfp_t current_gfp_context(gfp_t flags)
flags &= ~(__GFP_IO | __GFP_FS);
else if (pflags & PF_MEMALLOC_NOFS)
flags &= ~__GFP_FS;
-#ifdef CONFIG_CMA
- if (pflags & PF_MEMALLOC_NOCMA)
- flags &= ~__GFP_MOVABLE;
-#endif
}
return flags;
}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6416d08..cd53894 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2791,7 +2791,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
* allocating from CMA when over half of the zone's free memory
* is in the CMA area.
*/
- if (migratetype == MIGRATE_MOVABLE &&
+ if (alloc_flags & ALLOC_CMA &&
zone_page_state(zone, NR_FREE_CMA_PAGES) >
zone_page_state(zone, NR_FREE_PAGES) / 2) {
page = __rmqueue_cma_fallback(zone, order);
@@ -2802,7 +2802,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
retry:
page = __rmqueue_smallest(zone, order, migratetype);
if (unlikely(!page)) {
- if (migratetype == MIGRATE_MOVABLE)
+ if (alloc_flags & ALLOC_CMA)
page = __rmqueue_cma_fallback(zone, order);
if (!page && __rmqueue_fallback(zone, order, migratetype,
@@ -3502,11 +3502,9 @@ static inline long __zone_watermark_unusable_free(struct zone *z,
if (likely(!alloc_harder))
unusable_free += z->nr_reserved_highatomic;
-#ifdef CONFIG_CMA
/* If allocation can't use CMA areas don't use free CMA pages */
- if (!(alloc_flags & ALLOC_CMA))
+ if (IS_ENABLED(CONFIG_CMA) && !(alloc_flags & ALLOC_CMA))
unusable_free += zone_page_state(z, NR_FREE_CMA_PAGES);
-#endif
return unusable_free;
}
@@ -3693,6 +3691,16 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
return alloc_flags;
}
+static inline void current_alloc_flags(gfp_t gfp_mask,
+ unsigned int *alloc_flags)
+{
+ unsigned int pflags = READ_ONCE(current->flags);
+
+ if (!(pflags & PF_MEMALLOC_NOCMA) &&
+ gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+ *alloc_flags |= ALLOC_CMA;
+}
+
/*
* get_page_from_freelist goes through the zonelist trying to allocate
* a page.
@@ -3706,6 +3714,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
struct pglist_data *last_pgdat_dirty_limit = NULL;
bool no_fallback;
+ current_alloc_flags(gfp_mask, &alloc_flags);
+
retry:
/*
* Scan zonelist, looking for a zone with enough free.
@@ -4339,10 +4349,6 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
} else if (unlikely(rt_task(current)) && !in_interrupt())
alloc_flags |= ALLOC_HARDER;
-#ifdef CONFIG_CMA
- if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
- alloc_flags |= ALLOC_CMA;
-#endif
return alloc_flags;
}
@@ -4808,9 +4814,6 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
if (should_fail_alloc_page(gfp_mask, order))
return false;
- if (IS_ENABLED(CONFIG_CMA) && ac->migratetype == MIGRATE_MOVABLE)
- *alloc_flags |= ALLOC_CMA;
-
return true;
}
--
2.7.4