The patch titled
Subject: kthread_worker: prevent queuing delayed work from timer_fn when it is being canceled
has been added to the -mm tree. Its filename is
kthread_worker-prevent-queuing-delayed-work-from-timer_fn-when-it-is-being-canceled.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/kthread_worker-prevent-queuing-de…
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/kthread_worker-prevent-queuing-de…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Zqiang <qiang.zhang(a)windriver.com>
Subject: kthread_worker: prevent queuing delayed work from timer_fn when it is being canceled
There is a small race window when a delayed work is being canceled and the
work still might be queued from the timer_fn:
CPU0                                            CPU1
kthread_cancel_delayed_work_sync()
   __kthread_cancel_work_sync()
      __kthread_cancel_work()
         work->canceling++;
                                                kthread_delayed_work_timer_fn()
                                                   kthread_insert_work();
BUG: kthread_insert_work() should not get called when work->canceling is
set.
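As an aside, a minimal sketch of a kthread_worker user that can hit this cancel-vs-timer window is shown below; the module and the demo_* / "demo-worker" names are illustrative only, not part of the patch:
/* Illustrative module: queue a delayed kthread work with a very short
 * delay and cancel it right away, which is exactly the window in which
 * kthread_delayed_work_timer_fn() may still fire on another CPU. */
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/err.h>

static struct kthread_worker *worker;
static struct kthread_delayed_work dwork;

static void demo_work_fn(struct kthread_work *work)
{
	pr_info("delayed work ran\n");
}

static int __init demo_init(void)
{
	worker = kthread_create_worker(0, "demo-worker");
	if (IS_ERR(worker))
		return PTR_ERR(worker);

	kthread_init_delayed_work(&dwork, demo_work_fn);

	/* Queue with a 1-jiffy delay, then cancel immediately: the timer
	 * callback can race with __kthread_cancel_work(). */
	kthread_queue_delayed_work(worker, &dwork, 1);
	kthread_cancel_delayed_work_sync(&dwork);

	return 0;
}

static void __exit demo_exit(void)
{
	kthread_destroy_worker(worker);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");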
Link: https://lkml.kernel.org/r/20201014083030.16895-1-qiang.zhang@windriver.com
Signed-off-by: Zqiang <qiang.zhang(a)windriver.com>
Reviewed-by: Petr Mladek <pmladek(a)suse.com>
Acked-by: Tejun Heo <tj(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
kernel/kthread.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/kernel/kthread.c~kthread_worker-prevent-queuing-delayed-work-from-timer_fn-when-it-is-being-canceled
+++ a/kernel/kthread.c
@@ -897,7 +897,8 @@ void kthread_delayed_work_timer_fn(struc
/* Move the work from worker->delayed_work_list. */
WARN_ON_ONCE(list_empty(&work->node));
list_del_init(&work->node);
- kthread_insert_work(worker, work, &worker->work_list);
+ if (!work->canceling)
+ kthread_insert_work(worker, work, &worker->work_list);
raw_spin_unlock_irqrestore(&worker->lock, flags);
}
_
Patches currently in -mm which might be from qiang.zhang(a)windriver.com are
kthread_worker-prevent-queuing-delayed-work-from-timer_fn-when-it-is-being-canceled.patch
The patch titled
Subject: mm: mempolicy: fix potential pte_unmap_unlock pte error
has been added to the -mm tree. Its filename is
mm-mempolicy-fix-potential-pte_unmap_unlock-pte-error.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/mm-mempolicy-fix-potential-pte_un…
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/mm-mempolicy-fix-potential-pte_un…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Shijie Luo <luoshijie1(a)huawei.com>
Subject: mm: mempolicy: fix potential pte_unmap_unlock pte error
When the flags passed to queue_pages_pte_range() don't have the MPOL_MF_MOVE
or MPOL_MF_MOVE_ALL bits set, the code breaks out of the pte loop, and passing
the original pte - 1 to pte_unmap_unlock() is not a good idea.
queue_pages_pte_range() can run in a mode (MPOL_MF_STRICT without the MOVE
flags) which doesn't migrate misplaced pages but returns -EIO when
encountering such a page. Since commit a7f40cfe3b7a ("mm: mempolicy: make
mbind() return -EIO when MPOL_MF_STRICT is specified"), an early break on the
first pte in the range results in pte_unmap_unlock() on an underflow pte.
This can lead to lockups later on when somebody tries to take the pte lock
(resp. page_table_lock) again.
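As an aside, the early-break path can be reached from userspace with an MPOL_MF_STRICT-only mbind(); the rough reproducer sketch below is illustrative only and assumes a machine with at least two NUMA nodes and the libnuma numaif.h header (link with -lnuma):
/* Hypothetical sketch: fault pages in on node 0, then ask for node 1
 * with MPOL_MF_STRICT only (no MOVE flags), so the kernel reports the
 * misplaced pages with -EIO instead of migrating them, taking the
 * early-break path in queue_pages_pte_range(). */
#include <numaif.h>
#include <sys/mman.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
	size_t len = 4 * 4096;
	unsigned long node0 = 1UL << 0, node1 = 1UL << 1;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Bind to node 0 and fault the pages in there. */
	mbind(p, len, MPOL_BIND, &node0, 8 * sizeof(node0), 0);
	memset(p, 0, len);

	/* MPOL_MF_STRICT without MOVE/MOVE_ALL: expect -EIO here. */
	if (mbind(p, len, MPOL_BIND, &node1, 8 * sizeof(node1),
		  MPOL_MF_STRICT))
		perror("mbind");
	return 0;
}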
Link: https://lkml.kernel.org/r/20201019074853.50856-1-luoshijie1@huawei.com
Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified")
Signed-off-by: Shijie Luo <luoshijie1(a)huawei.com>
Signed-off-by: Miaohe Lin <linmiaohe(a)huawei.com>
Reviewed-by: Oscar Salvador <osalvador(a)suse.de>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Cc: Miaohe Lin <linmiaohe(a)huawei.com>
Cc: Feilong Lin <linfeilong(a)huawei.com>
Cc: Shijie Luo <luoshijie1(a)huawei.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/mempolicy.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/mm/mempolicy.c~mm-mempolicy-fix-potential-pte_unmap_unlock-pte-error
+++ a/mm/mempolicy.c
@@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *
unsigned long flags = qp->flags;
int ret;
bool has_unmovable = false;
- pte_t *pte;
+ pte_t *pte, *mapped_pte;
spinlock_t *ptl;
ptl = pmd_trans_huge_lock(pmd, vma);
@@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *
if (pmd_trans_unstable(pmd))
return 0;
- pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+ mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
for (; addr != end; pte++, addr += PAGE_SIZE) {
if (!pte_present(*pte))
continue;
@@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *
} else
break;
}
- pte_unmap_unlock(pte - 1, ptl);
+ pte_unmap_unlock(mapped_pte, ptl);
cond_resched();
if (has_unmovable)
_
Patches currently in -mm which might be from luoshijie1(a)huawei.com are
mm-mempolicy-fix-potential-pte_unmap_unlock-pte-error.patch
The patch titled
Subject: ptrace: fix task_join_group_stop() for the case when current is traced
has been added to the -mm tree. Its filename is
ptrace-fix-task_join_group_stop-for-the-case-when-current-is-traced.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/ptrace-fix-task_join_group_stop-f…
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/ptrace-fix-task_join_group_stop-f…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Oleg Nesterov <oleg(a)redhat.com>
Subject: ptrace: fix task_join_group_stop() for the case when current is traced
This testcase
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <pthread.h>
#include <assert.h>
void *tf(void *arg)
{
	return NULL;
}

int main(void)
{
	int pid = fork();
	if (!pid) {
		kill(getpid(), SIGSTOP);

		pthread_t th;
		pthread_create(&th, NULL, tf, NULL);

		return 0;
	}

	waitpid(pid, NULL, WSTOPPED);

	ptrace(PTRACE_SEIZE, pid, 0, PTRACE_O_TRACECLONE);
	waitpid(pid, NULL, 0);

	ptrace(PTRACE_CONT, pid, 0, 0);
	waitpid(pid, NULL, 0);

	int status;
	int thread = waitpid(-1, &status, 0);
	assert(thread > 0 && thread != pid);
	assert(status == 0x80137f);

	return 0;
}
fails and triggers WARN_ON_ONCE(!signr) in do_jobctl_trap().
This is because task_join_group_stop() has 2 problems when current is traced:
1. We can't rely on the "JOBCTL_STOP_PENDING" check: a stopped tracee
   can be woken up by the debugger and can then clone another thread
   which should join the group stop.
   We need to check group_stop_count || SIGNAL_STOP_STOPPED.
2. If SIGNAL_STOP_STOPPED is already set, we should not increment
   sig->group_stop_count or add JOBCTL_STOP_CONSUME. The new thread
   should stop without another do_notify_parent_cldstop() report.
To clarify, the problem is very old and we should blame
ptrace_init_task(). But now that we have task_join_group_stop() it makes
more sense to fix this helper to avoid the code duplication.
Link: https://lkml.kernel.org/r/20201019134237.GA18810@redhat.com
Signed-off-by: Oleg Nesterov <oleg(a)redhat.com>
Reported-by: syzbot+3485e3773f7da290eecc(a)syzkaller.appspotmail.com
Cc: Jens Axboe <axboe(a)kernel.dk>
Cc: Christian Brauner <christian(a)brauner.io>
Cc: "Eric W . Biederman" <ebiederm(a)xmission.com>
Cc: Zhiqiang Liu <liuzhiqiang26(a)huawei.com>
Cc: Tejun Heo <tj(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
kernel/signal.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
--- a/kernel/signal.c~ptrace-fix-task_join_group_stop-for-the-case-when-current-is-traced
+++ a/kernel/signal.c
@@ -391,16 +391,17 @@ static bool task_participate_group_stop(
void task_join_group_stop(struct task_struct *task)
{
+ unsigned long mask = current->jobctl & JOBCTL_STOP_SIGMASK;
+ struct signal_struct *sig = current->signal;
+
+ if (sig->group_stop_count) {
+ sig->group_stop_count++;
+ mask |= JOBCTL_STOP_CONSUME;
+ } else if (!(sig->flags & SIGNAL_STOP_STOPPED))
+ return;
+
/* Have the new thread join an on-going signal group stop */
- unsigned long jobctl = current->jobctl;
- if (jobctl & JOBCTL_STOP_PENDING) {
- struct signal_struct *sig = current->signal;
- unsigned long signr = jobctl & JOBCTL_STOP_SIGMASK;
- unsigned long gstop = JOBCTL_STOP_PENDING | JOBCTL_STOP_CONSUME;
- if (task_set_jobctl_pending(task, signr | gstop)) {
- sig->group_stop_count++;
- }
- }
+ task_set_jobctl_pending(task, mask | JOBCTL_STOP_PENDING);
}
/*
_
Patches currently in -mm which might be from oleg(a)redhat.com are
ptrace-fix-task_join_group_stop-for-the-case-when-current-is-traced.patch
aio-simplify-read_events.patch
The patch titled
Subject: compiler.h: fix barrier_data() on clang
has been added to the -mm tree. Its filename is
compilerh-fix-barrier_data-on-clang.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/compilerh-fix-barrier_data-on-cla…
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/compilerh-fix-barrier_data-on-cla…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Arvind Sankar <nivedita(a)alum.mit.edu>
Subject: compiler.h: fix barrier_data() on clang
Commit 815f0ddb346c ("include/linux/compiler*.h: make compiler-*.h
mutually exclusive") neglected to copy barrier_data() from compiler-gcc.h
into compiler-clang.h. The definition in compiler-gcc.h was really to
work around clang's more aggressive optimization, so this broke
barrier_data() on clang, and consequently memzero_explicit() as well.
For example, this results in at least the memzero_explicit() call in
lib/crypto/sha256.c:sha256_transform() being optimized away by clang.
Fix this by moving the definition of barrier_data() into compiler.h.
Also move the gcc/clang definition of barrier() into compiler.h:
__memory_barrier() is icc-specific (and barrier() is already defined using it
in compiler-intel.h), so it doesn't belong in compiler.h.
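For context, barrier_data() is what keeps memzero_explicit() from being elided; below is a simplified sketch of the definition in include/linux/string.h together with a typical caller (the handle_secret() example is illustrative):
/* The "r"(ptr) input operand in barrier_data() tells the compiler that
 * the (empty) asm may read the bytes behind @s, so the memset() cannot
 * be eliminated as a dead store even for a buffer that is never read
 * again. */
static inline void memzero_explicit(void *s, size_t count)
{
	memset(s, 0, count);
	barrier_data(s);
}

/* Typical caller: scrub sensitive on-stack data before returning. */
static void handle_secret(void)
{
	u8 key[32];

	/* ... derive and use key ... */
	memzero_explicit(key, sizeof(key));
}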
Link: https://lkml.kernel.org/r/20201014212631.207844-1-nivedita@alum.mit.edu
Signed-off-by: Arvind Sankar <nivedita(a)alum.mit.edu>
Fixes: 815f0ddb346c ("include/linux/compiler*.h: make compiler-*.h mutually exclusive")
Reviewed-by: Nick Desaulniers <ndesaulniers(a)google.com>
Tested-by: Nick Desaulniers <ndesaulniers(a)google.com>
Reviewed-by: Kees Cook <keescook(a)chromium.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/compiler-clang.h | 6 ------
include/linux/compiler-gcc.h | 19 -------------------
include/linux/compiler.h | 18 ++++++++++++++++--
3 files changed, 16 insertions(+), 27 deletions(-)
--- a/include/linux/compiler-clang.h~compilerh-fix-barrier_data-on-clang
+++ a/include/linux/compiler-clang.h
@@ -60,12 +60,6 @@
#define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
#endif
-/* The following are for compatibility with GCC, from compiler-gcc.h,
- * and may be redefined here because they should not be shared with other
- * compilers, like ICC.
- */
-#define barrier() __asm__ __volatile__("" : : : "memory")
-
#if __has_feature(shadow_call_stack)
# define __noscs __attribute__((__no_sanitize__("shadow-call-stack")))
#endif
--- a/include/linux/compiler-gcc.h~compilerh-fix-barrier_data-on-clang
+++ a/include/linux/compiler-gcc.h
@@ -15,25 +15,6 @@
# error Sorry, your version of GCC is too old - please use 4.9 or newer.
#endif
-/* Optimization barrier */
-
-/* The "volatile" is due to gcc bugs */
-#define barrier() __asm__ __volatile__("": : :"memory")
-/*
- * This version is i.e. to prevent dead stores elimination on @ptr
- * where gcc and llvm may behave differently when otherwise using
- * normal barrier(): while gcc behavior gets along with a normal
- * barrier(), llvm needs an explicit input variable to be assumed
- * clobbered. The issue is as follows: while the inline asm might
- * access any memory it wants, the compiler could have fit all of
- * @ptr into memory registers instead, and since @ptr never escaped
- * from that, it proved that the inline asm wasn't touching any of
- * it. This version works well with both compilers, i.e. we're telling
- * the compiler that the inline asm absolutely may see the contents
- * of @ptr. See also: https://llvm.org/bugs/show_bug.cgi?id=15495
- */
-#define barrier_data(ptr) __asm__ __volatile__("": :"r"(ptr) :"memory")
-
/*
* This macro obfuscates arithmetic on a variable address so that gcc
* shouldn't recognize the original var, and make assumptions about it.
--- a/include/linux/compiler.h~compilerh-fix-barrier_data-on-clang
+++ a/include/linux/compiler.h
@@ -80,11 +80,25 @@ void ftrace_likely_update(struct ftrace_
/* Optimization barrier */
#ifndef barrier
-# define barrier() __memory_barrier()
+/* The "volatile" is due to gcc bugs */
+# define barrier() __asm__ __volatile__("": : :"memory")
#endif
#ifndef barrier_data
-# define barrier_data(ptr) barrier()
+/*
+ * This version is i.e. to prevent dead stores elimination on @ptr
+ * where gcc and llvm may behave differently when otherwise using
+ * normal barrier(): while gcc behavior gets along with a normal
+ * barrier(), llvm needs an explicit input variable to be assumed
+ * clobbered. The issue is as follows: while the inline asm might
+ * access any memory it wants, the compiler could have fit all of
+ * @ptr into memory registers instead, and since @ptr never escaped
+ * from that, it proved that the inline asm wasn't touching any of
+ * it. This version works well with both compilers, i.e. we're telling
+ * the compiler that the inline asm absolutely may see the contents
+ * of @ptr. See also: https://llvm.org/bugs/show_bug.cgi?id=15495
+ */
+# define barrier_data(ptr) __asm__ __volatile__("": :"r"(ptr) :"memory")
#endif
/* workaround for GCC PR82365 if needed */
_
Patches currently in -mm which might be from nivedita(a)alum.mit.edu are
compilerh-fix-barrier_data-on-clang.patch
The driver did not update its view of the available device buffer space
until write() was called in task context. This meant that write_room()
would return 0 even after the device had sent a write-unthrottle
notification, something which could lead to blocked writers not being
woken up (e.g. when using OPOST).
Note that we must also request an unthrottle notification in case a
write() request fills the device buffer exactly.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable <stable(a)vger.kernel.org>
Signed-off-by: Johan Hovold <johan(a)kernel.org>
---
drivers/usb/serial/keyspan_pda.c | 29 ++++++++++++++++++++---------
1 file changed, 20 insertions(+), 9 deletions(-)
diff --git a/drivers/usb/serial/keyspan_pda.c b/drivers/usb/serial/keyspan_pda.c
index 781b6723379f..39ed3ad32365 100644
--- a/drivers/usb/serial/keyspan_pda.c
+++ b/drivers/usb/serial/keyspan_pda.c
@@ -40,6 +40,8 @@
#define DRIVER_AUTHOR "Brian Warner <warner(a)lothar.com>"
#define DRIVER_DESC "USB Keyspan PDA Converter driver"
+#define KEYSPAN_TX_THRESHOLD 16
+
struct keyspan_pda_private {
int tx_room;
int tx_throttled;
@@ -110,7 +112,7 @@ static void keyspan_pda_request_unthrottle(struct work_struct *work)
7, /* request_unthrottle */
USB_TYPE_VENDOR | USB_RECIP_INTERFACE
| USB_DIR_OUT,
- 16, /* value: threshold */
+ KEYSPAN_TX_THRESHOLD,
0, /* index */
NULL,
0,
@@ -129,6 +131,8 @@ static void keyspan_pda_rx_interrupt(struct urb *urb)
int retval;
int status = urb->status;
struct keyspan_pda_private *priv;
+ unsigned long flags;
+
priv = usb_get_serial_port_data(port);
switch (status) {
@@ -171,7 +175,10 @@ static void keyspan_pda_rx_interrupt(struct urb *urb)
case 1: /* modemline change */
break;
case 2: /* tx unthrottle interrupt */
+ spin_lock_irqsave(&port->lock, flags);
priv->tx_throttled = 0;
+ priv->tx_room = max(priv->tx_room, KEYSPAN_TX_THRESHOLD);
+ spin_unlock_irqrestore(&port->lock, flags);
/* queue up a wakeup at scheduler time */
usb_serial_port_softint(port);
break;
@@ -505,7 +512,8 @@ static int keyspan_pda_write(struct tty_struct *tty,
goto exit;
}
}
- if (count > priv->tx_room) {
+
+ if (count >= priv->tx_room) {
/* we're about to completely fill the Tx buffer, so
we'll be throttled afterwards. */
count = priv->tx_room;
@@ -560,14 +568,17 @@ static void keyspan_pda_write_bulk_callback(struct urb *urb)
static int keyspan_pda_write_room(struct tty_struct *tty)
{
struct usb_serial_port *port = tty->driver_data;
- struct keyspan_pda_private *priv;
- priv = usb_get_serial_port_data(port);
- /* used by n_tty.c for processing of tabs and such. Giving it our
- conservative guess is probably good enough, but needs testing by
- running a console through the device. */
- return priv->tx_room;
-}
+ struct keyspan_pda_private *priv = usb_get_serial_port_data(port);
+ unsigned long flags;
+ int room = 0;
+
+ spin_lock_irqsave(&port->lock, flags);
+ if (test_bit(0, &port->write_urbs_free) && !priv->tx_throttled)
+ room = priv->tx_room;
+ spin_unlock_irqrestore(&port->lock, flags);
+ return room;
+}
static int keyspan_pda_chars_in_buffer(struct tty_struct *tty)
{
--
2.26.2
The driver's transmit-unthrottle work was never flushed on disconnect,
something which could lead to the driver port data being freed while the
unthrottle work is still scheduled.
Fix this by cancelling the unthrottle work when shutting down the port.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable(a)vger.kernel.org
Signed-off-by: Johan Hovold <johan(a)kernel.org>
---
drivers/usb/serial/keyspan_pda.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/usb/serial/keyspan_pda.c b/drivers/usb/serial/keyspan_pda.c
index d91180ab5f3b..781b6723379f 100644
--- a/drivers/usb/serial/keyspan_pda.c
+++ b/drivers/usb/serial/keyspan_pda.c
@@ -647,8 +647,12 @@ static int keyspan_pda_open(struct tty_struct *tty,
}
static void keyspan_pda_close(struct usb_serial_port *port)
{
+ struct keyspan_pda_private *priv = usb_get_serial_port_data(port);
+
usb_kill_urb(port->write_urb);
usb_kill_urb(port->interrupt_in_urb);
+
+ cancel_work_sync(&priv->unthrottle_work);
}
--
2.26.2
Make sure to clear the write-busy flag also when no new data was
submitted due to lack of device buffer space, so that writing is
resumed once space becomes available again.
Fixes: 507ca9bc0476 ("[PATCH] USB: add ability for usb-serial drivers to determine if their write urb is currently being used.")
Cc: stable <stable(a)vger.kernel.org> # 2.6.13
Signed-off-by: Johan Hovold <johan(a)kernel.org>
---
drivers/usb/serial/keyspan_pda.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/usb/serial/keyspan_pda.c b/drivers/usb/serial/keyspan_pda.c
index 17b60e5a9f1f..d6ebde779e85 100644
--- a/drivers/usb/serial/keyspan_pda.c
+++ b/drivers/usb/serial/keyspan_pda.c
@@ -548,7 +548,7 @@ static int keyspan_pda_write(struct tty_struct *tty,
rc = count;
exit:
- if (rc < 0)
+ if (rc <= 0)
set_bit(0, &port->write_urbs_free);
return rc;
}
--
2.26.2
The write() callback can be called in interrupt context (e.g. when used
as a console) so interrupts must be disabled while holding the port lock
to prevent a possible deadlock.
Fixes: e81ee637e4ae ("usb-serial: possible irq lock inversion (PPP vs. usb/serial)")
Fixes: 507ca9bc0476 ("[PATCH] USB: add ability for usb-serial drivers to determine if their write urb is currently being used.")
Cc: stable <stable(a)vger.kernel.org> # 2.6.19
Signed-off-by: Johan Hovold <johan(a)kernel.org>
---
drivers/usb/serial/keyspan_pda.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/usb/serial/keyspan_pda.c b/drivers/usb/serial/keyspan_pda.c
index 2d5ad579475a..17b60e5a9f1f 100644
--- a/drivers/usb/serial/keyspan_pda.c
+++ b/drivers/usb/serial/keyspan_pda.c
@@ -443,6 +443,7 @@ static int keyspan_pda_write(struct tty_struct *tty,
int request_unthrottle = 0;
int rc = 0;
struct keyspan_pda_private *priv;
+ unsigned long flags;
priv = usb_get_serial_port_data(port);
/* guess how much room is left in the device's ring buffer, and if we
@@ -462,13 +463,13 @@ static int keyspan_pda_write(struct tty_struct *tty,
the TX urb is in-flight (wait until it completes)
the device is full (wait until it says there is room)
*/
- spin_lock_bh(&port->lock);
+ spin_lock_irqsave(&port->lock, flags);
if (!test_bit(0, &port->write_urbs_free) || priv->tx_throttled) {
- spin_unlock_bh(&port->lock);
+ spin_unlock_irqrestore(&port->lock, flags);
return 0;
}
clear_bit(0, &port->write_urbs_free);
- spin_unlock_bh(&port->lock);
+ spin_unlock_irqrestore(&port->lock, flags);
/* At this point the URB is in our control, nobody else can submit it
again (the only sudden transition was the one from EINPROGRESS to
--
2.26.2