Hi Pavel,
On Wed, Mar 7, 2018 at 1:43 PM, Pavel Machek <pavel(a)ucw.cz> wrote:
> On Wed 2018-03-07 09:01:16, Richard Weinberger wrote:
>> Pavel,
>>
>> Am Mittwoch, 7. März 2018, 00:18:05 CET schrieb Pavel Machek:
>> > On Sat 2018-03-03 11:45:54, Richard Weinberger wrote:
>> > > While UBI and UBIFS seem to work at first sight with MLC NAND, you will
>> > > most likely lose all your data upon a power-cut or due to read/write
>> > > disturb.
>> > > In order to protect users from bad surprises, refuse to attach to MLC
>> > > NAND.
>> > >
>> > > Cc: stable(a)vger.kernel.org
>> >
>> > That sounds like a _really_ bad idea for stable. All it does is
>> > remove support for hardware that somehow works.
>>
>> MLC is not supported and does not work. Full stop.
>> If someone manages to somehow get it to work, either with hardware or
>> software hacks, they are on their own.
>> Having it in stable is the only chance we have to get it into vendor
>> kernels.
>
> Can you show how it meets the stable kernel criteria? They are
> documented in tree. This should not be in stable.
>
> And I'd like to see the changelog improved. The real reason MLC is not
> supported is the upper/lower page pairing on MLC. And the real fix is to
> make UBI work with the bigger pages.
>
To clarify one thing: the reason for this is that MLC has never actually
been supported, nor has it ever worked properly. The fact that it kind of
worked was incidental, and the lack of clarity around that has caused
major problems for people. This patch only makes the situation explicit
and keeps people from mistakenly trying to use UBIFS on MLC flash and
risking their data and products. To me, that's what's important.
This is an important patch, even if all it does is keep people from
losing data. It also changes the conversation from "I have a
corrupted UBIFS device, BTW it's on MLC..." to "What can we do to get
UBIFS to work on MLC?"
I don't know how the stable criteria apply to this patch. But what I do
know is that if it doesn't go back into the various stable trees,
manufacturers will continue to try to use UBIFS on MLC in ignorance for
the next several years, until the current stable kernels reach EOL,
despite there being a known patch that would make it immediately obvious
that they shouldn't.
Thanks,
- Steve
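For reference, the guard being discussed boils down to rejecting MLC parts
at UBI attach time. The exact patch is not quoted in this thread; the
following is only a hedged sketch, assuming the MTD core reports the cell
type via mtd->type with MTD_MLCNANDFLASH marking MLC parts, and the
function name example_refuse_mlc() is made up for illustration:

#include <linux/mtd/mtd.h>
#include <linux/errno.h>

/* Sketch only: in practice such a check would sit early in UBI's attach path. */
static int example_refuse_mlc(struct mtd_info *mtd)
{
	if (mtd->type == MTD_MLCNANDFLASH) {
		pr_err("ubi: refusing to attach mtd%d: MLC NAND is not supported\n",
		       mtd->index);
		return -EINVAL;
	}
	return 0;
}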
On Wed, Mar 07, 2018 at 02:02:13PM -0800, Paul Lawrence wrote:
> Great! We need to make sure this gets backported to 4.4 and 4.9, and to
> 3.18 with the original dependency, please.
That will happen when it lands in Linus's tree, which should be later
this week if all goes well.
thanks,
greg k-h
This is a note to let you know that I've just added the patch titled
nvme-rdma: don't suppress send completions
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
nvme-rdma-don-t-suppress-send-completions.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From b4b591c87f2b0f4ebaf3a68d4f13873b241aa584 Mon Sep 17 00:00:00 2001
From: Sagi Grimberg <sagi(a)grimberg.me>
Date: Thu, 23 Nov 2017 17:35:21 +0200
Subject: nvme-rdma: don't suppress send completions
From: Sagi Grimberg <sagi(a)grimberg.me>
commit b4b591c87f2b0f4ebaf3a68d4f13873b241aa584 upstream.
The entire completion suppression mechanism is currently broken because the
HCA might retry a send operation (due to a dropped ack) after the nvme
transaction has completed.
In order to handle this, we signal all send completions and introduce a
separate done handler for async events as they will be handled differently
(as they don't include in-capsule data by definition).
Signed-off-by: Sagi Grimberg <sagi(a)grimberg.me>
Reviewed-by: Max Gurtovoy <maxg(a)mellanox.com>
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/nvme/host/rdma.c | 54 ++++++++++++-----------------------------------
1 file changed, 14 insertions(+), 40 deletions(-)
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -88,7 +88,6 @@ enum nvme_rdma_queue_flags {
struct nvme_rdma_queue {
struct nvme_rdma_qe *rsp_ring;
- atomic_t sig_count;
int queue_size;
size_t cmnd_capsule_len;
struct nvme_rdma_ctrl *ctrl;
@@ -521,7 +520,6 @@ static int nvme_rdma_alloc_queue(struct
queue->cmnd_capsule_len = sizeof(struct nvme_command);
queue->queue_size = queue_size;
- atomic_set(&queue->sig_count, 0);
queue->cm_id = rdma_create_id(&init_net, nvme_rdma_cm_handler, queue,
RDMA_PS_TCP, IB_QPT_RC);
@@ -1232,21 +1230,9 @@ static void nvme_rdma_send_done(struct i
nvme_end_request(rq, req->status, req->result);
}
-/*
- * We want to signal completion at least every queue depth/2. This returns the
- * largest power of two that is not above half of (queue size + 1) to optimize
- * (avoid divisions).
- */
-static inline bool nvme_rdma_queue_sig_limit(struct nvme_rdma_queue *queue)
-{
- int limit = 1 << ilog2((queue->queue_size + 1) / 2);
-
- return (atomic_inc_return(&queue->sig_count) & (limit - 1)) == 0;
-}
-
static int nvme_rdma_post_send(struct nvme_rdma_queue *queue,
struct nvme_rdma_qe *qe, struct ib_sge *sge, u32 num_sge,
- struct ib_send_wr *first, bool flush)
+ struct ib_send_wr *first)
{
struct ib_send_wr wr, *bad_wr;
int ret;
@@ -1255,31 +1241,12 @@ static int nvme_rdma_post_send(struct nv
sge->length = sizeof(struct nvme_command),
sge->lkey = queue->device->pd->local_dma_lkey;
- qe->cqe.done = nvme_rdma_send_done;
-
wr.next = NULL;
wr.wr_cqe = &qe->cqe;
wr.sg_list = sge;
wr.num_sge = num_sge;
wr.opcode = IB_WR_SEND;
- wr.send_flags = 0;
-
- /*
- * Unsignalled send completions are another giant desaster in the
- * IB Verbs spec: If we don't regularly post signalled sends
- * the send queue will fill up and only a QP reset will rescue us.
- * Would have been way to obvious to handle this in hardware or
- * at least the RDMA stack..
- *
- * Always signal the flushes. The magic request used for the flush
- * sequencer is not allocated in our driver's tagset and it's
- * triggered to be freed by blk_cleanup_queue(). So we need to
- * always mark it as signaled to ensure that the "wr_cqe", which is
- * embedded in request's payload, is not freed when __ib_process_cq()
- * calls wr_cqe->done().
- */
- if (nvme_rdma_queue_sig_limit(queue) || flush)
- wr.send_flags |= IB_SEND_SIGNALED;
+ wr.send_flags = IB_SEND_SIGNALED;
if (first)
first->next = &wr;
@@ -1329,6 +1296,12 @@ static struct blk_mq_tags *nvme_rdma_tag
return queue->ctrl->tag_set.tags[queue_idx - 1];
}
+static void nvme_rdma_async_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+ if (unlikely(wc->status != IB_WC_SUCCESS))
+ nvme_rdma_wr_error(cq, wc, "ASYNC");
+}
+
static void nvme_rdma_submit_async_event(struct nvme_ctrl *arg, int aer_idx)
{
struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(arg);
@@ -1350,10 +1323,12 @@ static void nvme_rdma_submit_async_event
cmd->common.flags |= NVME_CMD_SGL_METABUF;
nvme_rdma_set_sg_null(cmd);
+ sqe->cqe.done = nvme_rdma_async_done;
+
ib_dma_sync_single_for_device(dev, sqe->dma, sizeof(*cmd),
DMA_TO_DEVICE);
- ret = nvme_rdma_post_send(queue, sqe, &sge, 1, NULL, false);
+ ret = nvme_rdma_post_send(queue, sqe, &sge, 1, NULL);
WARN_ON_ONCE(ret);
}
@@ -1639,7 +1614,6 @@ static blk_status_t nvme_rdma_queue_rq(s
struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
struct nvme_rdma_qe *sqe = &req->sqe;
struct nvme_command *c = sqe->data;
- bool flush = false;
struct ib_device *dev;
blk_status_t ret;
int err;
@@ -1668,13 +1642,13 @@ static blk_status_t nvme_rdma_queue_rq(s
goto err;
}
+ sqe->cqe.done = nvme_rdma_send_done;
+
ib_dma_sync_single_for_device(dev, sqe->dma,
sizeof(struct nvme_command), DMA_TO_DEVICE);
- if (req_op(rq) == REQ_OP_FLUSH)
- flush = true;
err = nvme_rdma_post_send(queue, sqe, req->sge, req->num_sge,
- req->mr->need_inval ? &req->reg_wr.wr : NULL, flush);
+ req->mr->need_inval ? &req->reg_wr.wr : NULL);
if (unlikely(err)) {
nvme_rdma_unmap_data(queue, rq);
goto err;
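To make the shape of the change easier to see outside the diff context,
here is a minimal sketch of the new posting model: every send is signalled
and the completion handler is chosen per work request by the caller.
Function and variable names are illustrative, not the driver's own:

#include <rdma/ib_verbs.h>

static int example_post_signalled_send(struct ib_qp *qp, struct ib_cqe *cqe,
				       struct ib_sge *sge, u32 num_sge)
{
	struct ib_send_wr wr, *bad_wr;

	wr.next       = NULL;
	wr.wr_cqe     = cqe;	/* caller sets cqe->done (send vs. async handler) */
	wr.sg_list    = sge;
	wr.num_sge    = num_sge;
	wr.opcode     = IB_WR_SEND;
	/* Always signalled: no sig_count bookkeeping, no special flush case. */
	wr.send_flags = IB_SEND_SIGNALED;

	return ib_post_send(qp, &wr, &bad_wr);
}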
Patches currently in stable-queue which might be from sagi(a)grimberg.me are
queue-4.14/nvme-rdma-don-t-suppress-send-completions.patch
This is a note to let you know that I've just added the patch titled
netlink: put module reference if dump start fails
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netlink-put-module-reference-if-dump-start-fails.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From b87b6194be631c94785fe93398651e804ed43e28 Mon Sep 17 00:00:00 2001
From: "Jason A. Donenfeld" <Jason(a)zx2c4.com>
Date: Wed, 21 Feb 2018 04:41:59 +0100
Subject: netlink: put module reference if dump start fails
From: Jason A. Donenfeld <Jason(a)zx2c4.com>
commit b87b6194be631c94785fe93398651e804ed43e28 upstream.
Before, if cb->start() failed, the module reference would never be put,
because cb->cb_running is intentionally false at this point. Users are
generally annoyed by this because they can no longer unload modules that
leak references. Also, it may be possible to tediously wrap a reference
counter back to zero, especially since module.c still uses atomic_inc
instead of refcount_inc.
This patch expands the error path to simply call module_put if
cb->start() fails.
Fixes: 41c87425a1ac ("netlink: do not set cb_running if dump's start() errs")
Signed-off-by: Jason A. Donenfeld <Jason(a)zx2c4.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/netlink/af_netlink.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -2258,7 +2258,7 @@ int __netlink_dump_start(struct sock *ss
if (cb->start) {
ret = cb->start(cb);
if (ret)
- goto error_unlock;
+ goto error_put;
}
nlk->cb_running = true;
@@ -2278,6 +2278,8 @@ int __netlink_dump_start(struct sock *ss
*/
return -EINTR;
+error_put:
+ module_put(control->module);
error_unlock:
sock_put(sk);
mutex_unlock(nlk->cb_mutex);
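The rule the fix restores is simple: every module reference taken at the
top of the dump path must be released on every error exit taken after it.
A hedged, self-contained sketch of that pattern follows; the names are
illustrative and this is not the netlink code itself:

#include <linux/module.h>
#include <linux/errno.h>

static int example_dump_start(struct module *owner, int (*start)(void *),
			      void *ctx)
{
	int ret;

	if (!try_module_get(owner))
		return -EPROTONOSUPPORT;

	if (start) {
		ret = start(ctx);
		if (ret)
			goto error_put;	/* previously this exit leaked the reference */
	}
	/* success: the reference is kept until the dump finishes */
	return 0;

error_put:
	module_put(owner);
	return ret;
}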
Patches currently in stable-queue which might be from Jason(a)zx2c4.com are
queue-4.9/netlink-put-module-reference-if-dump-start-fails.patch
This is a note to let you know that I've just added the patch titled
netlink: put module reference if dump start fails
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netlink-put-module-reference-if-dump-start-fails.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From foo@baz Tue Mar 6 19:02:12 PST 2018
From: "Jason A. Donenfeld" <Jason(a)zx2c4.com>
Date: Wed, 21 Feb 2018 04:41:59 +0100
Subject: netlink: put module reference if dump start fails
From: "Jason A. Donenfeld" <Jason(a)zx2c4.com>
[ Upstream commit b87b6194be631c94785fe93398651e804ed43e28 ]
Before, if cb->start() failed, the module reference would never be put,
because cb->cb_running is intentionally false at this point. Users are
generally annoyed by this because they can no longer unload modules that
leak references. Also, it may be possible to tediously wrap a reference
counter back to zero, especially since module.c still uses atomic_inc
instead of refcount_inc.
This patch expands the error path to simply call module_put if
cb->start() fails.
Fixes: 41c87425a1ac ("netlink: do not set cb_running if dump's start() errs")
Signed-off-by: Jason A. Donenfeld <Jason(a)zx2c4.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/netlink/af_netlink.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -2276,7 +2276,7 @@ int __netlink_dump_start(struct sock *ss
if (cb->start) {
ret = cb->start(cb);
if (ret)
- goto error_unlock;
+ goto error_put;
}
nlk->cb_running = true;
@@ -2296,6 +2296,8 @@ int __netlink_dump_start(struct sock *ss
*/
return -EINTR;
+error_put:
+ module_put(control->module);
error_unlock:
sock_put(sk);
mutex_unlock(nlk->cb_mutex);
Patches currently in stable-queue which might be from Jason(a)zx2c4.com are
queue-4.14/netlink-put-module-reference-if-dump-start-fails.patch
This is a note to let you know that I've just added the patch titled
x86/speculation: Use Indirect Branch Prediction Barrier in context switch
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-speculation-use-indirect-branch-prediction-barrier-in-context-switch.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 18bf3c3ea8ece8f03b6fc58508f2dfd23c7711c7 Mon Sep 17 00:00:00 2001
From: Tim Chen <tim.c.chen(a)linux.intel.com>
Date: Mon, 29 Jan 2018 22:04:47 +0000
Subject: x86/speculation: Use Indirect Branch Prediction Barrier in context switch
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Tim Chen <tim.c.chen(a)linux.intel.com>
commit 18bf3c3ea8ece8f03b6fc58508f2dfd23c7711c7 upstream.
Flush indirect branches when switching into a process that marked itself
non-dumpable. This protects high-value processes like gpg better,
without too high a performance overhead.
If done naïvely, we could switch to a kernel idle thread and then back
to the original process, such as:
process A -> idle -> process A
In such a scenario, we do not have to do IBPB here even though the process
is non-dumpable, as we are switching back to the same process after a
hiatus.
To avoid the redundant IBPB, which is expensive, we track the last mm
user context ID. The cost is to have an extra u64 mm context id to track
the last mm we were using before switching to the init_mm used by idle.
Avoiding the extra IBPB is probably worth the extra memory for this
common scenario.
For those cases where tlb_defer_switch_to_init_mm() returns true (non-PCID),
lazy TLB will defer the switch to init_mm, so we will not be changing
the mm for the process A -> idle -> process A switch. So IBPB will be
skipped for this case.
Thanks to the reviewers and Andy Lutomirski for the suggestion of
using ctx_id which got rid of the problem of mm pointer recycling.
Signed-off-by: Tim Chen <tim.c.chen(a)linux.intel.com>
Signed-off-by: David Woodhouse <dwmw(a)amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: ak(a)linux.intel.com
Cc: karahmed(a)amazon.de
Cc: arjan(a)linux.intel.com
Cc: torvalds(a)linux-foundation.org
Cc: linux(a)dominikbrodowski.net
Cc: peterz(a)infradead.org
Cc: bp(a)alien8.de
Cc: luto(a)kernel.org
Cc: pbonzini(a)redhat.com
Link: https://lkml.kernel.org/r/1517263487-3708-1-git-send-email-dwmw@amazon.co.uk
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/tlbflush.h | 2 ++
arch/x86/mm/tlb.c | 31 +++++++++++++++++++++++++++++++
2 files changed, 33 insertions(+)
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -68,6 +68,8 @@ static inline void invpcid_flush_all_non
struct tlb_state {
struct mm_struct *active_mm;
int state;
+ /* last user mm's ctx id */
+ u64 last_ctx_id;
/*
* Access to this CR4 shadow and to H/W CR4 is protected by
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -10,6 +10,7 @@
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>
+#include <asm/nospec-branch.h>
#include <asm/cache.h>
#include <asm/apic.h>
#include <asm/uv/uv.h>
@@ -106,6 +107,28 @@ void switch_mm_irqs_off(struct mm_struct
unsigned cpu = smp_processor_id();
if (likely(prev != next)) {
+ u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id);
+
+ /*
+ * Avoid user/user BTB poisoning by flushing the branch
+ * predictor when switching between processes. This stops
+ * one process from doing Spectre-v2 attacks on another.
+ *
+ * As an optimization, flush indirect branches only when
+ * switching into processes that disable dumping. This
+ * protects high value processes like gpg, without having
+ * too high performance overhead. IBPB is *expensive*!
+ *
+ * This will not flush branches when switching into kernel
+ * threads. It will also not flush if we switch to idle
+ * thread and back to the same process. It will flush if we
+ * switch to a different non-dumpable process.
+ */
+ if (tsk && tsk->mm &&
+ tsk->mm->context.ctx_id != last_ctx_id &&
+ get_dumpable(tsk->mm) != SUID_DUMP_USER)
+ indirect_branch_prediction_barrier();
+
if (IS_ENABLED(CONFIG_VMAP_STACK)) {
/*
* If our current stack is in vmalloc space and isn't
@@ -120,6 +143,14 @@ void switch_mm_irqs_off(struct mm_struct
set_pgd(pgd, init_mm.pgd[stack_pgd_index]);
}
+ /*
+ * Record last user mm's context id, so we can avoid
+ * flushing branch buffer with IBPB if we switch back
+ * to the same user.
+ */
+ if (next != &init_mm)
+ this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);
+
this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
this_cpu_write(cpu_tlbstate.active_mm, next);
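The heart of the change is the predicate deciding when the barrier can be
skipped. A compact sketch of that test is below; the helper name is
illustrative, as the real code open-codes the same condition inside
switch_mm_irqs_off():

#include <linux/sched.h>

/*
 * IBPB is only issued when switching to a *different* user mm whose
 * owner has disabled dumping (e.g. gpg). Kernel threads and the
 * process A -> idle -> process A case are skipped.
 */
static inline bool example_ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
{
	return tsk && tsk->mm &&
	       tsk->mm->context.ctx_id != last_ctx_id &&
	       get_dumpable(tsk->mm) != SUID_DUMP_USER;
}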
Patches currently in stable-queue which might be from tim.c.chen(a)linux.intel.com are
queue-4.9/x86-speculation-use-indirect-branch-prediction-barrier-in-context-switch.patch
queue-4.9/x86-mm-give-each-mm-tlb-flush-generation-a-unique-id.patch
This is a note to let you know that I've just added the patch titled
md: only allow remove_and_add_spares when no sync_thread running.
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
md-only-allow-remove_and_add_spares-when-no-sync_thread-running.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 39772f0a7be3b3dc26c74ea13fe7847fd1522c8b Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb(a)suse.com>
Date: Sat, 3 Feb 2018 09:19:30 +1100
Subject: md: only allow remove_and_add_spares when no sync_thread running.
From: NeilBrown <neilb(a)suse.com>
commit 39772f0a7be3b3dc26c74ea13fe7847fd1522c8b upstream.
The locking protocols in md assume that a device will
never be removed from an array during resync/recovery/reshape.
When that isn't happening, rcu or reconfig_mutex is needed
to protect an rdev pointer while taking a refcount. When
it is happening, that protection isn't needed.
Unfortunately there are cases where remove_and_add_spares() is
called when recovery might be happening: state_store(),
slot_store() and hot_remove_disk().
In each case, this is just an optimization to try to expedite
removal from the personality so the device can be removed from
the array. If resync etc. is happening, we just have to wait
for md_check_recovery() to find a suitable time to call
remove_and_add_spares().
This optimization is not essential, so it doesn't
matter if it fails.
So change remove_and_add_spares() to abort early if
resync/recovery/reshape is happening, unless it is called
from md_check_recovery() as part of a newly started recovery.
The parameter "this" is only NULL when called from
md_check_recovery() so when it is NULL, there is no need to abort.
As this can result in a NULL dereference, the fix is suitable
for -stable.
cc: yuyufen <yuyufen(a)huawei.com>
Cc: Tomasz Majchrzak <tomasz.majchrzak(a)intel.com>
Fixes: 8430e7e0af9a ("md: disconnect device from personality before trying to remove it.")
Cc: stable(a)vger.kernel.org (v4.8+)
Signed-off-by: NeilBrown <neilb(a)suse.com>
Signed-off-by: Shaohua Li <sh.li(a)alibaba-inc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/md.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8224,6 +8224,10 @@ static int remove_and_add_spares(struct
int removed = 0;
bool remove_some = false;
+ if (this && test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
+ /* Mustn't remove devices when resync thread is running */
+ return 0;
+
rdev_for_each(rdev, mddev) {
if ((this == NULL || rdev == this) &&
rdev->raid_disk >= 0 &&
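The point about the "this" parameter is easiest to see from the two
calling patterns the commit message describes. A hedged, illustrative
sketch (not the exact call sites in md.c):

/* From md_check_recovery(): this == NULL, so the new guard is skipped
 * and spares are handled as part of the newly started recovery. */
spares = remove_and_add_spares(mddev, NULL);

/* From state_store()/slot_store()/hot_remove_disk(): a specific rdev is
 * passed; while a sync thread is running the guard now returns 0 early,
 * and md_check_recovery() finishes the removal at a safe time later. */
removed = remove_and_add_spares(mddev, rdev);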
Patches currently in stable-queue which might be from neilb(a)suse.com are
queue-4.9/md-only-allow-remove_and_add_spares-when-no-sync_thread-running.patch
This is a note to let you know that I've just added the patch titled
dm io: fix duplicate bio completion due to missing ref count
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-io-fix-duplicate-bio-completion-due-to-missing-ref-count.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From feb7695fe9fb83084aa29de0094774f4c9d4c9fc Mon Sep 17 00:00:00 2001
From: Mike Snitzer <snitzer(a)redhat.com>
Date: Tue, 20 Jun 2017 19:14:30 -0400
Subject: dm io: fix duplicate bio completion due to missing ref count
From: Mike Snitzer <snitzer(a)redhat.com>
commit feb7695fe9fb83084aa29de0094774f4c9d4c9fc upstream.
If only a subset of the devices associated with multiple regions support
a given special operation (e.g. DISCARD), then io->count must be
incremented before calling the dec_count() that is used to set the error
for the region. Otherwise, when dec_count() is called it can cause the
dm-io caller's bio to be completed multiple times, as was reported
against the dm-mirror target that had mirror legs with a mix of discard
capabilities.
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=196077
Reported-by: Zhang Yi <yizhan(a)redhat.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm-io.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/md/dm-io.c
+++ b/drivers/md/dm-io.c
@@ -302,6 +302,7 @@ static void do_region(int op, int op_fla
special_cmd_max_sectors = q->limits.max_write_same_sectors;
if ((op == REQ_OP_DISCARD || op == REQ_OP_WRITE_SAME) &&
special_cmd_max_sectors == 0) {
+ atomic_inc(&io->count);
dec_count(io, region, -EOPNOTSUPP);
return;
}
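The invariant behind the one-line fix: io->count holds one reference per
outstanding region, and dec_count() drops exactly one, so a region that
bails out early with an error must still take its reference first. A
self-contained sketch of the pattern, using illustrative stand-in types
and names rather than the dm-io code itself:

#include <linux/atomic.h>
#include <linux/types.h>
#include <linux/errno.h>

struct example_io {
	atomic_t count;		/* one reference per outstanding region */
};

/* Record 'error' for 'region' and drop one reference; the final drop is
 * what completes the caller's bio, exactly once. */
static void example_dec_count(struct example_io *io, unsigned int region,
			      int error)
{
	atomic_dec(&io->count);
}

static void example_dispatch_region(struct example_io *io, unsigned int region,
				    bool op_supported)
{
	if (!op_supported) {
		atomic_inc(&io->count);	/* balance the drop below */
		example_dec_count(io, region, -EOPNOTSUPP);
		return;
	}
	/* normal path: submit the bio; its completion drops the reference */
}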
Patches currently in stable-queue which might be from snitzer(a)redhat.com are
queue-4.9/dm-io-fix-duplicate-bio-completion-due-to-missing-ref-count.patch