Hi Greg, Sasha,
The patches below are a collection of bugfixes that we found while using the stable 5.10 kernel. Please consider applying them; I have also Cc'ed the author and maintainer of each patch in case there are any objections.
Patch 2/7 fixes the failure of the LTP test case 'move_pages12', and patch 3/7 is not a bugfix itself but a preparation for the later bugfixes; the other patches are straightforward bugfixes.
Gulam Mohamed (1):
  scsi: iscsi: Fix race condition between login and sync thread

Jens Axboe (1):
  io_uring: convert io_buffer_idr to XArray

Matthew Wilcox (Oracle) (1):
  io_uring: Convert personality_idr to XArray

Mauricio Faria de Oliveira (1):
  loop: fix I/O error on fsync() in detached loop devices

Mike Christie (1):
  scsi: iscsi: Fix iSCSI cls conn state

Oscar Salvador (1):
  mm,hwpoison: return -EBUSY when migration fails

Yejune Deng (1):
  io_uring: simplify io_remove_personalities()
 drivers/block/loop.c                |   3 +
 drivers/scsi/libiscsi.c             |  26 +-------
 drivers/scsi/scsi_transport_iscsi.c |  28 ++++++++-
 fs/io_uring.c                       | 116 +++++++++++++++---------------------
 include/scsi/scsi_transport_iscsi.h |   1 +
 mm/memory-failure.c                 |   6 +-
 6 files changed, 85 insertions(+), 95 deletions(-)
From: Mauricio Faria de Oliveira <mfo@canonical.com>
commit 4ceddce55eb35d15b0f87f5dcf6f0058fd15d3a4 upstream.
There's an I/O error on fsync() in a detached loop device if it has been previously attached.
The issue is that the write cache is enabled in the attach path in loop_configure(), but it is not disabled in the detach path; thus it remains enabled in the block device regardless of whether the loop device is attached or not.
Now fsync() can get an I/O request that will just be failed later in loop_queue_rq(), as the device's state is not 'Lo_bound'.
So, disable the write cache in the detach path.
Do so based on the queue flag, not the loop device read-only flag (which is used when enabling it), as the queue flag can be changed via sysfs even on read-only loop devices (e.g., losetup -r).
Test-case:
# DEV=/dev/loop7
# IMG=/tmp/image
# truncate --size 1M $IMG
# losetup $DEV $IMG
# losetup -d $DEV
Before:
# strace -e fsync parted -s $DEV print 2>&1 | grep fsync
fsync(3) = -1 EIO (Input/output error)
Warning: Error fsyncing/closing /dev/loop7: Input/output error
[ 982.529929] blk_update_request: I/O error, dev loop7, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
After:
# strace -e fsync parted -s $DEV print 2>&1 | grep fsync
fsync(3) = 0
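For convenience, the same check can also be done with a tiny standalone C program. This is only an illustrative sketch derived from the test-case above (the device path is an assumption), not part of the patch:

/*
 * Sketch: fsync() a detached loop device that was previously attached.
 * Before the fix this fails with EIO; after it, fsync() succeeds.
 * The device path is assumed from the test-case above.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *dev = "/dev/loop7";   /* assumed, see test-case */
        int fd = open(dev, O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (fsync(fd) < 0)
                printf("fsync: %s\n", strerror(errno));  /* EIO before the fix */
        else
                printf("fsync: ok\n");
        close(fd);
        return 0;
}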
Co-developed-by: Eric Desrochers <eric.desrochers@canonical.com>
Signed-off-by: Eric Desrochers <eric.desrochers@canonical.com>
Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Tested-by: Gabriel Krisman Bertazi <krisman@collabora.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
---
 drivers/block/loop.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 06d44ae..f0fa0c8 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1224,6 +1224,9 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
                 goto out_unlock;
         }
 
+        if (test_bit(QUEUE_FLAG_WC, &lo->lo_queue->queue_flags))
+                blk_queue_write_cache(lo->lo_queue, false, false);
+
         /* freeze request queue during the transition */
         blk_mq_freeze_queue(lo->lo_queue);
 
From: Oscar Salvador <osalvador@suse.de>
commit 3f4b815a439adfb8f238335612c4b28bc10084d8 upstream.
Currently, we return -EIO when we fail to migrate the page.
Migration failures are rather transient, as they can happen for several reasons, e.g. a raised page refcount or mapping->migrate_page() failing. All of these mean that the page could not be migrated at that time, which has nothing to do with an EIO error.
Let us return -EBUSY instead, as we do when we fail to isolate the page.
While at it, let us remove the "ret" from the isolation-failure print, as its value does not change.
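To illustrate why the error code matters to callers, here is a userspace sketch (not part of this patch; the retry policy and the MADV_SOFT_OFFLINE fallback define are assumptions for illustration) that treats -EBUSY as a transient condition worth retrying:

/*
 * Sketch: soft-offline a page via madvise() and retry on the transient
 * -EBUSY, giving up on any other error. Retry count and backoff are
 * arbitrary assumptions.
 */
#include <errno.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_SOFT_OFFLINE
#define MADV_SOFT_OFFLINE 101   /* value from <asm-generic/mman-common.h> */
#endif

static int soft_offline_with_retry(void *addr, size_t len)
{
        for (int attempt = 0; attempt < 5; attempt++) {
                if (madvise(addr, len, MADV_SOFT_OFFLINE) == 0)
                        return 0;
                if (errno != EBUSY)     /* not transient, don't retry */
                        return -1;
                sleep(1);               /* back off before retrying */
        }
        return -1;
}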
Link: https://lkml.kernel.org/r/20201209092818.30417-1-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
---
 mm/memory-failure.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 25fb82320..01445dd 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1856,11 +1856,11 @@ static int __soft_offline_page(struct page *page)
                         pr_info("soft offline: %#lx: %s migration failed %d, type %lx (%pGp)\n",
                                 pfn, msg_page[huge], ret, page->flags, &page->flags);
                         if (ret > 0)
-                                ret = -EIO;
+                                ret = -EBUSY;
                 }
         } else {
-                pr_info("soft offline: %#lx: %s isolation failed: %d, page count %d, type %lx (%pGp)\n",
-                        pfn, msg_page[huge], ret, page_count(page), page->flags, &page->flags);
+                pr_info("soft offline: %#lx: %s isolation failed, page count %d, type %lx (%pGp)\n",
+                        pfn, msg_page[huge], page_count(page), page->flags, &page->flags);
                 ret = -EBUSY;
         }
         return ret;
From: Yejune Deng <yejune.deng@gmail.com>
commit 0bead8cd39b9c9c7c4e902018ccf129107ac50ef upstream.
The function io_remove_personalities() is very similar to io_unregister_personality(), so implement io_remove_personalities() by calling io_unregister_personality().
Signed-off-by: Yejune Deng <yejune.deng@gmail.com>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
---
 fs/io_uring.c | 28 +++++++++++-----------------
 1 file changed, 11 insertions(+), 17 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 0138aa7..0cbf2a0 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -8505,9 +8505,8 @@ static int io_uring_fasync(int fd, struct file *file, int on)
         return fasync_helper(fd, file, on, &ctx->cq_fasync);
 }
 
-static int io_remove_personalities(int id, void *p, void *data)
+static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
 {
-        struct io_ring_ctx *ctx = data;
         struct io_identity *iod;
 
         iod = idr_remove(&ctx->personality_idr, id);
@@ -8515,7 +8514,17 @@ static int io_remove_personalities(int id, void *p, void *data)
                 put_cred(iod->creds);
                 if (refcount_dec_and_test(&iod->count))
                         kfree(iod);
+                return 0;
         }
+
+        return -EINVAL;
+}
+
+static int io_remove_personalities(int id, void *p, void *data)
+{
+        struct io_ring_ctx *ctx = data;
+
+        io_unregister_personality(ctx, id);
         return 0;
 }
 
@@ -9606,21 +9615,6 @@ static int io_register_personality(struct io_ring_ctx *ctx)
         return ret;
 }
 
-static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
-{
-        struct io_identity *iod;
-
-        iod = idr_remove(&ctx->personality_idr, id);
-        if (iod) {
-                put_cred(iod->creds);
-                if (refcount_dec_and_test(&iod->count))
-                        kfree(iod);
-                return 0;
-        }
-
-        return -EINVAL;
-}
-
 static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
                                     unsigned int nr_args)
 {
From: "Matthew Wilcox (Oracle)" willy@infradead.org
commit 61cf93700fe6359552848ed5e3becba6cd760efa upstream.
You can't call idr_remove() from within an idr_for_each() callback, but you can call xa_erase() from an xa_for_each() loop, so switch the personality_idr from the IDR to the XArray. The bug manifests as a use-after-free: idr_for_each() attempts to walk the rest of the node after removing the last entry from it.
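As a general illustration of the pattern (a sketch of the idiom only, with made-up names, not the actual io_uring code): erasing entries while walking an XArray is fine, whereas removing entries from inside an idr_for_each() callback is what triggers the use-after-free described above.

/*
 * Sketch of the safe XArray idiom. 'my_obj' and 'drop_all_entries' are
 * made-up names for illustration, not io_uring symbols.
 */
#include <linux/slab.h>
#include <linux/xarray.h>

struct my_obj {
        int data;
};

static void drop_all_entries(struct xarray *xa)
{
        struct my_obj *obj;
        unsigned long index;

        /*
         * xa_for_each() restarts the lookup from 'index' on every
         * iteration, so erasing the current entry inside the loop is
         * safe. Doing the equivalent idr_remove() from inside an
         * idr_for_each() callback can leave the iterator walking
         * freed nodes.
         */
        xa_for_each(xa, index, obj) {
                xa_erase(xa, index);
                kfree(obj);
        }
}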
Fixes: 071698e13ac6 ("io_uring: allow registering credentials")
Cc: stable@vger.kernel.org # 5.6+
Reported-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
[Pavel: rebased (creds load was moved into io_init_req())]
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/7ccff36e1375f2b0ebf73d957f037b43becc0dde.161521280...
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
---
 fs/io_uring.c | 59 ++++++++++++++++++++++++++++++-----------------------------
 1 file changed, 30 insertions(+), 29 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 0cbf2a0..cd93bf5 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -346,7 +346,8 @@ struct io_ring_ctx {
 
         struct idr              io_buffer_idr;
 
-        struct idr              personality_idr;
+        struct xarray           personalities;
+        u32                     pers_next;
 
         struct {
                 unsigned                cached_cq_tail;
@@ -1212,7 +1213,7 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
         init_completion(&ctx->ref_comp);
         init_completion(&ctx->sq_thread_comp);
         idr_init(&ctx->io_buffer_idr);
-        idr_init(&ctx->personality_idr);
+        xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
         mutex_init(&ctx->uring_lock);
         init_waitqueue_head(&ctx->wait);
         spin_lock_init(&ctx->completion_lock);
@@ -6629,7 +6630,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
         if (id) {
                 struct io_identity *iod;
 
-                iod = idr_find(&ctx->personality_idr, id);
+                iod = xa_load(&ctx->personalities, id);
                 if (unlikely(!iod))
                         return -EINVAL;
                 refcount_inc(&iod->count);
@@ -8445,7 +8446,6 @@ static void io_ring_ctx_free(struct io_ring_ctx *ctx)
         io_sqe_files_unregister(ctx);
         io_eventfd_unregister(ctx);
         io_destroy_buffers(ctx);
-        idr_destroy(&ctx->personality_idr);
 
 #if defined(CONFIG_UNIX)
         if (ctx->ring_sock) {
@@ -8509,7 +8509,7 @@ static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
 {
         struct io_identity *iod;
 
-        iod = idr_remove(&ctx->personality_idr, id);
+        iod = xa_erase(&ctx->personalities, id);
         if (iod) {
                 put_cred(iod->creds);
                 if (refcount_dec_and_test(&iod->count))
@@ -8520,14 +8520,6 @@ static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
         return -EINVAL;
 }
 
-static int io_remove_personalities(int id, void *p, void *data)
-{
-        struct io_ring_ctx *ctx = data;
-
-        io_unregister_personality(ctx, id);
-        return 0;
-}
-
 static void io_ring_exit_work(struct work_struct *work)
 {
         struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
@@ -8554,6 +8546,9 @@ static bool io_cancel_ctx_cb(struct io_wq_work *work, void *data)
 
 static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 {
+        unsigned long index;
+        struct io_identity *iod;
+
         mutex_lock(&ctx->uring_lock);
         percpu_ref_kill(&ctx->refs);
         /* if force is set, the ring is going away. always drop after that */
@@ -8574,7 +8569,8 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 
         /* if we failed setting up the ctx, we might not have any rings */
         io_iopoll_try_reap_events(ctx);
-        idr_for_each(&ctx->personality_idr, io_remove_personalities, ctx);
+        xa_for_each(&ctx->personalities, index, iod)
+                io_unregister_personality(ctx, index);
 
         /*
          * Do this upfront, so we won't have a grace period where the ring
@@ -9137,11 +9133,10 @@ static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
 }
 
 #ifdef CONFIG_PROC_FS
-static int io_uring_show_cred(int id, void *p, void *data)
+static int io_uring_show_cred(struct seq_file *m, unsigned int id,
+                              const struct io_identity *iod)
 {
-        struct io_identity *iod = p;
         const struct cred *cred = iod->creds;
-        struct seq_file *m = data;
         struct user_namespace *uns = seq_user_ns(m);
         struct group_info *gi;
         kernel_cap_t cap;
@@ -9209,9 +9204,13 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
                 seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf,
                                                 (unsigned int) buf->len);
         }
-        if (has_lock && !idr_is_empty(&ctx->personality_idr)) {
+        if (has_lock && !xa_empty(&ctx->personalities)) {
+                unsigned long index;
+                const struct io_identity *iod;
+
                 seq_printf(m, "Personalities:\n");
-                idr_for_each(&ctx->personality_idr, io_uring_show_cred, m);
+                xa_for_each(&ctx->personalities, index, iod)
+                        io_uring_show_cred(m, index, iod);
         }
         seq_printf(m, "PollList:\n");
         spin_lock_irq(&ctx->completion_lock);
@@ -9597,21 +9596,23 @@ static int io_probe(struct io_ring_ctx *ctx, void __user *arg, unsigned nr_args)
 
 static int io_register_personality(struct io_ring_ctx *ctx)
 {
-        struct io_identity *id;
+        struct io_identity *iod;
+        u32 id;
         int ret;
 
-        id = kmalloc(sizeof(*id), GFP_KERNEL);
-        if (unlikely(!id))
+        iod = kmalloc(sizeof(*iod), GFP_KERNEL);
+        if (unlikely(!iod))
                 return -ENOMEM;
 
-        io_init_identity(id);
-        id->creds = get_current_cred();
+        io_init_identity(iod);
+        iod->creds = get_current_cred();
 
-        ret = idr_alloc_cyclic(&ctx->personality_idr, id, 1, USHRT_MAX, GFP_KERNEL);
-        if (ret < 0) {
-                put_cred(id->creds);
-                kfree(id);
-        }
+        ret = xa_alloc_cyclic(&ctx->personalities, &id, (void *)iod,
+                        XA_LIMIT(0, USHRT_MAX), &ctx->pers_next, GFP_KERNEL);
+        if (!ret)
+                return id;
+        put_cred(iod->creds);
+        kfree(iod);
         return ret;
 }
From: Jens Axboe <axboe@kernel.dk>
commit 9e15c3a0ced5a61f320b989072c24983cb1620c1 upstream.
Like we did for the personality idr, convert the io_buffer_idr to use an XArray. This avoids a use-after-free on removal of entries, since the IDR doesn't like entries being removed from inside an iterator, and it nicely reduces the amount of code we need to support this feature.
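One detail worth noting: buffer group IDs are chosen by userspace, so the conversion stores them with xa_insert() at a fixed index rather than letting the XArray allocate an ID. A small sketch of that idiom follows (made-up names, not io_uring code):

/*
 * Sketch: store an entry at a caller-chosen index. xa_insert() returns
 * -EBUSY if the index is already occupied, which maps naturally onto
 * "buffer group already exists". 'my_group' is a made-up type.
 */
#include <linux/slab.h>
#include <linux/xarray.h>

struct my_group {
        int nbufs;
};

static int add_group(struct xarray *xa, unsigned long gid)
{
        struct my_group *grp = kzalloc(sizeof(*grp), GFP_KERNEL);
        int ret;

        if (!grp)
                return -ENOMEM;

        ret = xa_insert(xa, gid, grp, GFP_KERNEL);  /* -EBUSY if gid is taken */
        if (ret)
                kfree(grp);
        return ret;
}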
Fixes: 5a2e745d4d43 ("io_uring: buffer registration infrastructure")
Cc: stable@vger.kernel.org
Cc: Matthew Wilcox <willy@infradead.org>
Cc: yangerkun <yangerkun@huawei.com>
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
---
 fs/io_uring.c | 43 +++++++++++++++----------------------------
 1 file changed, 15 insertions(+), 28 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index cd93bf5..fb63cc8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -344,7 +344,7 @@ struct io_ring_ctx {
         struct socket           *ring_sock;
 #endif
 
-        struct idr              io_buffer_idr;
+        struct xarray           io_buffers;
 
         struct xarray           personalities;
         u32                     pers_next;
@@ -1212,7 +1212,7 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
         INIT_LIST_HEAD(&ctx->cq_overflow_list);
         init_completion(&ctx->ref_comp);
         init_completion(&ctx->sq_thread_comp);
-        idr_init(&ctx->io_buffer_idr);
+        xa_init_flags(&ctx->io_buffers, XA_FLAGS_ALLOC1);
         xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
         mutex_init(&ctx->uring_lock);
         init_waitqueue_head(&ctx->wait);
@@ -2990,7 +2990,7 @@ static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
 
         lockdep_assert_held(&req->ctx->uring_lock);
 
-        head = idr_find(&req->ctx->io_buffer_idr, bgid);
+        head = xa_load(&req->ctx->io_buffers, bgid);
         if (head) {
                 if (!list_empty(&head->list)) {
                         kbuf = list_last_entry(&head->list, struct io_buffer,
@@ -2998,7 +2998,7 @@ static struct io_buffer *io_buffer_select(struct io_kiocb *req, size_t *len,
                         list_del(&kbuf->list);
                 } else {
                         kbuf = head;
-                        idr_remove(&req->ctx->io_buffer_idr, bgid);
+                        xa_erase(&req->ctx->io_buffers, bgid);
                 }
                 if (*len > kbuf->len)
                         *len = kbuf->len;
@@ -3960,7 +3960,7 @@ static int __io_remove_buffers(struct io_ring_ctx *ctx, struct io_buffer *buf,
                 }
                 i++;
                 kfree(buf);
-                idr_remove(&ctx->io_buffer_idr, bgid);
+                xa_erase(&ctx->io_buffers, bgid);
 
         return i;
 }
@@ -3978,7 +3978,7 @@ static int io_remove_buffers(struct io_kiocb *req, bool force_nonblock,
         lockdep_assert_held(&ctx->uring_lock);
 
         ret = -ENOENT;
-        head = idr_find(&ctx->io_buffer_idr, p->bgid);
+        head = xa_load(&ctx->io_buffers, p->bgid);
         if (head)
                 ret = __io_remove_buffers(ctx, head, p->bgid, p->nbufs);
         if (ret < 0)
@@ -4069,21 +4069,14 @@ static int io_provide_buffers(struct io_kiocb *req, bool force_nonblock,
 
         lockdep_assert_held(&ctx->uring_lock);
 
-        list = head = idr_find(&ctx->io_buffer_idr, p->bgid);
+        list = head = xa_load(&ctx->io_buffers, p->bgid);
 
         ret = io_add_buffers(p, &head);
-        if (ret < 0)
-                goto out;
-
-        if (!list) {
-                ret = idr_alloc(&ctx->io_buffer_idr, head, p->bgid, p->bgid + 1,
-                                        GFP_KERNEL);
-                if (ret < 0) {
+        if (ret >= 0 && !list) {
+                ret = xa_insert(&ctx->io_buffers, p->bgid, head, GFP_KERNEL);
+                if (ret < 0)
                         __io_remove_buffers(ctx, head, p->bgid, -1U);
-                        goto out;
-                }
         }
-out:
         if (ret < 0)
                 req_set_fail_links(req);
 
@@ -8411,19 +8404,13 @@ static int io_eventfd_unregister(struct io_ring_ctx *ctx)
         return -ENXIO;
 }
 
-static int __io_destroy_buffers(int id, void *p, void *data)
-{
-        struct io_ring_ctx *ctx = data;
-        struct io_buffer *buf = p;
-
-        __io_remove_buffers(ctx, buf, id, -1U);
-        return 0;
-}
-
 static void io_destroy_buffers(struct io_ring_ctx *ctx)
 {
-        idr_for_each(&ctx->io_buffer_idr, __io_destroy_buffers, ctx);
-        idr_destroy(&ctx->io_buffer_idr);
+        struct io_buffer *buf;
+        unsigned long index;
+
+        xa_for_each(&ctx->io_buffers, index, buf)
+                __io_remove_buffers(ctx, buf, index, -1U);
 }
 
 static void io_ring_ctx_free(struct io_ring_ctx *ctx)
From: Gulam Mohamed <gulam.mohamed@oracle.com>
commit 9e67600ed6b8565da4b85698ec659b5879a6c1c6 upstream.
A kernel panic was observed due to a timing issue between the sync thread and the initiator processing a login response from the target. A session reopen can be invoked both from the session sync thread when iscsid restarts and from iscsid via the error handler. Before the initiator receives the response to a login, another reopen request can be sent from the error handler/sync session. When the initial login response is subsequently processed, the connection has already been closed and the socket has been released.
To fix this, a new connection state, ISCSI_CONN_BOUND, is added:
- Set the connection state to ISCSI_CONN_DOWN in iscsi_if_ep_disconnect() and iscsi_if_stop_conn()
- Set the connection state to the newly added value ISCSI_CONN_BOUND after the connection is bound (transport->bind_conn())
- In iscsi_set_param(), return -ENOTCONN if the connection state is neither ISCSI_CONN_BOUND nor ISCSI_CONN_UP (a conceptual sketch of these states follows below)
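Below is a conceptual sketch of the resulting state handling; it is illustrative pseudocode only, not the driver code (the real states and checks are in the diff that follows):

/*
 * Conceptual sketch of the connection states used by this series and
 * the new gate in iscsi_set_param(); names abbreviated for illustration.
 */
enum conn_state {
        CONN_UP,        /* login completed */
        CONN_DOWN,      /* stopped or endpoint disconnected */
        CONN_FAILED,    /* in recovery */
        CONN_BOUND,     /* bind_conn() succeeded, login not yet complete */
};

/* Setting parameters is only allowed once the connection is bound or up. */
static int set_param_allowed(enum conn_state state)
{
        return state == CONN_BOUND || state == CONN_UP;
}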
Link: https://lore.kernel.org/r/20210325093248.284678-1-gulam.mohamed@oracle.com
Reviewed-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Gulam Mohamed <gulam.mohamed@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
---
 drivers/scsi/scsi_transport_iscsi.c | 14 +++++++++++++-
 include/scsi/scsi_transport_iscsi.h |  1 +
 2 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
index c520239..cb7b74a0 100644
--- a/drivers/scsi/scsi_transport_iscsi.c
+++ b/drivers/scsi/scsi_transport_iscsi.c
@@ -2480,6 +2480,7 @@ static void iscsi_if_stop_conn(struct iscsi_cls_conn *conn, int flag)
          */
         mutex_lock(&conn_mutex);
         conn->transport->stop_conn(conn, flag);
+        conn->state = ISCSI_CONN_DOWN;
         mutex_unlock(&conn_mutex);
 
 }
@@ -2906,6 +2907,13 @@ int iscsi_session_event(struct iscsi_cls_session *session,
         default:
                 err = transport->set_param(conn, ev->u.set_param.param,
                                            data, ev->u.set_param.len);
+                if ((conn->state == ISCSI_CONN_BOUND) ||
+                        (conn->state == ISCSI_CONN_UP)) {
+                        err = transport->set_param(conn, ev->u.set_param.param,
+                                        data, ev->u.set_param.len);
+                } else {
+                        return -ENOTCONN;
+                }
         }
 
         return err;
@@ -2965,6 +2973,7 @@ static int iscsi_if_ep_disconnect(struct iscsi_transport *transport,
                 mutex_lock(&conn->ep_mutex);
                 conn->ep = NULL;
                 mutex_unlock(&conn->ep_mutex);
+                conn->state = ISCSI_CONN_DOWN;
         }
 
         transport->ep_disconnect(ep);
@@ -3732,6 +3741,8 @@ static int iscsi_logout_flashnode_sid(struct iscsi_transport *transport,
                 ev->r.retcode = transport->bind_conn(session, conn,
                                                 ev->u.b_conn.transport_eph,
                                                 ev->u.b_conn.is_leading);
+                if (!ev->r.retcode)
+                        conn->state = ISCSI_CONN_BOUND;
                 mutex_unlock(&conn_mutex);
 
                 if (ev->r.retcode || !transport->ep_connect)
@@ -3971,7 +3982,8 @@ static ISCSI_CLASS_ATTR(conn, field, S_IRUGO, show_conn_param_##param, \
 static const char *const connection_state_names[] = {
         [ISCSI_CONN_UP] = "up",
         [ISCSI_CONN_DOWN] = "down",
-        [ISCSI_CONN_FAILED] = "failed"
+        [ISCSI_CONN_FAILED] = "failed",
+        [ISCSI_CONN_BOUND] = "bound"
 };
 
 static ssize_t show_conn_state(struct device *dev,
diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
index 8a26a2f..fc5a398 100644
--- a/include/scsi/scsi_transport_iscsi.h
+++ b/include/scsi/scsi_transport_iscsi.h
@@ -193,6 +193,7 @@ enum iscsi_connection_state {
         ISCSI_CONN_UP = 0,
         ISCSI_CONN_DOWN,
         ISCSI_CONN_FAILED,
+        ISCSI_CONN_BOUND,
 };
 
 struct iscsi_cls_conn {
From: Mike Christie <michael.christie@oracle.com>
commit 0dcf8febcb7b9d42bec98bc068e01d1a6ea578b8 upstream.
In commit 9e67600ed6b8 ("scsi: iscsi: Fix race condition between login and sync thread") I missed that libiscsi was now setting the iSCSI class state, so that patch ended up resetting the state during connection stoppage and using the wrong state value during ep_disconnect. This patch moves the setting of the class state into the class module and then fixes the two issues above.
Link: https://lore.kernel.org/r/20210406171746.5016-1-michael.christie@oracle.com
Fixes: 9e67600ed6b8 ("scsi: iscsi: Fix race condition between login and sync thread")
Cc: Gulam Mohamed <gulam.mohamed@oracle.com>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
---
 drivers/scsi/libiscsi.c             | 26 +++-----------------------
 drivers/scsi/scsi_transport_iscsi.c | 18 +++++++++++++++---
 2 files changed, 18 insertions(+), 26 deletions(-)
diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
index 41b8192..41023fc 100644
--- a/drivers/scsi/libiscsi.c
+++ b/drivers/scsi/libiscsi.c
@@ -3089,9 +3089,10 @@ int iscsi_conn_start(struct iscsi_cls_conn *cls_conn)
         }
 }
 
-static void iscsi_start_session_recovery(struct iscsi_session *session,
-                                         struct iscsi_conn *conn, int flag)
+void iscsi_conn_stop(struct iscsi_cls_conn *cls_conn, int flag)
 {
+        struct iscsi_conn *conn = cls_conn->dd_data;
+        struct iscsi_session *session = conn->session;
         int old_stop_stage;
 
         mutex_lock(&session->eh_mutex);
@@ -3149,27 +3150,6 @@ static void iscsi_start_session_recovery(struct iscsi_session *session,
         spin_unlock_bh(&session->frwd_lock);
         mutex_unlock(&session->eh_mutex);
 }
-
-void iscsi_conn_stop(struct iscsi_cls_conn *cls_conn, int flag)
-{
-        struct iscsi_conn *conn = cls_conn->dd_data;
-        struct iscsi_session *session = conn->session;
-
-        switch (flag) {
-        case STOP_CONN_RECOVER:
-                cls_conn->state = ISCSI_CONN_FAILED;
-                break;
-        case STOP_CONN_TERM:
-                cls_conn->state = ISCSI_CONN_DOWN;
-                break;
-        default:
-                iscsi_conn_printk(KERN_ERR, conn,
-                                  "invalid stop flag %d\n", flag);
-                return;
-        }
-
-        iscsi_start_session_recovery(session, conn, flag);
-}
 EXPORT_SYMBOL_GPL(iscsi_conn_stop);
 
 int iscsi_conn_bind(struct iscsi_cls_session *cls_session,
diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
index cb7b74a0..2735178 100644
--- a/drivers/scsi/scsi_transport_iscsi.c
+++ b/drivers/scsi/scsi_transport_iscsi.c
@@ -2479,10 +2479,22 @@ static void iscsi_if_stop_conn(struct iscsi_cls_conn *conn, int flag)
          * it works.
          */
         mutex_lock(&conn_mutex);
+        switch (flag) {
+        case STOP_CONN_RECOVER:
+                conn->state = ISCSI_CONN_FAILED;
+                break;
+        case STOP_CONN_TERM:
+                conn->state = ISCSI_CONN_DOWN;
+                break;
+        default:
+                iscsi_cls_conn_printk(KERN_ERR, conn,
+                                      "invalid stop flag %d\n", flag);
+                goto unlock;
+        }
+
         conn->transport->stop_conn(conn, flag);
-        conn->state = ISCSI_CONN_DOWN;
+unlock:
         mutex_unlock(&conn_mutex);
-
 }
 
 static void stop_conn_work_fn(struct work_struct *work)
@@ -2973,7 +2985,7 @@ static int iscsi_if_ep_disconnect(struct iscsi_transport *transport,
                 mutex_lock(&conn->ep_mutex);
                 conn->ep = NULL;
                 mutex_unlock(&conn->ep_mutex);
-                conn->state = ISCSI_CONN_DOWN;
+                conn->state = ISCSI_CONN_FAILED;
         }
 
         transport->ep_disconnect(ep);
On Tue, Jul 13, 2021 at 05:18:30PM +0800, Hanjun Guo wrote:
> Hi Greg, Sasha,
>
> The patches below are a collection of bugfixes that we found while
> using the stable 5.10 kernel. Please consider applying them; I have
> also Cc'ed the author and maintainer of each patch in case there are
> any objections.
All look good, thanks for these, now queued up.
greg k-h