Move start_freeze into nvme_rdma_configure_io_queues(); this has at least
two benefits:
1) fix the unbalanced freeze and unfreeze, since the re-connection work may
fail or be broken by removal
2) IO during error recovery can fail fast, because nvme fabrics
unquiesces queues after teardown.
One side effect is that !mpath requests may time out during connecting
because of the queue topology change, but that does not look like a big deal:
1) the same problem exists with the current code base
2) compared with !mpath, the mpath use case is dominant
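To make the balance explicit, here is a standalone, simplified sketch of the
reconnect path after this change (all names below are stand-ins, not the
actual nvme_* helpers):

/*
 * Simplified model of the reconnect path after this patch: the freeze is
 * started, waited for and, on error, undone in the same function.
 */
#include <stdbool.h>
#include <stdio.h>

static void start_freeze(void)        { puts("freeze started"); }
static void unfreeze(void)            { puts("unfrozen"); }
static bool wait_freeze_timeout(void) { return true; }

static int configure_io_queues(bool new_ctrl)
{
	if (!new_ctrl) {
		start_freeze();          /* moved here by this patch */
		if (!wait_freeze_timeout()) {
			unfreeze();      /* balance the freeze on error */
			return -1;
		}
		/* blk_mq_update_nr_hw_queues() etc. */
	}
	return 0;
}

int main(void)
{
	/*
	 * Teardown no longer calls start_freeze(), so a re-connection that
	 * fails or is broken by removal before reaching this point cannot
	 * leave an unmatched freeze behind.
	 */
	if (configure_io_queues(false))
		return 1;
	unfreeze();                      /* normal success path */
	return 0;
}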
Fixes: 9f98772ba307 ("nvme-rdma: fix controller reset hang during traffic")
Cc: stable@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/nvme/host/rdma.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index d433b2ec07a6..337a624a537c 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -883,6 +883,7 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
goto out_cleanup_tagset;
if (!new) {
+ nvme_start_freeze(&ctrl->ctrl);
nvme_unquiesce_io_queues(&ctrl->ctrl);
if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
/*
@@ -891,6 +892,7 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
* to be safe.
*/
ret = -ENODEV;
+ nvme_unfreeze(&ctrl->ctrl);
goto out_wait_freeze_timed_out;
}
blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset,
@@ -940,7 +942,6 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
bool remove)
{
if (ctrl->ctrl.queue_count > 1) {
- nvme_start_freeze(&ctrl->ctrl);
nvme_quiesce_io_queues(&ctrl->ctrl);
nvme_sync_io_queues(&ctrl->ctrl);
nvme_rdma_stop_io_queues(ctrl);
--
2.40.1
Move start_freeze into nvme_tcp_configure_io_queues(); this has at least
two benefits:
1) fix the unbalanced freeze and unfreeze, since the re-connection work may
fail or be broken by removal
2) IO during error recovery can fail fast, because nvme fabrics
unquiesces queues after teardown.
One side effect is that !mpath requests may time out during connecting
because of the queue topology change, but that does not look like a big deal:
1) the same problem exists with the current code base
2) compared with !mpath, the mpath use case is dominant
Fixes: 2875b0aecabe ("nvme-tcp: fix controller reset hang during traffic")
Cc: stable@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/nvme/host/tcp.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 3e7dd6f91832..fb24cd8ac46c 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1868,6 +1868,7 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
goto out_cleanup_connect_q;
if (!new) {
+ nvme_start_freeze(ctrl);
nvme_unquiesce_io_queues(ctrl);
if (!nvme_wait_freeze_timeout(ctrl, NVME_IO_TIMEOUT)) {
/*
@@ -1876,6 +1877,7 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
* to be safe.
*/
ret = -ENODEV;
+ nvme_unfreeze(ctrl);
goto out_wait_freeze_timed_out;
}
blk_mq_update_nr_hw_queues(ctrl->tagset,
@@ -1980,7 +1982,6 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
if (ctrl->queue_count <= 1)
return;
nvme_quiesce_admin_queue(ctrl);
- nvme_start_freeze(ctrl);
nvme_quiesce_io_queues(ctrl);
nvme_sync_io_queues(ctrl);
nvme_tcp_stop_io_queues(ctrl);
--
2.40.1
These are ZDI vulnerability patches that were not applied to the Linux
5.15 stable kernel.
Namjae Jeon (4):
ksmbd: use ksmbd_req_buf_next() in ksmbd_smb2_check_message()
ksmbd: validate command payload size
ksmbd: fix out-of-bound read in smb2_write
ksmbd: validate session id and tree id in the compound request
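As background for the last patch above, the check is conceptually along
these lines (a standalone, simplified sketch; the structure and field names
are illustrative, not the actual ksmbd definitions):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the ids carried by each PDU of a chain. */
struct pdu {
	uint64_t session_id;
	uint32_t tree_id;
};

/*
 * Every PDU in a compound (chained) request must refer to the session and
 * tree the chain was set up with; otherwise a crafted follow-up PDU could
 * act on another session or share.
 */
bool compound_ids_valid(const struct pdu *pdus, int count,
			uint64_t sess_id, uint32_t tree_id)
{
	for (int i = 0; i < count; i++) {
		if (pdus[i].session_id != sess_id ||
		    pdus[i].tree_id != tree_id)
			return false;	/* reject the whole compound */
	}
	return true;
}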
fs/ksmbd/server.c | 33 ++++++++++++++++++++-------------
fs/ksmbd/smb2misc.c | 38 ++++++++++++++++++++------------------
fs/ksmbd/smb2pdu.c | 44 +++++++++++++++++++++++++++++++++++++++-----
3 files changed, 79 insertions(+), 36 deletions(-)
--
2.25.1
Hi,
as there are new hardware directives, we need a little adaptation
for the AUX invalidation sequence.
In this version we support all the engines affected by this
change.
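For readers new to the topic, the sequence the series implements has roughly
this shape (a simplified, self-contained sketch; every name below is
hypothetical, not the actual i915 code):

#include <stdint.h>

typedef uint32_t u32;

/* Hypothetical flag/register encodings, for illustration only. */
enum { FLUSH_CACHES = 1 << 0, CCS_FLUSH = 1 << 1, AUX_INV_BIT = 1 << 0 };

static u32 *emit_pipe_control_flush(u32 *cs, u32 flags) { *cs++ = flags; return cs; }
static u32 *emit_write_reg(u32 *cs, u32 reg, u32 val) { *cs++ = reg; *cs++ = val; return cs; }
static u32 *emit_poll_until_clear(u32 *cs, u32 reg, u32 bit) { *cs++ = reg; *cs++ = bit; return cs; }
static u32 aux_inv_reg(int engine) { return 0x4200u + (u32)engine; /* made up */ }

/*
 * 1. Quiesce memory: flush caches, with the CCS flush set in the pipe
 *    control (or sent as a separate command on engines that need it).
 * 2. Write the engine's AUX invalidation register.
 * 3. Poll that register until the hardware clears the bit.
 */
u32 *emit_aux_invalidation(u32 *cs, int engine)
{
	cs = emit_pipe_control_flush(cs, FLUSH_CACHES | CCS_FLUSH);
	cs = emit_write_reg(cs, aux_inv_reg(engine), AUX_INV_BIT);
	cs = emit_poll_until_clear(cs, aux_inv_reg(engine), AUX_INV_BIT);
	return cs;
}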
The stable backport has some challenges because the original
patch that this series fixes has had more changes in between.
This series is slowly growing with code refactorings and
features added and fixed.
Thanks a lot Nirmoy, Andrzej and Matt for your review and for the
fruitful discussions!
Thanks,
Andi
Changelog:
=========
v5 -> v6
- Fixed the ccs flush in the VE and BCS engines. It is sent as a
separate command instead of being added in the pipe control.
- Separated the CCS flushing in the pipe control patch from the
quiescing of the memory. They were meant to be in separate
patches already in the previous version, but apparently I
squashed them by mistake.
v4 -> v5
- The AUX CCS is added as a device property instead of checking
against FLAT CCS. This adds the new HAS_AUX_CCS check
(Patch 2, new).
- little and trivial refactoring here and there.
- extended the flags{0,1}/bit_group_{0,1} renaming to other
functions.
- Created an intel_emit_pipe_control_cs() wrapper for submitting
the pipe control.
- Quiesce memory for all the engines, not just RCS (Patch 6,
new).
- The PIPE_CONTROL_CCS_FLUSH is added to all the engines.
- Remove redundant EMIT_FLUSH_CCS mode flag.
- Remove unnecessary NOOPs from the command streamer for
invalidating the CCS table.
- Use INVALID_MMIO_REG and gen12_get_aux_inv_reg() instead of
__MMIO(0) and reg.reg.
- Remove useless wrapper and just use gen12_get_aux_inv_reg().
v3 -> v4
- A trivial patch 3 is added to rename the flags with
bit_group_{0,1} to align with the datasheet naming.
- Patch 4 fixes a confusion I made where the CCS flag was
applied to the wrong bit group.
v2 -> v3
- added r-b from Nirmoy in patch 1 and 4.
- added patch 3 which enables the ccs_flush in the control pipe
for mtl+ compute and render engines.
- added redundant checks in patch 2 for enabling the EMIT_FLUSH
flag.
v1 -> v2
- add a clean up preliminary patch for the existing registers
- add support for more engines
- add the Fixes tag
Andi Shyti (7):
drm/i915/gt: Cleanup aux invalidation registers
drm/i915: Add the has_aux_ccs device property
drm/i915/gt: Rename flags with bit_group_X according to the datasheet
drm/i915/gt: Refactor intel_emit_pipe_control_cs() in a single
function
drm/i915/gt: Ensure memory quiesced before invalidation for all
engines
drm/i915/gt: Enable the CCS_FLUSH bit in the pipe control
drm/i915/gt: Support aux invalidation on all engines
Jonathan Cavitt (2):
drm/i915/gt: Ensure memory quiesced before invalidation
drm/i915/gt: Poll aux invalidation register bit on invalidation
drivers/gpu/drm/i915/gt/gen8_engine_cs.c | 222 +++++++++++++------
drivers/gpu/drm/i915/gt/gen8_engine_cs.h | 21 +-
drivers/gpu/drm/i915/gt/intel_gpu_commands.h | 2 +
drivers/gpu/drm/i915/gt/intel_gt_regs.h | 16 +-
drivers/gpu/drm/i915/gt/intel_lrc.c | 17 +-
drivers/gpu/drm/i915/i915_drv.h | 1 +
drivers/gpu/drm/i915/i915_pci.c | 5 +-
drivers/gpu/drm/i915/intel_device_info.h | 1 +
8 files changed, 186 insertions(+), 99 deletions(-)
--
2.40.1
Dear kernel developers!
Something unintended must have happened in kernel version 4.20.0.
On my PC I have an Iomega REV 35 drive. Up to kernel version 4.19.125
that device could be mounted with write access for the root user.
(Unfortunately I could not find a way to have write access as a normal
user.)
The command I use for mounting:
-----
sudo mount -t udf -o
rw,nosuid,nodev,relatime,uid=1000,gid=1000,iocharset=utf8 /dev/sr1
/media/rm_l1604/REV35
-----
The very same command gives the following message with kernel versions
>= 4.20.0.
-----
mount: /dev/sr1 is write-protected, mounting read-only
-----
As a result there is no chance to get write access to the device. At
least I could not find a way to get write access.
Is there any workaround for this problem? Please let me know!
Thank you and best regards,
Reinhold Mannsberger