The indicated patch introduced a barrier in the controller's sysfs_delete attribute that rejects the request if the controller isn't yet created. "Created" is defined as at least one call to nvme_start_ctrl().
This is problematic in error-injection testing. If an error occurs on the initial attempt to create an association and the controller enters reconnect attempts, the admin cannot delete the controller until either an association is successfully created or ctrl_loss_tmo times out.
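For scale, the fabrics defaults make that window long; roughly (paraphrased, see drivers/nvme/host/fabrics.h for the exact values):

/* paraphrased fabrics defaults - not the exact upstream source */
#define NVMF_DEF_RECONNECT_DELAY	10	/* seconds between reconnect attempts */
#define NVMF_DEF_CTRL_LOSS_TMO		600	/* give up and tear the controller down after ~10 minutes */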
This is particularly hurtful when the "admin" is nvme-cli performing a connection to a discovery controller, initiated via auto-connect scripts. With the FC transport, if the first connection attempt fails, the controller enters the normal reconnect state but returns control to the cli thread that created the controller. At that point the cli attempts to read the discovery log via ioctl, which fails, so the cli treats it as an empty log and proceeds to delete the discovery controller. The delete is rejected and the controller is left live. If the discovery controller's reconnect later succeeds, nothing ever deletes it, and it sits live doing nothing.
Cc: stable@vger.kernel.org # v5.7+
Fixes: ce1518139e69 ("nvme: Fix controller creation races with teardown flow")
Signed-off-by: James Smart <james.smart@broadcom.com>
CC: Israel Rukshin <israelr@mellanox.com>
CC: Max Gurtovoy <maxg@mellanox.com>
CC: Christoph Hellwig <hch@lst.de>
CC: Keith Busch <kbusch@kernel.org>
CC: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c | 5 -----
 drivers/nvme/host/nvme.h | 1 -
 2 files changed, 6 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 05aa568a60af..86abce864aa9 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3481,10 +3481,6 @@ static ssize_t nvme_sysfs_delete(struct device *dev,
 {
 	struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
 
-	/* Can't delete non-created controllers */
-	if (!ctrl->created)
-		return -EBUSY;
-
 	if (device_remove_file_self(dev, attr))
 		nvme_delete_ctrl_sync(ctrl);
 	return count;
@@ -4355,7 +4351,6 @@ void nvme_start_ctrl(struct nvme_ctrl *ctrl)
 		nvme_queue_scan(ctrl);
 		nvme_start_queues(ctrl);
 	}
-	ctrl->created = true;
 }
 EXPORT_SYMBOL_GPL(nvme_start_ctrl);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index c5c1bac797aa..45cf360fefbc 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -300,7 +300,6 @@ struct nvme_ctrl {
 	struct nvme_command ka_cmd;
 	struct work_struct fw_act_work;
 	unsigned long events;
-	bool created;
 
 #ifdef CONFIG_NVME_MULTIPATH
 	/* asymmetric namespace access: */
> The indicated patch introduced a barrier in the controller's sysfs_delete attribute that rejects the request if the controller isn't yet created. "Created" is defined as at least one call to nvme_start_ctrl().
This is indeed a regression.
Perhaps we should also revert 12a0b6622107 ("nvme: don't hold nvmf_transports_rwsem for more than transport lookups"), which inherently caused this by removing the serialization of .create_ctrl()...
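Roughly, the difference is (paraphrased sketch of nvmf_create_ctrl(), not the exact upstream code):

	/* Before 12a0b6622107: nvmf_transports_rwsem was held across
	 * ->create_ctrl(), so controller creation was serialized for the
	 * entire connect attempt.
	 */
	down_read(&nvmf_transports_rwsem);
	ops = nvmf_lookup_transport(opts);
	...
	ctrl = ops->create_ctrl(dev, opts);	/* can block for a long time */
	up_read(&nvmf_transports_rwsem);

	/* After 12a0b6622107: the rwsem only covers the transport lookup and
	 * a module reference pins the transport instead, so ->create_ctrl()
	 * runs without any global serialization.
	 */
	down_read(&nvmf_transports_rwsem);
	ops = nvmf_lookup_transport(opts);
	if (ops && !try_module_get(ops->module))
		ops = NULL;
	up_read(&nvmf_transports_rwsem);
	...
	ctrl = ops->create_ctrl(dev, opts);
	module_put(ops->module);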
On 8/28/2020 12:08 PM, Sagi Grimberg wrote:
> This is indeed a regression.
> Perhaps we should also revert 12a0b6622107 ("nvme: don't hold nvmf_transports_rwsem for more than transport lookups"), which inherently caused this by removing the serialization of .create_ctrl()...
No, I believe the patch on the semaphore is correct. Otherwise things can be blocked a long time: a minute (one command timeout), or even multiple minutes in the case where a command failure in the core layers effectively gets ignored and thus doesn't trigger the error path in the transport. There can also be multiple /dev/nvme-fabrics commands stacked up, which makes the delays look much longer to the last guy.
As far as creation vs. teardown... yeah, not fun, but there are other ways to deal with it. For FC, I got rid of the separate create/reconnect threads a while ago, hence the return-control-while-reconnecting behavior, so I've had to deal with it. It's one area where it would be nice to see some convergence in implementation between the transports again.
-- james
> As far as creation vs. teardown... yeah, not fun, but there are other ways to deal with it. For FC, I got rid of the separate create/reconnect threads a while ago, hence the return-control-while-reconnecting behavior, so I've had to deal with it. It's one area where it would be nice to see some convergence in implementation between the transports again.
Doesn't fc have a bug there? In create_ctrl, after flushing the connect_work, what tells it whether a delete is racing with it (or has already run)?
On 8/28/2020 4:59 PM, Sagi Grimberg wrote:
> Doesn't fc have a bug there? In create_ctrl, after flushing the connect_work, what tells it whether a delete is racing with it (or has already run)?
I guess I don't understand what the bug is you are thinking about. Maybe there's a short period where the ctrl ptr is perhaps already freed and thus shouldn't be used - but I don't see it, as almost everything is simply looking at the value of the pointer, not dereferencing it.
I do have a bug or two with delete-association fighting with create-association - but it's mainly due to nvme_fc_error_recovery, not the delete routine. I've reworked this area after seeing your other patches and will be posting after some more testing. But there's no reason to serialize all ctrl creates.
-- james
> I guess I don't understand what the bug is you are thinking about. Maybe there's a short period where the ctrl ptr is perhaps already freed and thus shouldn't be used - but I don't see it, as almost everything is simply looking at the value of the pointer, not dereferencing it.
I'm referring to nvme_fc_init_ctrl: if a delete happens while it is waiting in flush_delayed_work(&ctrl->connect_work), won't you dereference and return a controller that has possibly already been deleted/freed?
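The tail of nvme_fc_init_ctrl() is roughly (paraphrased, not the exact code):

	/* wait for the initial association attempt to finish */
	flush_delayed_work(&ctrl->connect_work);

	/* if a delete raced in during the flush, ctrl may already be torn
	 * down here, yet it is still dereferenced for the print below and
	 * handed back to the caller.
	 */
	dev_info(ctrl->ctrl.device,
		"NVME-FC{%d}: new ctrl: NQN \"%s\"\n",
		ctrl->cnum, ctrl->ctrl.opts->subsysnqn);

	return &ctrl->ctrl;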
> I do have a bug or two with delete-association fighting with create-association - but it's mainly due to nvme_fc_error_recovery, not the delete routine. I've reworked this area after seeing your other patches and will be posting after some more testing. But there's no reason to serialize all ctrl creates.
Is it that big of an issue? It should fail rather quickly, shouldn't it?
On 8/31/2020 4:15 PM, Sagi Grimberg wrote:
> I'm referring to nvme_fc_init_ctrl: if a delete happens while it is waiting in flush_delayed_work(&ctrl->connect_work), won't you dereference and return a controller that has possibly already been deleted/freed?
ok - that matches my "short period", and it is possible, as there's one immediate printf that may dereference the ptr. After that, it's comparisons of the pointer value. I can move the printf to avoid the issue. That window's rather small.
>> I do have a bug or two with delete-association fighting with create-association - but it's mainly due to nvme_fc_error_recovery, not the delete routine. I've reworked this area after seeing your other patches and will be posting after some more testing. But there's no reason to serialize all ctrl creates.
> Is it that big of an issue? It should fail rather quickly, shouldn't it?
Not sure what you are asking. If it's how long it takes to fail the creation of a new association - it's at least 60s (an admin command timeout).
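For reference, that 60s is the default admin command timeout, roughly (paraphrased):

/* drivers/nvme/host/core.c (paraphrased) */
static unsigned int admin_timeout = 60;
module_param(admin_timeout, uint, 0644);
MODULE_PARM_DESC(admin_timeout, "timeout in seconds for admin commands");

/* drivers/nvme/host/nvme.h (paraphrased) */
#define ADMIN_TIMEOUT	(admin_timeout * HZ)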
-- james
> ok - that matches my "short period", and it is possible, as there's one immediate printf that may dereference the ptr. After that, it's comparisons of the pointer value. I can move the printf to avoid the issue. That window's rather small.
But you also return back &ctrl->ctrl, which is another dereference - and what will make ctrl an ERR_PTR?
Anyway, we should probably come up with something more robust...
> Not sure what you are asking. If it's how long it takes to fail the creation of a new association - it's at least 60s (an admin command timeout).
That's the worst case (admin command timeout), but is it the most common case?
Would making the timeouts shorter in the initial connect make sense? Just throwing out ideas...
On 9/1/2020 3:01 PM, Sagi Grimberg wrote:
> But you also return back &ctrl->ctrl, which is another dereference - and what will make ctrl an ERR_PTR?
> Anyway, we should probably come up with something more robust...
ok
>> Not sure what you are asking. If it's how long it takes to fail the creation of a new association - it's at least 60s (an admin command timeout).
> That's the worst case (admin command timeout), but is it the most common case?
Yes - as it currently corresponds to packet drops, and command failures are rare. FC has a feature to speed this up, but it's not widely implemented yet.
> Would making the timeouts shorter in the initial connect make sense? Just throwing out ideas...
I'd enjoy a reduced time regardless. But the main concern is how long we have the systemd processes blocked.
-- james