On Wed, Apr 23, 2025 at 5:14 PM Jakub Kicinski <kuba@kernel.org> wrote:
> On Thu, 17 Apr 2025 20:43:23 +0000 Harshitha Ramamurthy wrote:
> > Also this patch cleans up the error handling code of
> > gve_adminq_destroy_tx_queue.
> >
> > static int gve_adminq_destroy_tx_queue(struct gve_priv *priv, u32 queue_index)
> > {
> > 	union gve_adminq_command cmd;
> > 	int err;
> >
> > 	memset(&cmd, 0, sizeof(cmd));
> > 	cmd.opcode = cpu_to_be32(GVE_ADMINQ_DESTROY_TX_QUEUE);
> >
> > @@ -808,11 +820,7 @@ static int gve_adminq_destroy_tx_queue(struct gve_priv *priv, u32 queue_index)
> >  		.queue_id = cpu_to_be32(queue_index),
> >  	};
> >
> > -	err = gve_adminq_issue_cmd(priv, &cmd);
> > -	if (err)
> > -		return err;
> > -
> > -	return 0;
> > +	return gve_adminq_issue_cmd(priv, &cmd);
> >  }
>
> You mean this cleanup? That's not appropriate for a stable fix...
>
> Could you also explain which callers of this code are not already under
> rtnl_lock and/or the netdev instance lock?
When I discovered this, I thought it applied more widely, but on rereading it
turns out it only applies to the upcoming timestamping patches and to a
previous flow steering attempt that was scuttled. All current callers hold
rtnl_lock or the netdev instance lock, so this should not have been sent to
net. I will send it as part of the timestamping series instead. Thanks.
--
pw-bot: cr