On 2/8/24 14:14, Jens Axboe wrote:
On 2/8/24 2:55 PM, Bart Van Assche wrote:
-static int ffs_aio_cancel(struct kiocb *kiocb)
+static void ffs_epfile_cancel_kiocb(struct kiocb *kiocb)
 {
 	struct ffs_io_data *io_data = kiocb->private;
 	struct ffs_epfile *epfile = kiocb->ki_filp->private_data;
 	unsigned long flags;
-	int value;

 	spin_lock_irqsave(&epfile->ffs->eps_lock, flags);
 	if (io_data && io_data->ep && io_data->req)
-		value = usb_ep_dequeue(io_data->ep, io_data->req);
-	else
-		value = -EINVAL;
+		usb_ep_dequeue(io_data->ep, io_data->req);
 	spin_unlock_irqrestore(&epfile->ffs->eps_lock, flags);
-
-	return value;
 }
I'm assuming the NULL checks can go because it's just the async parts now?
Probably. I will look into this.
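In case it helps the discussion, this is a minimal sketch of what the callback could look like if those checks indeed turn out to be unnecessary. It assumes that io_data, io_data->ep and io_data->req are always non-NULL for a request that is still on the active list; I have not verified that yet:

static void ffs_epfile_cancel_kiocb(struct kiocb *kiocb)
{
	struct ffs_io_data *io_data = kiocb->private;
	struct ffs_epfile *epfile = kiocb->ki_filp->private_data;
	unsigned long flags;

	/* eps_lock serializes against the request completing or being freed. */
	spin_lock_irqsave(&epfile->ffs->eps_lock, flags);
	usb_ep_dequeue(io_data->ep, io_data->req);
	spin_unlock_irqrestore(&epfile->ffs->eps_lock, flags);
}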
+static void aio_cancel_and_del(struct aio_kiocb *req);
Just move the function higher up? It doesn't have any dependencies.
aio_cancel_and_del() calls aio_poll_cancel(). aio_poll_cancel() calls poll_iocb_lock_wq(). poll_iocb_lock_wq() is defined below the first call of aio_cancel_and_del(). It's probably possible to get rid of that function declaration but a nontrivial amount of code would have to be moved.
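To illustrate the ordering, here is a condensed sketch of the resulting fs/aio.c layout (the caller shown is only illustrative, not the exact first call site):

static void aio_cancel_and_del(struct aio_kiocb *req);	/* forward declaration */

/* ... the first caller of aio_cancel_and_del() lives here ... */

/* ... poll helpers: poll_iocb_lock_wq(), aio_poll_cancel() ... */

static void aio_cancel_and_del(struct aio_kiocb *req)
{
	/* ... */
	aio_poll_cancel(req);	/* needs the poll helpers defined above */
	/* ... */
}

Moving the definition above its first caller would therefore also mean moving the poll helpers, hence the forward declaration.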
@@ -1552,6 +1538,24 @@ static ssize_t aio_setup_rw(int rw, const struct iocb *iocb,
 	return __import_iovec(rw, buf, len, UIO_FASTIOV, iovec, iter, compat);
 }
 
+static void aio_add_rw_to_active_reqs(struct kiocb *req)
+{
+	struct aio_kiocb *aio = container_of(req, struct aio_kiocb, rw);
+	struct kioctx *ctx = aio->ki_ctx;
+	unsigned long flags;
+
+	/*
+	 * If the .cancel_kiocb() callback has been set, add the request
+	 * to the list of active requests.
+	 */
+	if (!req->ki_filp->f_op->cancel_kiocb)
+		return;
+
+	spin_lock_irqsave(&ctx->ctx_lock, flags);
+	list_add_tail(&aio->ki_list, &ctx->active_reqs);
+	spin_unlock_irqrestore(&ctx->ctx_lock, flags);
+}
This can use spin_lock_irq(), always called from process context.
I will make this change.
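Concretely, the helper then becomes something like the following (same body as the hunk above, only the locking changed; untested):

static void aio_add_rw_to_active_reqs(struct kiocb *req)
{
	struct aio_kiocb *aio = container_of(req, struct aio_kiocb, rw);
	struct kioctx *ctx = aio->ki_ctx;

	/*
	 * If the .cancel_kiocb() callback has been set, add the request
	 * to the list of active requests.
	 */
	if (!req->ki_filp->f_op->cancel_kiocb)
		return;

	/* Only called from process context, so the irqsave variant is not needed. */
	spin_lock_irq(&ctx->ctx_lock);
	list_add_tail(&aio->ki_list, &ctx->active_reqs);
	spin_unlock_irq(&ctx->ctx_lock);
}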
+static void aio_cancel_and_del(struct aio_kiocb *req)
+{
+	void (*cancel_kiocb)(struct kiocb *) =
+		req->rw.ki_filp->f_op->cancel_kiocb;
+	struct kioctx *ctx = req->ki_ctx;
+
+	lockdep_assert_held(&ctx->ctx_lock);
+
+	switch (req->ki_opcode) {
+	case IOCB_CMD_PREAD:
+	case IOCB_CMD_PWRITE:
+	case IOCB_CMD_PREADV:
+	case IOCB_CMD_PWRITEV:
+		if (cancel_kiocb)
+			cancel_kiocb(&req->rw);
+		break;
+	case IOCB_CMD_FSYNC:
+	case IOCB_CMD_FDSYNC:
+		break;
+	case IOCB_CMD_POLL:
+		aio_poll_cancel(req);
+		break;
+	default:
+		WARN_ONCE(true, "invalid aio operation %d\n", req->ki_opcode);
+	}
+
+	list_del_init(&req->ki_list);
+}
Why don't you just keep ki_cancel() and change it to a void return that takes an aio_kiocb? Then you don't need this odd switch, or to add an opcode field just for this. That seems cleaner.
Keeping .ki_cancel() means that it must be set before I/O starts, and only if the I/O is submitted by libaio. That would require a way to recognize whether or not a struct kiocb is embedded in a struct aio_kiocb, e.g. the patch that you posted in reply to version one of this patch. Does anyone else want to comment on this?
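For reference, my understanding of the alternative is roughly the following (illustrative only, reduced to the relevant fields; this is not code from either patch series):

struct aio_kiocb {
	struct kiocb		rw;
	struct list_head	ki_list;
	/* void return, takes the aio_kiocb instead of the embedded kiocb */
	void			(*ki_cancel)(struct aio_kiocb *iocb);
	/* ... */
};

static void aio_cancel_and_del(struct aio_kiocb *req)
{
	if (req->ki_cancel)
		req->ki_cancel(req);	/* no switch on ki_opcode needed */
	list_del_init(&req->ki_list);
}

The question above still applies: something has to set ->ki_cancel before the I/O is started, and only for requests submitted through the AIO interface.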
Thanks,
Bart.