On Sat, Sep 06, 2025 at 07:47:00AM +0530, Harshit Mogalapalli wrote:
Hi Jens,
On 06/09/25 01:28, Jens Axboe wrote:
On 9/5/25 5:04 AM, Harshit Mogalapalli wrote:
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 5ce332fc6ff5..3b27d9bcf298 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -648,6 +648,8 @@ struct io_kiocb {
 	struct io_task_work		io_task_work;
 	/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
 	struct hlist_node		hash_node;
+	/* for private io_kiocb freeing */
+	struct rcu_head			rcu_head;
 	/* internal polling, see IORING_FEAT_FAST_POLL */
 	struct async_poll		*apoll;
 	/* opcode allocated if it needs to store data for async defer */
Thanks a lot for looking into this one.
This should go into a union with hash_node, rather than bloat the struct. That's how it was done upstream, not sure why this one is different?
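For reference, the union-based placement being suggested would look roughly like this (a sketch of the layout only, not the exact stable patch; it assumes, as the upstream layout does, that hash_node and rcu_head are never needed at the same time):

	struct io_task_work		io_task_work;
	union {
		/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
		struct hlist_node	hash_node;
		/* for private io_kiocb freeing */
		struct rcu_head		rcu_head;
	};
	/* internal polling, see IORING_FEAT_FAST_POLL */
	struct async_poll		*apoll;

Since a union is only as large as its largest member, struct io_kiocb keeps its size instead of growing by a separate rcu_head field.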
We don't have commit 01ee194d1aba ("io_uring: add support for hybrid IOPOLL"), which moves hash_node (the struct hlist_node member) into a union together with the new iopoll_start field:
 	struct io_task_work		io_task_work;
-	/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
-	struct hlist_node		hash_node;
+	union {
+		/*
+		 * for polled requests, i.e. IORING_OP_POLL_ADD and async armed
+		 * poll
+		 */
+		struct hlist_node	hash_node;
+		/* For IOPOLL setup queues, with hybrid polling */
+		u64			iopoll_start;
+	};
Given that we don't need the above commit, and partly because I didn't realize the size bloat a separate member would add, I added rcu_head without a union. Thanks a lot for correcting this. I will check the struct size next time I run into this situation.
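To make the size argument concrete, here is a small standalone program (plain userspace C with hypothetical stand-in types, not kernel code) showing why a separate member bloats the struct while a union does not, assuming 8-byte pointers:

#include <stdio.h>

/* stand-ins, roughly the size of struct hlist_node and struct rcu_head on x86-64 */
struct node_like { void *next, *pprev; };
struct rcu_like  { void *next; void (*func)(void *); };

/* variant 1: rcu_head added as a separate member (what the v1 backport did) */
struct req_separate {
	struct node_like hash_node;
	struct rcu_like  rcu_head;	/* adds another 16 bytes */
};

/* variant 2: rcu_head overlaid with hash_node in a union (as suggested) */
struct req_union {
	union {
		struct node_like hash_node;
		struct rcu_like  rcu_head;	/* reuses the same 16 bytes */
	};
};

int main(void)
{
	printf("separate member: %zu bytes\n", sizeof(struct req_separate));	/* 32 on LP64 */
	printf("union:           %zu bytes\n", sizeof(struct req_union));	/* 16 on LP64 */
	return 0;
}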
Thank you very much for correcting this and providing a backport.
Greg/Sasha: Should I send a v2 of this series with my backport swapped for the one from Jens?
I just took Jens's patch. So I'll drop your patch from this series too.
thanks,
greg k-h