On 9/6/25 3:04 PM, Greg KH wrote:
On Sat, Sep 06, 2025 at 02:47:04PM -0600, Jens Axboe wrote:
On 9/6/25 12:36 PM, Greg KH wrote:
On Fri, Sep 05, 2025 at 07:23:00PM -0600, Jens Axboe wrote:
On 9/5/25 1:58 PM, Jens Axboe wrote:
On 9/5/25 5:04 AM, Harshit Mogalapalli wrote:
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 5ce332fc6ff5..3b27d9bcf298 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -648,6 +648,8 @@ struct io_kiocb {
 	struct io_task_work		io_task_work;
 	/* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */
 	struct hlist_node		hash_node;
+	/* for private io_kiocb freeing */
+	struct rcu_head			rcu_head;
 	/* internal polling, see IORING_FEAT_FAST_POLL */
 	struct async_poll		*apoll;
 	/* opcode allocated if it needs to store data for async defer */
This should go into a union with hash_node, rather than bloat the struct. That's how it was done upstream, not sure why this one is different?
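Roughly like the below, going from memory of how the upstream layout ended up (untested sketch, the exact surrounding fields may differ):

	union {
		/*
		 * for polled requests, i.e. IORING_OP_POLL_ADD and async
		 * armed poll
		 */
		struct hlist_node	hash_node;
		/* for private io_kiocb freeing */
		struct rcu_head		rcu_head;
	};

Since the request can't be on the poll hash and in the middle of an RCU-deferred free at the same time, the two can share the same space rather than growing io_kiocb.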
Here's a test variant with that sorted. Greg, I never got a FAILED email on this one, as far as I can tell. When a patch is marked with CC: stable@vger.kernel.org and the origin of the bug is clearly marked with a Fixes tag, I expect a 100% reliable notification if it fails to apply. Without one, I just kind of assume the patch flowed into stable.
Was this missed on my side, or was it on the stable side? If the latter, how did that happen? I always ensure that stable has what it needs and play nice on my side, but if misses like this can happen with the tooling, that makes me a bit nervous.
This looks like a failure on my side, sorry. I don't see any FAILED email that went out for this anywhere, so I messed up.
Sorry about that, and Harshit, thanks for noticing it.
Thanks for confirming, because I was worried it was on my side. But I thought these things were fully automated? I'm going to add something on my side to catch these in the future, just in case.
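Probably something simple to start with, along these lines (just a rough sketch of the idea, not necessarily what I'll end up running; the commit range and stable branch name here are placeholders):

#!/bin/sh
# For every commit sent upstream with a stable tag, check whether its
# subject shows up in the given stable branch; anything that doesn't,
# and didn't get a FAILED email either, needs a closer look.
range=${1:-v6.16..HEAD}
stable=${2:-linux-6.16.y}
git log --format='%H %s' --grep='[Cc][Cc]:.*stable@' "$range" |
while read -r sha subject; do
	if ! git log --oneline --fixed-strings --grep="$subject" "$stable" | grep -q .; then
		echo "not in $stable: $sha $subject"
	fi
done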
Hah, "fully automated", I wish...
Just because some people are curious about how the sausage is made, here's how I apply stable patches:
Was hoping to learn this :-)
- I get an mbox full of the patches that have a cc: stable tag in them, generated when Linus applies them to his tree. How that happens is another story...
- In mutt, I open the mbox and pick a patch to look at. If it seems sane (almost all do), I look for a "Fixes:" tag. If it's there, I press a key and a script of mine, plus a local database I've hacked together, tells me just how far back that "Fixes:" commit went (a rough sketch of what that lookup boils down to follows after this list). I try to remember that version number.
- I press a different key, the mail is turned into a patch, and the patch is then applied (or attempted) to each branch of the currently active stable trees using quilt. That tells me about fuzz, failures, and other things, and lets me resolve failures if I want to, one branch at a time (I have to manually continue after each attempt, so that I can cancel the whole thing if it stops applying).
- If the patch didn't apply all the way back, I go to a different terminal window and run 'bad_stable GIT_ID', with GIT_ID being the id of the original commit I had selected in the email. The script then asks me which tree(s) to report the failure for, and sends the FAILED email off.
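For flavour, the "how far back" lookup conceptually boils down to something like this (not my actual script, which leans on the local database, and it glosses over abbreviated or malformed Fixes: tags):

#!/bin/sh
# Conceptual sketch: take a mainline commit, dig out its Fixes: tag, and
# report the first release tag that contains the buggy commit, i.e. how
# far back the fix needs to be applied.
commit=$1
fixes=$(git log -1 --format=%B "$commit" |
	sed -n 's/^[Ff]ixes: \([0-9a-f]\{8,\}\).*/\1/p' | head -n1)
[ -n "$fixes" ] || { echo "no Fixes: tag"; exit 1; }
git describe --contains --match 'v*' "$fixes"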
Notice the "I try to remember how far back" stage. Normally that works just fine. Sometimes it doesn't. This time it didn't. Overall my hit rate has been pretty good over the past 20+ years of doing this. Or maybe no one is really paying attention and it's way worse, hard to tell...
OK, I can see how mistakes would creep in. I do pay pretty close attention, and I think (I'd need to dig through emails) I've caught missing patches once or twice before. Which isn't too bad. But the manual parts of the above do mean that I need to check whether things fell through the cracks. I naively assumed that if a patch has cc: stable and either a version marker or a Fixes tag, it'd be guaranteed that either a) it gets applied, or b) I get a FAILED email. Nothing in between.
And yes, I've tried to make the "send the FAILED email" step happen directly from the failure to apply, but that runs into a few problems. There's the question of "did it really want to go that far back?" (some patches have no Fixes: tag, and sometimes the Fixes tag is simply wrong; I hit that just a few minutes ago with a drm patch). There are also messy terminal/focus issues when running interactive scripts from within a mutt process, which sometimes force me to spawn separate windows to work around, and by then I've lost the original email because it was piped from mutt. It's a mess, so I stick to this process.
I can process stable patches pretty fast now; I'm only rate-limited by my test builds, not the "apply from email" dance. And the failure rate is generally pretty low, with the exception of the -rc1 DRM subsystem merge nightmare, but that's another issue...
Anyway, sorry for the wall of text that you weren't looking for :)
Actually, this is really appreciated; it helps me manage my expectations and figure out what I can do better on my side. Do I think your setup needs improving? Definitely! But like you said, the failure rate is pretty darn low. I mostly think it needs improving so there isn't so much manual work on your side, and that means better automation for this. That would then also close (or at least narrow) the gaps where it can go wrong, and reliably give me either success or failure, not a missed backport. Which is then less work for me, too :-)