On Mon, 2024-07-29 at 17:35 +0200, Max Kellermann wrote:
> On Mon, Jul 29, 2024 at 2:56 PM Jeff Layton <jlayton@kernel.org> wrote:
> > Either way, you can add this to both patches:
> >
> > Reviewed-by: Jeff Layton <jlayton@kernel.org>
> Stop the merge :-)
>
> I just found that my patch introduces another lockup; copy_file_range
> locks up this way:
>
>  [<0>] folio_wait_private_2+0xd9/0x140
>  [<0>] ceph_write_begin+0x56/0x90
>  [<0>] generic_perform_write+0xc0/0x210
>  [<0>] ceph_write_iter+0x4e2/0x650
>  [<0>] iter_file_splice_write+0x30d/0x550
>  [<0>] splice_file_range_actor+0x2c/0x40
>  [<0>] splice_direct_to_actor+0xee/0x270
>  [<0>] splice_file_range+0x80/0xc0
>  [<0>] ceph_copy_file_range+0xbb/0x5b0
>  [<0>] vfs_copy_file_range+0x33e/0x5d0
>  [<0>] __x64_sys_copy_file_range+0xf7/0x200
>  [<0>] do_syscall_64+0x64/0x100
>  [<0>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
>
> Turns out that there are still private_2 users left in both fs/ceph and
> fs/netfs. My patches fix one problem, but cause another. Too bad!
>
> This leaves me confused again: how shall I fix this? Can all
> folio_wait_private_2() calls simply be removed? This looks like some
> refactoring gone wrong, and some parts don't make sense (e.g. both
> netfs and ceph claiming ownership of the folio private pointer). I
> could try to clean up the mess, but I need to know how this is meant
> to work. David, can you enlighten me?
>
> Max
I suspect the folio_wait_private_2 call in ceph_write_begin should also have been removed in commit ae678317b95, and it just got missed somehow in the original patch. All of the other call sites that did anything with private_2 were removed there.
David, can you confirm that?
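If that is indeed the leftover, the fix would presumably be a one-line removal. An untested sketch of what that might look like — the context lines here are illustrative from memory and may not match the current fs/ceph/addr.c exactly:

```diff
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ static int ceph_write_begin(...)
 	if (r < 0)
 		return r;
 
-	folio_wait_private_2(folio);
 	*pagep = folio_page(folio, 0);
 	return 0;
```

That would leave the folio's private-2 state entirely to netfs, consistent with the rest of the cleanup in ae678317b95 — assuming David confirms nothing in the ceph write path still sets that bit.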