On Fri, Aug 23, 2024 at 09:07:30PM +0800, Julian Sun wrote:
Hi, all
Recently I noticed a bug[1] in btrfs; after digging into it, I believe it is a race in the VFS.
Let's assume there is an inode (e.g. ino 261) with i_count 1 that has iput() called on it, while a concurrent thread is calling generic_shutdown_super().
cpu0:                                        cpu1:
iput() // i_count is 1
  ->spin_lock(inode)
  ->dec i_count to 0
  ->iput_final()                             generic_shutdown_super()
    ->__inode_add_lru()                        ->evict_inodes()
      // for some reason[2], returned            ->if (atomic_read(inode->i_count))
      // before list_lru_add_obj()                     continue;
                                                 // inode 261 passed the above check
                                                 // and then scheduled out
  ->spin_unlock()
// note here: inode 261 was still on the sb list and the hash list,
// and I_FREEING|I_WILL_FREE had not been set

btrfs_iget()
  // after some function calls
  ->find_inode()
    // found the above inode 261
    ->spin_lock(inode)
    // checked I_FREEING|I_WILL_FREE
    // and passed
    ->__iget()
    ->spin_unlock(inode)                         // scheduled back
                                                 ->spin_lock(inode)
                                                 // checked (I_NEW|I_FREEING|I_WILL_FREE)
                                                 // flags, passed, and set I_FREEING
iput()                                           ->spin_unlock(inode)
  ->spin_lock(inode)                             ->evict()
    // dec i_count to 0
    ->iput_final()
  ->spin_unlock()
  ->evict()
Now we have two threads simultaneously evicting the same inode, which may trigger the BUG(inode->i_state & I_CLEAR) statement in both clear_inode() and iput().
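For context, the loop in evict_inodes() before this patch looks roughly like the sketch below (abridged; the dispose list and resched handling are omitted, and details may differ slightly from the tree at hand), with comments marking where the window opens:

/* Abridged sketch of the pre-patch evict_inodes() loop. */
void evict_inodes(struct super_block *sb)
{
	struct inode *inode, *next;

	spin_lock(&sb->s_inode_list_lock);
	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
		/* Unlocked check: inode 261 passes here with i_count == 0. */
		if (atomic_read(&inode->i_count))
			continue;

		spin_lock(&inode->i_lock);
		/*
		 * Window: between the unlocked check above and this point,
		 * find_inode() + __iget() on another CPU can raise i_count
		 * from 0 to 1, since I_FREEING|I_WILL_FREE are not set yet.
		 * i_count is never re-checked under i_lock.
		 */
		if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
			spin_unlock(&inode->i_lock);
			continue;
		}

		inode->i_state |= I_FREEING;
		inode_lru_list_del(inode);
		spin_unlock(&inode->i_lock);
		/* ... inode queued for dispose_list() -> evict() ... */
	}
	spin_unlock(&sb->s_inode_list_lock);
}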
To fix the bug, re-check inode->i_count after i_lock has been taken. The existing unlocked check is kept as a fast path: in most scenarios it is sufficient, so the overhead of taking the spin lock unnecessarily is avoided.
If there is any misunderstanding, please let me know, thanks.
[2]: The reason might be that 1. SB_ACTIVE was removed, or 2. mapping_shrinkable() returned false, when I reproduced the bug.
Reported-by: syzbot+67ba3c42bcbb4665d3ad@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=67ba3c42bcbb4665d3ad
CC: stable@vger.kernel.org
Fixes: 63997e98a3be ("split invalidate_inodes()")
Signed-off-by: Julian Sun <sunjunchao2870@gmail.com>
---
 fs/inode.c | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/fs/inode.c b/fs/inode.c
index 3a41f83a4ba5..011f630777d0 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -723,6 +723,10 @@ void evict_inodes(struct super_block *sb)
 			continue;
 
 		spin_lock(&inode->i_lock);
+		if (atomic_read(&inode->i_count)) {
+			spin_unlock(&inode->i_lock);
+			continue;
+		}
 		if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
 			spin_unlock(&inode->i_lock);
 			continue;
This looks correct to me, albeit I would argue the commit message is overly verbose, making it harder to understand the gist of the problem: evict_inodes() fails to re-check i_count after acquiring the spin lock, while the flags blocking 0->1 i_count transitions are not set yet, making it possible to race against such a transition.
The real remark I have here is that evict_inodes(), modulo the bug, is identical to invalidate_inodes(). Perhaps a separate patch (*not* for stable) to whack it would be prudent?
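For reference, invalidate_inodes() looks roughly like the sketch below (abridged from fs/inode.c; minor details may differ). Note it already reads i_count under i_lock, which is exactly what the fix adds to evict_inodes():

/* Abridged sketch of invalidate_inodes(). */
void invalidate_inodes(struct super_block *sb)
{
	struct inode *inode, *next;
	LIST_HEAD(dispose);

	spin_lock(&sb->s_inode_list_lock);
	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
		spin_lock(&inode->i_lock);
		if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
			spin_unlock(&inode->i_lock);
			continue;
		}
		/* i_count is read with i_lock held, so the 0->1 window
		 * that bites evict_inodes() does not exist here. */
		if (atomic_read(&inode->i_count)) {
			spin_unlock(&inode->i_lock);
			continue;
		}

		inode->i_state |= I_FREEING;
		inode_lru_list_del(inode);
		spin_unlock(&inode->i_lock);
		list_add(&inode->i_lru, &dispose);
	}
	spin_unlock(&sb->s_inode_list_lock);

	dispose_list(&dispose);
}

With the fix applied, the two loop bodies differ only in the order of the locked checks and the unlocked i_count fast path, so folding them together should be mechanical.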