[BUG]
There are at least two problems when run_delalloc_nocow() hits an error
and has to go through its cleanup routine:
- It doesn't clear the folio dirty flags of any successfully run range
  This breaks the regular error handling protocol for folio writeback,
  which should clear the dirty flag of the failed range (see the sketch
  after this list).
  This cleanup protocol is adopted by both iomap and btrfs (if the error
  happened at the very beginning of the whole delalloc range).
- It can start writeback/unlock folios which are already unlocked
  This is done by calling extent_clear_unlock_delalloc() with the
  PAGE_START_WRITEBACK or PAGE_UNLOCK flag.
  This will trigger the VM_BUG_ON() for folio_start_writeback(), which
  requires the folio to be locked.
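As a rough illustration of that cleanup protocol, here is a minimal sketch
using only generic folio APIs (not the btrfs subpage helpers); the function
name is hypothetical and the handling is simplified (it ignores the already
locked folio and large folios):

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Minimal sketch: for a failed delalloc range, clear the dirty flag of
 * every folio still in the page cache, cycle writeback so any waiters
 * are woken, and record the error on the mapping.
 * @start/@end are inclusive byte offsets.
 */
static void sketch_cleanup_failed_range(struct address_space *mapping,
					u64 start, u64 end, int error)
{
	pgoff_t end_index = end >> PAGE_SHIFT;

	for (pgoff_t index = start >> PAGE_SHIFT; index <= end_index; index++) {
		struct folio *folio = filemap_lock_folio(mapping, index);

		/* Already dropped from the page cache, nothing to clean up. */
		if (IS_ERR(folio))
			continue;
		folio_clear_dirty_for_io(folio);
		folio_start_writeback(folio);
		folio_end_writeback(folio);
		folio_unlock(folio);
		folio_put(folio);
	}
	mapping_set_error(mapping, error);
}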
[CAUSE]
The problem of not clearing the folio dirty flag is a common bug, shared
between cow_file_range() and run_delalloc_nocow().
We just need to clear the folio dirty flags according to the @cur_offset
cursor.
For the extent_clear_unlock_delalloc() call on unlocked folios, it is caused
by the double error handling: one from cow_file_range() (inside
fallback_to_cow()) and one from run_delalloc_nocow() itself.
[FIX]
- Clear folio dirty flags for the range [@start, @cur_offset)
  Introduce a helper, cleanup_dirty_folios(), which will find and lock the
  folios in the range, clear the dirty flag and start/end the writeback,
  with extra handling for the @locked_folio.
- Record the end of the last failed COW range
  Introduce a new variable, @cow_end, to track which range we should skip,
  to avoid double unlocking.
- Skip the failed COW range in the error handling
  (a compact sketch of the resulting error path follows this list)
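For context, a compact user-space sketch of the error-path shape after the
fix. The helpers below are hypothetical stand-ins, not btrfs functions; only
the ordering and the @cow_end skip mirror what the patch does:

#include <stdio.h>

typedef unsigned long long u64;

/* Hypothetical stand-in: only clears dirty flags of finished ranges. */
static void clear_dirty_in_range(u64 start, u64 end)
{
	printf("clear dirty flags:       [%llu, %llu]\n", start, end);
}

/* Hypothetical stand-in: unlocks folios and clears delalloc state. */
static void unlock_and_clear_delalloc(u64 start, u64 end)
{
	printf("unlock + clear delalloc: [%llu, %llu]\n", start, end);
}

/*
 * Error-path shape after the fix:
 * 1) [start, cur_offset) already finished, only clear its dirty flags;
 * 2) skip the COW range the callee already cleaned up itself;
 * 3) the remainder still holds folio locks and delalloc state.
 */
static void error_cleanup(u64 start, u64 end, u64 cur_offset, u64 cow_end)
{
	if (cur_offset > start)
		clear_dirty_in_range(start, cur_offset - 1);

	if (cow_end)
		cur_offset = cow_end + 1;

	if (cur_offset <= end)
		unlock_and_clear_delalloc(cur_offset, end);
}

int main(void)
{
	/* fallback_to_cow() failed for [4096, 8191] after [0, 4095] succeeded. */
	error_cleanup(0, 16383, 4096, 8191);
	return 0;
}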
Cc: stable@vger.kernel.org
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
Changelog:
v3:
- Fix the double error handling on the COW range
  This can lead to VM_BUG_ON() for extent_clear_unlock_delalloc(), as the
  folio is already unlocked by the error handling inside cow_file_range().
- Update the commit message to explain the bug better
- Add a comment inside the error handling explaining the error patterns
v2:
- Fix the incorrect @cur_offset assignment to @end
  The @end is not aligned to the sector size, nor should @cur_offset be
  updated before fallback_to_cow() succeeds.
- Add one extra ASSERT() to make sure the range is properly aligned
---
 fs/btrfs/inode.c | 93 ++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 86 insertions(+), 7 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9517fb2df649..069599b025a6 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1969,6 +1969,48 @@ static int can_nocow_file_extent(struct btrfs_path *path,
 	return ret < 0 ? ret : can_nocow;
 }
 
+static void cleanup_dirty_folios(struct btrfs_inode *inode,
+				 struct folio *locked_folio,
+				 u64 start, u64 end, int error)
+{
+	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+	struct address_space *mapping = inode->vfs_inode.i_mapping;
+	pgoff_t start_index = start >> PAGE_SHIFT;
+	pgoff_t end_index = end >> PAGE_SHIFT;
+	u32 len;
+
+	ASSERT(end + 1 - start < U32_MAX);
+	ASSERT(IS_ALIGNED(start, fs_info->sectorsize) &&
+	       IS_ALIGNED(end + 1, fs_info->sectorsize));
+	len = end + 1 - start;
+
+	/*
+	 * Handle the locked folio first.
+	 * btrfs_folio_clamp_*() helpers can handle range out of the folio case.
+	 */
+	btrfs_folio_clamp_clear_dirty(fs_info, locked_folio, start, len);
+	btrfs_folio_clamp_set_writeback(fs_info, locked_folio, start, len);
+	btrfs_folio_clamp_clear_writeback(fs_info, locked_folio, start, len);
+
+	for (pgoff_t index = start_index; index <= end_index; index++) {
+		struct folio *folio;
+
+		/* Already handled at the beginning. */
+		if (index == locked_folio->index)
+			continue;
+		folio = __filemap_get_folio(mapping, index, FGP_LOCK, GFP_NOFS);
+		/* Cache already dropped, no need to do any cleanup. */
+		if (IS_ERR(folio))
+			continue;
+		btrfs_folio_clamp_clear_dirty(fs_info, folio, start, len);
+		btrfs_folio_clamp_set_writeback(fs_info, folio, start, len);
+		btrfs_folio_clamp_clear_writeback(fs_info, folio, start, len);
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+	mapping_set_error(mapping, error);
+}
+
 /*
  * when nowcow writeback call back. This checks for snapshots or COW copies
  * of the extents that exist in the file, and COWs the file as required.
@@ -1984,6 +2026,11 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
 	struct btrfs_root *root = inode->root;
 	struct btrfs_path *path;
 	u64 cow_start = (u64)-1;
+	/*
+	 * If not 0, represents the inclusive end of the last fallback_to_cow()
+	 * range. Only for error handling.
+	 */
+	u64 cow_end = 0;
 	u64 cur_offset = start;
 	int ret;
 	bool check_prev = true;
@@ -2144,6 +2191,7 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
 					      found_key.offset - 1);
 			cow_start = (u64)-1;
 			if (ret) {
+				cow_end = found_key.offset - 1;
 				btrfs_dec_nocow_writers(nocow_bg);
 				goto error;
 			}
@@ -2217,11 +2265,12 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
 		cow_start = cur_offset;
 
 	if (cow_start != (u64)-1) {
-		cur_offset = end;
 		ret = fallback_to_cow(inode, locked_folio, cow_start, end);
 		cow_start = (u64)-1;
-		if (ret)
+		if (ret) {
+			cow_end = end;
 			goto error;
+		}
 	}
 
 	btrfs_free_path(path);
@@ -2229,12 +2278,42 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
 error:
 	/*
-	 * If an error happened while a COW region is outstanding, cur_offset
-	 * needs to be reset to cow_start to ensure the COW region is unlocked
-	 * as well.
+	 * There are several error cases:
+	 *
+	 * 1) Failed without falling back to COW
+	 *    start         cur_offset       end
+	 *    |/////////////|                |
+	 *
+	 *    For range [start, cur_offset) the folios are already unlocked (except
+	 *    @locked_folio), EXTENT_DELALLOC already removed.
+	 *    Only need to clear the dirty flag as they will never be submitted.
+	 *    Ordered extent and extent maps are handled by
+	 *    btrfs_mark_ordered_io_finished() inside run_delalloc_range().
+	 *
+	 * 2) Failed with error from fallback_to_cow()
+	 *    start         cur_offset  cow_end   end
+	 *    |/////////////|-----------|         |
+	 *
+	 *    For range [start, cur_offset) it's the same as case 1).
+	 *    But for range [cur_offset, cow_end), the folios have dirty flag
+	 *    cleared and unlocked, EXTENT_DELALLOC cleared.
+	 *    There may or may not be any ordered extents/extent maps allocated.
+	 *
+	 *    We should not call extent_clear_unlock_delalloc() on range
+	 *    [cur_offset, cow_end), as the folios are already unlocked.
+	 *
+	 *    So clear the folio dirty flags for [start, cur_offset) first.
 	 */
-	if (cow_start != (u64)-1)
-		cur_offset = cow_start;
+	if (cur_offset > start)
+		cleanup_dirty_folios(inode, locked_folio, start, cur_offset - 1, ret);
+
+	/*
+	 * If an error happened while a COW region is outstanding, cur_offset
+	 * needs to be reset to @cow_end + 1 to skip the COW range, as
+	 * cow_file_range() will do the proper cleanup at error.
+	 */
+	if (cow_end)
+		cur_offset = cow_end + 1;
 
 	/*
 	 * We need to lock the extent here because we're clearing DELALLOC and