On 2025/5/23 07:31, Sasha Levin wrote:
This is a note to let you know that I've just added the patch titled
btrfs: prevent inline data extents read from touching blocks beyond its range
to the 6.12-stable tree which can be found at: http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=su...
The filename of the patch is: btrfs-prevent-inline-data-extents-read-from-touching.patch and it can be found in the queue-6.12 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree, please let stable@vger.kernel.org know about it.
Please drop this one from all stable trees.
Although the patch won't cause any behavior change, the main reason for this patch is to prepare for the subpage optimization (and future large folios support).
Thanks, Qu
commit 98504dd74a2688ff63dba6bf1d9f8abc7f0b322e
Author: Qu Wenruo <wqu@suse.com>
Date:   Fri Nov 15 19:15:34 2024 +1030

    btrfs: prevent inline data extents read from touching blocks beyond its range

    [ Upstream commit 1a5b5668d711d3d1ef447446beab920826decec3 ]

    Currently reading an inline data extent will zero out the remaining
    range in the page. This is not yet causing problems even for block
    size < page size (subpage) cases because:

    1) An inline data extent always starts at file offset 0
       Meaning at page read, we always read the inline extent first,
       before any other blocks in the page. Then later blocks are
       properly read out and re-fill the zeroed out ranges.

    2) Currently btrfs will read out the whole page if a buffered write
       is not page aligned
       So a page is either fully uptodate at buffered write time (covers
       the whole page), or we will read out the whole page first.
       Meaning there is nothing to lose for such an inline extent read.

    But it's still not ideal:

    - We're zeroing out the page twice
      Once done by read_inline_extent()/uncompress_inline(), once done
      by btrfs_do_readpage() for ranges beyond i_size.

    - We're touching blocks that don't belong to the inline extent
      In the incoming patches, we can have a partial uptodate folio, of
      which some dirty blocks can exist while the page is not fully
      uptodate:

      The page size is 16K and block size is 4K:

      0         4K        8K        12K        16K
      |         |         |/////////|          |

      And range [8K, 12K) is dirtied by a buffered write, the remaining
      blocks are not uptodate.

      If range [0, 4K) contains an inline data extent, and we try to
      read the whole page, the current behavior will overwrite range
      [8K, 12K) with zero and cause data loss.

    So to make the behavior more consistent and in preparation for
    future changes, limit the inline data extents read to only zero out
    the range inside the first block, not the whole page.

    Reviewed-by: Filipe Manana <fdmanana@suse.com>
    Signed-off-by: Qu Wenruo <wqu@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 0da2611fb9c85..ee8c18d298758 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6825,6 +6825,7 @@ static noinline int uncompress_inline(struct btrfs_path *path,
 {
 	int ret;
 	struct extent_buffer *leaf = path->nodes[0];
+	const u32 blocksize = leaf->fs_info->sectorsize;
 	char *tmp;
 	size_t max_size;
 	unsigned long inline_size;
@@ -6841,7 +6842,7 @@ static noinline int uncompress_inline(struct btrfs_path *path,
 
 	read_extent_buffer(leaf, tmp, ptr, inline_size);
 
-	max_size = min_t(unsigned long, PAGE_SIZE, max_size);
+	max_size = min_t(unsigned long, blocksize, max_size);
 	ret = btrfs_decompress(compress_type, tmp, folio, 0, inline_size,
 			       max_size);
@@ -6853,8 +6854,8 @@ static noinline int uncompress_inline(struct btrfs_path *path,
 	 * cover that region here.
 	 */
-	if (max_size < PAGE_SIZE)
-		folio_zero_range(folio, max_size, PAGE_SIZE - max_size);
+	if (max_size < blocksize)
+		folio_zero_range(folio, max_size, blocksize - max_size);
 	kfree(tmp);
 	return ret;
 }
@@ -6862,6 +6863,7 @@ static noinline int uncompress_inline(struct btrfs_path *path,
 static int read_inline_extent(struct btrfs_inode *inode, struct btrfs_path *path,
 			      struct folio *folio)
 {
+	const u32 blocksize = path->nodes[0]->fs_info->sectorsize;
 	struct btrfs_file_extent_item *fi;
 	void *kaddr;
 	size_t copy_size;
@@ -6876,14 +6878,14 @@ static int read_inline_extent(struct btrfs_inode *inode, struct btrfs_path *path
 	if (btrfs_file_extent_compression(path->nodes[0], fi) != BTRFS_COMPRESS_NONE)
 		return uncompress_inline(path, folio, fi);
 
-	copy_size = min_t(u64, PAGE_SIZE,
+	copy_size = min_t(u64, blocksize,
 			  btrfs_file_extent_ram_bytes(path->nodes[0], fi));
 	kaddr = kmap_local_folio(folio, 0);
 	read_extent_buffer(path->nodes[0], kaddr,
 			   btrfs_file_extent_inline_start(fi), copy_size);
 	kunmap_local(kaddr);
-	if (copy_size < PAGE_SIZE)
-		folio_zero_range(folio, copy_size, PAGE_SIZE - copy_size);
+	if (copy_size < blocksize)
+		folio_zero_range(folio, copy_size, blocksize - copy_size);
 	return 0;
 }