We are accessing the start and len fields in em after it has been freed.

This patch moves the line that reads those values before free_extent_map(em), so we no longer access freed memory.
Reported-by: syzbot+853d80cba98ce1157ae6@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
Signed-off-by: Pei Li <peili.dev@gmail.com>
---
Syzbot reported the following error:

BUG: KASAN: slab-use-after-free in add_ra_bio_pages.constprop.0.isra.0+0xf03/0xfb0 fs/btrfs/compression.c:529
This is because we read values from em right after freeing it through free_extent_map(em).

This patch moves the offending read before the free, so we no longer access freed memory.
Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible")
---
Changes in v2:
- Adopt Qu's suggestion and move the read-after-free line before the free
- Cc the stable kernel list
- Link to v1: https://lore.kernel.org/r/20240710-bug11-v1-1-aa02297fbbc9@gmail.com
---
 fs/btrfs/compression.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 6441e47d8a5e..f271df10ef1c 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -514,6 +514,8 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 			put_page(page);
 			break;
 		}
+		add_size = min(em->start + em->len, page_end + 1) - cur;
+
 		free_extent_map(em);
 
 		if (page->index == end_index) {
@@ -526,7 +528,6 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 			}
 		}
 
-		add_size = min(em->start + em->len, page_end + 1) - cur;
 		ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
 		if (ret != add_size) {
 			unlock_extent(tree, cur, page_end, NULL);
---
base-commit: 563a50672d8a86ec4b114a4a2f44d6e7ff855f5b
change-id: 20240710-bug11-a8ac18afb724
Best regards,
Hi,
Thanks for your patch.
FYI: kernel test robot notices the stable kernel rule is not satisfied.
The check is based on https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html#opti...
Rule: add the tag "Cc: stable@vger.kernel.org" in the sign-off area to have the patch automatically included in the stable tree.
Subject: [PATCH v2] btrfs: Fix slab-use-after-free Read in add_ra_bio_pages
Link: https://lore.kernel.org/stable/20240710-bug11-v2-1-e7bc61f32e5d%40gmail.com
On 2024/7/11 13:59, Pei Li wrote:
We are accessing the start and len fields in em after it has been freed.

This patch moves the line that reads those values before free_extent_map(em), so we no longer access freed memory.
Reported-by: syzbot+853d80cba98ce1157ae6@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
Signed-off-by: Pei Li <peili.dev@gmail.com>

Syzbot reported the following error:
Syzbot reported the following error: BUG: KASAN: slab-use-after-free in add_ra_bio_pages.constprop.0.isra.0+0xf03/0xfb0 fs/btrfs/compression.c:529
This is because we read values from em right after freeing it through free_extent_map(em).

This patch moves the offending read before the free, so we no longer access freed memory.
Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible")
This Fixes tag, along with the syzbot report, should be in the main commit message, not after the "---" line; everything after that line is discarded when the patch is applied.
Changes in v2:
- Adopt Qu's suggestion and move the read-after-free line before the free
- Cc stable kernel
It's not just Cc'ing the stable list; the tag should also carry a version annotation indicating the oldest affected kernel.
For all the proper tags usage, you can check this commit, it has all the correct tags.
b2a616676839 ("btrfs: fix rw device counting in __btrfs_free_extra_devids")
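For reference, a complete sign-off area following the rules above would look like the block below (tags taken from this thread; the stable version annotation is a placeholder, as the oldest affected kernel is not stated in this thread):

```
Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible")
Cc: stable@vger.kernel.org # <oldest affected version>+
Reported-by: syzbot+853d80cba98ce1157ae6@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Pei Li <peili.dev@gmail.com>
```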
Otherwise the code looks good to me.
Reviewed-by: Qu Wenruo <wqu@suse.com>
Thanks,
Qu
 fs/btrfs/compression.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 6441e47d8a5e..f271df10ef1c 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -514,6 +514,8 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 			put_page(page);
 			break;
 		}
+		add_size = min(em->start + em->len, page_end + 1) - cur;
+
 		free_extent_map(em);
 
 		if (page->index == end_index) {
@@ -526,7 +528,6 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 			}
 		}
 
-		add_size = min(em->start + em->len, page_end + 1) - cur;
 		ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
 		if (ret != add_size) {
 			unlock_extent(tree, cur, page_end, NULL);
base-commit: 563a50672d8a86ec4b114a4a2f44d6e7ff855f5b
change-id: 20240710-bug11-a8ac18afb724
Best regards,
On Thu, Jul 11, 2024 at 5:29 AM Pei Li <peili.dev@gmail.com> wrote:
We are accessing the start and len fields in em after it has been freed.

This patch moves the line that reads those values before free_extent_map(em), so we no longer access freed memory.
Reported-by: syzbot+853d80cba98ce1157ae6@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
Signed-off-by: Pei Li <peili.dev@gmail.com>
Syzbot reported the following error: BUG: KASAN: slab-use-after-free in add_ra_bio_pages.constprop.0.isra.0+0xf03/0xfb0 fs/btrfs/compression.c:529
This is because we read values from em right after freeing it through free_extent_map(em).

This patch moves the offending read before the free, so we no longer access freed memory.
Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible")
This type of useful information should be in the changelog, not after the "---" line.
And btw, this was already fixed last week and it's in for-next:
https://github.com/btrfs/linux/commit/aaa2c8b3f54e7b4f31616fd03bb302cc17cccf... https://lore.kernel.org/linux-btrfs/20240704171031.GX21023@twin.jikos.cz/T/#...
Thanks.
Changes in v2:
- Adopt Qu's suggestion and move the read-after-free line before the free
- Cc stable kernel
- Link to v1: https://lore.kernel.org/r/20240710-bug11-v1-1-aa02297fbbc9@gmail.com
 fs/btrfs/compression.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 6441e47d8a5e..f271df10ef1c 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -514,6 +514,8 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 			put_page(page);
 			break;
 		}
+		add_size = min(em->start + em->len, page_end + 1) - cur;
+
 		free_extent_map(em);
 
 		if (page->index == end_index) {
@@ -526,7 +528,6 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 			}
 		}
 
-		add_size = min(em->start + em->len, page_end + 1) - cur;
 		ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
 		if (ret != add_size) {
 			unlock_extent(tree, cur, page_end, NULL);
base-commit: 563a50672d8a86ec4b114a4a2f44d6e7ff855f5b
change-id: 20240710-bug11-a8ac18afb724
Best regards,
Pei Li <peili.dev@gmail.com>