The memcpy() will unconditionally copy PAGE_SIZE bytes, which far exceeds the length of the array (96 bytes) that it's copying from. You can't see the results using read() because it'll be limited by i_size (which is less than 96 bytes), but if you mmap the file, you can load the bytes in the page that lie beyond i_size. We need to zero the tail of the page before marking it uptodate.
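As an aside, the leak is easy to demonstrate from userspace. Below is a minimal sketch, assuming a small immediate-data file at a hypothetical path on a mounted vxfs image: read() stops at i_size, but mmap() exposes the whole page, including the un-zeroed tail.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	int fd = open("/mnt/vxfs/small-file", O_RDONLY);	/* hypothetical path */
	struct stat st;
	unsigned char *p;

	if (fd < 0 || fstat(fd, &st) < 0) {
		perror("open/fstat");
		return EXIT_FAILURE;
	}

	p = mmap(NULL, pagesize, PROT_READ, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/* Bytes in [i_size, PAGE_SIZE) should read back as zero; anything
	 * else is data leaked from beyond the 96-byte immediate area. */
	for (long i = st.st_size; i < pagesize; i++)
		if (p[i] != 0)
			printf("non-zero byte 0x%02x at offset %ld\n", p[i], i);

	munmap(p, pagesize);
	close(fd);
	return 0;
}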
Cc: stable@vger.kernel.org
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") # actually v2.4.4.4
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/freevxfs/vxfs_immed.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/fs/freevxfs/vxfs_immed.c b/fs/freevxfs/vxfs_immed.c
index 9b49ec36e667..c49612a24c18 100644
--- a/fs/freevxfs/vxfs_immed.c
+++ b/fs/freevxfs/vxfs_immed.c
@@ -30,15 +30,12 @@
  */
 static int vxfs_immed_read_folio(struct file *fp, struct folio *folio)
 {
-	struct vxfs_inode_info *vip = VXFS_INO(folio->mapping->host);
-	void *src = vip->vii_immed.vi_immed + folio_pos(folio);
-	unsigned long i;
-
-	for (i = 0; i < folio_nr_pages(folio); i++) {
-		memcpy_to_page(folio_page(folio, i), 0, src, PAGE_SIZE);
-		src += PAGE_SIZE;
-	}
+	struct inode *inode = folio->mapping->host;
+	struct vxfs_inode_info *vip = VXFS_INO(inode);
+	loff_t isize = i_size_read(inode);
 
+	memcpy_to_file_folio(folio, 0, vip->vii_immed.vi_immed, isize);
+	folio_zero_segment(folio, isize, folio_size(folio));
 	folio_mark_uptodate(folio);
 	folio_unlock(folio);
On Wed, Mar 1, 2023 at 9:10 AM Matthew Wilcox (Oracle) <willy@infradead.org> wrote:

> +	memcpy_to_file_folio(folio, 0, vip->vii_immed.vi_immed, isize);
Well, this function doesn't exist upstream yet, much less in any stable kernels..
In fact, I can't find any sign of that function *anywhere*. Searching for it on lkml finds zero hits, as does linux-next.
Is it something hidden in your personal tree, or is my grep failing because the function generation is behind some macro magic?
And while I'm on this subject: the "memcpy_from_file_folio()" interface is horrendously broken. For the highmem case, it shouldn't be an inline function, and it should loop over pages - instead of leaving the callers having to do that.
Of course, callers don't actually do that (since there are no callers - unless I'm again missing it due to some macro games with token concatenation), but I really wish it was fixed before any callers appear.
Linus
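For context, the helper being discussed takes roughly the following shape (a sketch based on the version that later landed in include/linux/highmem.h; details may differ). The highmem clamp is the point at issue: kmap_local_folio() maps only a single page, so on HIGHMEM configurations the copy stops at the end of the current page and the short length is returned, leaving the caller to loop.

static inline size_t memcpy_from_file_folio(char *to, struct folio *folio,
		loff_t pos, size_t len)
{
	size_t offset = offset_in_folio(folio, pos);
	char *from = kmap_local_folio(folio, offset);

	if (folio_test_highmem(folio)) {
		/* Only one page is mapped: clamp to the end of it. */
		offset = offset_in_page(offset);
		len = min_t(size_t, len, PAGE_SIZE - offset);
	} else
		len = min(len, folio_size(folio) - offset);

	memcpy(to, from, len);
	kunmap_local(from);

	/* May be shorter than requested; callers must check. */
	return len;
}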
On Wed, Mar 01, 2023 at 09:26:27AM -0800, Linus Torvalds wrote:
> On Wed, Mar 1, 2023 at 9:10 AM Matthew Wilcox (Oracle) <willy@infradead.org> wrote:
> > +	memcpy_to_file_folio(folio, 0, vip->vii_immed.vi_immed, isize);
>
> Well, this function doesn't exist upstream yet, much less in any stable kernels..
Oops, see other email.
> And while I'm on this subject: the "memcpy_from_file_folio()" interface is horrendously broken. For the highmem case, it shouldn't be an inline function, and it should loop over pages - instead of leaving the callers having to do that.

> Of course, callers don't actually do that (since there are no callers - unless I'm again missing it due to some macro games with token concatenation), but I really wish it was fixed before any callers appear.
The first caller was pagecache_read() in fs/ext4/verity.c. I think that's waiting for the next merge window. It does make sense in that context -- the caller genuinely wants to loop over multiple folios because its read may cross folio boundaries. It actually reduces how much calculation the caller does:
 	while (count) {
-		size_t n = min_t(size_t, count,
-				 PAGE_SIZE - offset_in_page(pos));
-		struct page *page;
+		struct folio *folio;
+		size_t n;
 
-		page = read_mapping_page(inode->i_mapping, pos >> PAGE_SHIFT,
+		folio = read_mapping_folio(inode->i_mapping, pos >> PAGE_SHIFT,
 					 NULL);
-		if (IS_ERR(page))
-			return PTR_ERR(page);
-
-		memcpy_from_page(buf, page, offset_in_page(pos), n);
+		if (IS_ERR(folio))
+			return PTR_ERR(folio);
 
-		put_page(page);
+		n = memcpy_from_file_folio(buf, folio, pos, count);
+		folio_put(folio);
 
 		buf += n;
 		pos += n;
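Reassembled, the converted loop looks like this (a sketch put together from the hunk above, not quoted verbatim from the tree; the count -= n update and the return are context the diff doesn't show). The helper's partial-copy return value is what drives the byte loop, so a short copy at a folio boundary simply triggers another iteration:

static int pagecache_read(struct inode *inode, void *buf, size_t count,
			  loff_t pos)
{
	while (count) {
		struct folio *folio;
		size_t n;

		folio = read_mapping_folio(inode->i_mapping, pos >> PAGE_SHIFT,
					   NULL);
		if (IS_ERR(folio))
			return PTR_ERR(folio);

		/* On highmem, copies at most to the end of one page. */
		n = memcpy_from_file_folio(buf, folio, pos, count);
		folio_put(folio);

		buf += n;
		pos += n;
		count -= n;
	}
	return 0;
}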
Are there other callers which only handle a single folio and really wish that memcpy_from_file_folio() handled the loop internally? Probably. I don't have a good survey yet.
On Wed, Mar 1, 2023 at 9:38 AM Matthew Wilcox <willy@infradead.org> wrote:
> The first caller was pagecache_read() in fs/ext4/verity.c. I think that's waiting for the next merge window.
Ahh.
> It does make sense in that context -- the caller genuinely wants to loop over multiple folios because its read may cross folio boundaries.
Hmm. Can we somehow make it a lot more obvious that you really have to do that?
Perhaps in the name ("partial" somewhere or something) or even with a __must_check on the return value or similar?
Because as-is, the name really implies "copy the folio", and the natural reaction would be to loop over folios.
The ext4 code you quote doesn't loop over folios, it loops over bytes, so in that context the function does indeed make sense.
Linus
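One hypothetical way to spell out the suggestion above is "partial" in the name plus __must_check on the return value. Neither the name nor the annotation exists upstream; this is purely illustrative:

/* Hypothetical wrapper: the name advertises the partial copy, and
 * __must_check keeps callers from silently dropping the short count. */
static inline size_t __must_check
memcpy_from_file_folio_partial(char *to, struct folio *folio,
			       loff_t pos, size_t len)
{
	return memcpy_from_file_folio(to, folio, pos, len);
}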