Inline data needs to be flushed from the kernel's view of a page before it's mapped by userspace.
Cc: stable@vger.kernel.org
Fixes: 19e0c58f6552 ("iomap: generic inline data handling")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 41da4f14c00b..fe60c603f4ca 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -222,6 +222,7 @@ iomap_read_inline_data(struct inode *inode, struct page *page,
 	memcpy(addr, iomap->inline_data, size);
 	memset(addr + size, 0, PAGE_SIZE - size);
 	kunmap_atomic(addr);
+	flush_dcache_page(page);
 	SetPageUptodate(page);
 }
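For reference, iomap_read_inline_data() with this patch applied reads
roughly as follows. This is a reconstruction from the hunk above; the
lines outside the hunk are assumed from the contemporary
fs/iomap/buffered-io.c:

static void
iomap_read_inline_data(struct inode *inode, struct page *page,
		struct iomap *iomap)
{
	size_t size = i_size_read(inode);
	void *addr;

	if (PageUptodate(page))
		return;

	BUG_ON(page->index);
	BUG_ON(size > PAGE_SIZE - offset_in_page(iomap->inline_data));

	addr = kmap_atomic(page);
	memcpy(addr, iomap->inline_data, size);
	memset(addr + size, 0, PAGE_SIZE - size);
	kunmap_atomic(addr);
	flush_dcache_page(page);	/* the line this patch adds */
	SetPageUptodate(page);
}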
On Fri, Jul 16, 2021 at 04:00:32PM +0100, Matthew Wilcox (Oracle) wrote:
> Inline data needs to be flushed from the kernel's view of a page
> before it's mapped by userspace.
>
> Cc: stable@vger.kernel.org
> Fixes: 19e0c58f6552 ("iomap: generic inline data handling")
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  fs/iomap/buffered-io.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 41da4f14c00b..fe60c603f4ca 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -222,6 +222,7 @@ iomap_read_inline_data(struct inode *inode, struct page *page,
>  	memcpy(addr, iomap->inline_data, size);
>  	memset(addr + size, 0, PAGE_SIZE - size);
>  	kunmap_atomic(addr);
> +	flush_dcache_page(page);
.. and all writes into a kmap also need such a flush, so this needs
to move up a line.  My plan was to add a memcpy_to_page_and_pad helper,
a la memcpy_to_page, to get various file systems and drivers out of the
business of cache flushing as much as we can.
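A minimal sketch of what such a helper could look like, modelled on
memcpy_to_page() in include/linux/highmem.h. The name is the one
suggested above; the exact signature and the use of kmap_local_page()
here are assumptions:

/*
 * Sketch only: copy data into a page, zero-pad the remainder, and
 * flush the dcache while the kernel mapping is still live, so callers
 * never have to think about the flush ordering themselves.
 */
static inline void memcpy_to_page_and_pad(struct page *page, size_t offset,
					  const char *from, size_t len)
{
	char *to = kmap_local_page(page);

	memcpy(to + offset, from, len);
	memset(to + offset + len, 0, PAGE_SIZE - offset - len);
	flush_dcache_page(page);	/* before kunmap, per the point above */
	kunmap_local(to);
}

With such a helper, iomap_read_inline_data() would shrink to a single
call and the flush ordering would be decided in exactly one place.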
On Fri, Jul 16, 2021 at 04:04:18PM +0100, Christoph Hellwig wrote:
> On Fri, Jul 16, 2021 at 04:00:32PM +0100, Matthew Wilcox (Oracle) wrote:
> > Inline data needs to be flushed from the kernel's view of a page
> > before it's mapped by userspace.
> >
> > Cc: stable@vger.kernel.org
> > Fixes: 19e0c58f6552 ("iomap: generic inline data handling")
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > ---
> >  fs/iomap/buffered-io.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index 41da4f14c00b..fe60c603f4ca 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -222,6 +222,7 @@ iomap_read_inline_data(struct inode *inode, struct page *page,
> >  	memcpy(addr, iomap->inline_data, size);
> >  	memset(addr + size, 0, PAGE_SIZE - size);
> >  	kunmap_atomic(addr);
> > +	flush_dcache_page(page);
>
> .. and all writes into a kmap also need such a flush, so this needs
> to move up a line.  My plan was to add a memcpy_to_page_and_pad helper,
> a la memcpy_to_page, to get various file systems and drivers out of the
> business of cache flushing as much as we can.
hm? It's absolutely allowed to flush the page after calling kunmap. Look at zero_user_segments(), for example.
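For comparison, zero_user_segments() in include/linux/highmem.h does
roughly the following (paraphrased from the helper as it stood at the
time, so treat the details as approximate); note the flush_dcache_page()
after kunmap_atomic():

static inline void zero_user_segments(struct page *page,
		unsigned start1, unsigned end1,
		unsigned start2, unsigned end2)
{
	void *kaddr = kmap_atomic(page);

	BUG_ON(end1 > PAGE_SIZE || end2 > PAGE_SIZE);

	if (end1 > start1)
		memset(kaddr + start1, 0, end1 - start1);

	if (end2 > start2)
		memset(kaddr + start2, 0, end2 - start2);

	kunmap_atomic(kaddr);
	flush_dcache_page(page);	/* flush after the unmap */
}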
On Fri, Jul 16, 2021 at 06:28:10PM +0100, Matthew Wilcox wrote:
> > >  	memcpy(addr, iomap->inline_data, size);
> > >  	memset(addr + size, 0, PAGE_SIZE - size);
> > >  	kunmap_atomic(addr);
> > > +	flush_dcache_page(page);
> >
> > .. and all writes into a kmap also need such a flush, so this needs
> > to move up a line.  My plan was to add a memcpy_to_page_and_pad helper,
> > a la memcpy_to_page, to get various file systems and drivers out of the
> > business of cache flushing as much as we can.
>
> hm?  It's absolutely allowed to flush the page after calling kunmap.
> Look at zero_user_segments(), for example.
Documentation/core-api/cachetlb.rst states that any user page obtained
using kmap needs a flush_kernel_dcache_page after modification (and
before the kunmap).  flush_dcache_page is a strict superset of
flush_kernel_dcache_page.

That being said, flushing after kmap updates is a complete mess.  arm,
probably the poster child for dcache-challenged plus highmem
architectures, always flushed caches from kunmap, and arc has a
flush_dcache_page that doesn't work at all on a highmem page that is
not kmapped (where kmap_atomic and kmap_local_page don't count as
kmapped, as they don't set page->virtual).
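Concretely, the ordering cachetlb.rst prescribes for writing through a
kmap looks roughly like this (a sketch; src and len are placeholders):

	void *addr = kmap_atomic(page);

	memcpy(addr, src, len);
	flush_kernel_dcache_page(page);	/* flush before dropping the kmap */
	kunmap_atomic(addr);

which is exactly the opposite order from the zero_user_segments()
pattern above.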
On Mon, Jul 19, 2021 at 09:39:17AM +0100, Christoph Hellwig wrote:
> On Fri, Jul 16, 2021 at 06:28:10PM +0100, Matthew Wilcox wrote:
> > > >  	memcpy(addr, iomap->inline_data, size);
> > > >  	memset(addr + size, 0, PAGE_SIZE - size);
> > > >  	kunmap_atomic(addr);
> > > > +	flush_dcache_page(page);
> > >
> > > .. and all writes into a kmap also need such a flush, so this needs
> > > to move up a line.  My plan was to add a memcpy_to_page_and_pad helper,
> > > a la memcpy_to_page, to get various file systems and drivers out of the
> > > business of cache flushing as much as we can.
> >
> > hm?  It's absolutely allowed to flush the page after calling kunmap.
> > Look at zero_user_segments(), for example.
>
> Documentation/core-api/cachetlb.rst states that any user page obtained
> using kmap needs a flush_kernel_dcache_page after modification (and
> before the kunmap).  flush_dcache_page is a strict superset of
> flush_kernel_dcache_page.
Looks like (the other) Christoph broke this in 2008 with commit eebd2aa35569 ("Pagecache zeroing: zero_user_segment, zero_user_segments and zero_user"):
It has one line about it in the changelog:
    Also extract the flushing of the caches to be outside of the kmap.
... which doesn't even attempt to justify why it's safe to do so.
-	memset((char *)kaddr + (offset), 0, (size)); \
-	flush_dcache_page(page); \
-	kunmap_atomic(kaddr, (km_type)); \
+	kunmap_atomic(kaddr, KM_USER0);
+	flush_dcache_page(page);
Looks like it came from https://lore.kernel.org/lkml/20070911060425.472862098@sgi.com/ but there was no discussion of this ... plenty of discussion about other conceptual problems with the entire patchset.
> That being said, flushing after kmap updates is a complete mess.  arm,
> probably the poster child for dcache-challenged plus highmem
> architectures, always flushed caches from kunmap, and arc has a
> flush_dcache_page that doesn't work at all on a highmem page that is
> not kmapped (where kmap_atomic and kmap_local_page don't count as
> kmapped, as they don't set page->virtual).
On Fri, Jul 16, 2021 at 04:00:32PM +0100, Matthew Wilcox (Oracle) wrote:
> Inline data needs to be flushed from the kernel's view of a page
> before it's mapped by userspace.
>
> Cc: stable@vger.kernel.org
> Fixes: 19e0c58f6552 ("iomap: generic inline data handling")
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>
(will update on my side as well)

Thanks,
Gao Xiang
> ---
>  fs/iomap/buffered-io.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 41da4f14c00b..fe60c603f4ca 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -222,6 +222,7 @@ iomap_read_inline_data(struct inode *inode, struct page *page,
>  	memcpy(addr, iomap->inline_data, size);
>  	memset(addr + size, 0, PAGE_SIZE - size);
>  	kunmap_atomic(addr);
> +	flush_dcache_page(page);
>  	SetPageUptodate(page);
>  }
> --
> 2.30.2