On Fri, Jul 16, 2021 at 04:04:18PM +0100, Christoph Hellwig wrote:
> On Fri, Jul 16, 2021 at 04:00:32PM +0100, Matthew Wilcox (Oracle) wrote:
> > Inline data needs to be flushed from the kernel's view of a page
> > before it's mapped by userspace.
> > 
> > Cc: stable@vger.kernel.org
> > Fixes: 19e0c58f6552 ("iomap: generic inline data handling")
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > ---
> >  fs/iomap/buffered-io.c | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index 41da4f14c00b..fe60c603f4ca 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -222,6 +222,7 @@ iomap_read_inline_data(struct inode *inode, struct page *page,
> >  	memcpy(addr, iomap->inline_data, size);
> >  	memset(addr + size, 0, PAGE_SIZE - size);
> >  	kunmap_atomic(addr);
> > +	flush_dcache_page(page);
> 
> .. and all writes into a kmap also need such a flush, so this needs
> to move a line up.
> 
> My plan was to add a memcpy_to_page_and_pad helper ala memcpy_to_page
> to get various file systems and drivers out of the business of cache
> flushing as much as we can.
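
[The memcpy_to_page_and_pad helper named above did not exist at the
time of this mail; the name is Christoph's, everything else below is
an assumption. This is only a sketch of what such a helper might look
like, modeled on the existing memcpy_to_page() in
include/linux/highmem.h, with the flush done before the unmap per the
ordering Christoph argues for.]

#include <linux/highmem.h>	/* kmap_local_page(), flush_dcache_page() */
#include <linux/mm.h>		/* VM_BUG_ON() */
#include <linux/string.h>	/* memcpy(), memset() */

/*
 * HYPOTHETICAL sketch, not kernel source: copy @len bytes into @page
 * at @offset, zero-pad the remainder of the page, and handle the
 * dcache flush internally so callers don't have to.
 */
static inline void memcpy_to_page_and_pad(struct page *page,
		size_t offset, const char *from, size_t len)
{
	char *to = kmap_local_page(page);

	VM_BUG_ON(offset + len > PAGE_SIZE);
	memcpy(to + offset, from, len);
	memset(to + offset + len, 0, PAGE_SIZE - offset - len);
	flush_dcache_page(page);	/* flush while the mapping is live */
	kunmap_local(to);
}

A caller like iomap_read_inline_data() could then replace its
open-coded kmap/memcpy/memset/flush sequence with a single call.
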
hm? It's absolutely allowed to flush the page after calling kunmap. Look at zero_user_segments(), for example.
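
[For reference, the pattern Matthew points at: zero_user_segments()
writes through a kmap, drops the mapping, and only then flushes. The
following is condensed from include/linux/highmem.h of that era
(~v5.13); minor details may differ from the exact source.]

static inline void zero_user_segments(struct page *page,
		unsigned start1, unsigned end1,
		unsigned start2, unsigned end2)
{
	void *kaddr = kmap_atomic(page);

	BUG_ON(end1 > PAGE_SIZE || end2 > PAGE_SIZE);

	if (end1 > start1)
		memset(kaddr + start1, 0, end1 - start1);
	if (end2 > start2)
		memset(kaddr + start2, 0, end2 - start2);

	kunmap_atomic(kaddr);
	/* flush after kunmap: flush_dcache_page() operates on the
	 * struct page, not on the transient mapping address */
	flush_dcache_page(page);
}

Because flush_dcache_page() takes a struct page rather than a virtual
address, it can legitimately run after the temporary mapping is gone,
which is the basis of Matthew's objection.
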