On Mon, Jul 19, 2021 at 09:39:17AM +0100, Christoph Hellwig wrote:
On Fri, Jul 16, 2021 at 06:28:10PM +0100, Matthew Wilcox wrote:
	memcpy(addr, iomap->inline_data, size);
	memset(addr + size, 0, PAGE_SIZE - size);
	kunmap_atomic(addr);
- flush_dcache_page(page);
.. and all writes into a kmap also need such a flush, so this needs to move a line up. My plan was to add a memcpy_to_page_and_pad helper, à la memcpy_to_page, to get various file systems and drivers out of the business of cache flushing as much as we can.
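To make the idea concrete, here is a minimal userspace sketch of the copy-and-pad logic such a helper would wrap. The name memcpy_to_page_and_pad comes from the mail above, but the signature and body below are purely illustrative assumptions; a real kernel version would take a struct page, map it with kmap_local_page(), and call flush_dcache_page() after the memset and before unmapping.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative userspace analog of a memcpy_to_page_and_pad helper:
 * copy 'len' bytes into a buffer of 'buf_size' bytes and zero-fill
 * the tail.  In the kernel, 'buf' would come from kmap_local_page(),
 * and flush_dcache_page() would run after the memset, while the page
 * is still mapped. */
static void memcpy_to_buf_and_pad(void *buf, size_t buf_size,
                                  const void *src, size_t len)
{
	memcpy(buf, src, len);
	memset((char *)buf + len, 0, buf_size - len);
}
```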
hm? It's absolutely allowed to flush the page after calling kunmap. Look at zero_user_segments(), for example.
Documentation/core-api/cachetlb.rst states that any user page obtained using kmap needs a flush_kernel_dcache_page after modification. flush_dcache_page is a strict superset of flush_kernel_dcache_page.
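A tiny self-contained sketch of that ordering rule: modify the page while it is still mapped, flush, and only then unmap. The *_stub functions below are hypothetical userspace stand-ins that merely record call order; they are not the real kernel APIs.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for kmap_atomic() / flush_dcache_page() /
 * kunmap_atomic(): each one logs its name so the required sequence
 * (map -> modify -> flush -> unmap) can be checked. */
static const char *call_log[8];
static int n_calls;

static void *kmap_atomic_stub(void)
{
	static char fake_page[64];	/* stands in for the mapped page */
	call_log[n_calls++] = "kmap";
	return fake_page;
}

static void flush_dcache_page_stub(void) { call_log[n_calls++] = "flush"; }
static void kunmap_atomic_stub(void)     { call_log[n_calls++] = "kunmap"; }

/* Zero the tail of a "page" with the flush done before the unmap,
 * as Documentation/core-api/cachetlb.rst requires. */
static void zero_tail(size_t size, size_t page_size)
{
	char *addr = kmap_atomic_stub();
	memset(addr + size, 0, page_size - size);
	flush_dcache_page_stub();	/* page is still mapped here */
	kunmap_atomic_stub();
}
```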
Looks like (the other) Christoph broke this in 2008 with commit eebd2aa35569 ("Pagecache zeroing: zero_user_segment, zero_user_segments and zero_user"):
It has one line about it in the changelog:
Also extract the flushing of the caches to be outside of the kmap.
... which doesn't even attempt to justify why it's safe to do so.
-		memset((char *)kaddr + (offset), 0, (size));	\
-		flush_dcache_page(page);			\
-		kunmap_atomic(kaddr, (km_type));		\
+	kunmap_atomic(kaddr, KM_USER0);
+	flush_dcache_page(page);
Looks like it came from https://lore.kernel.org/lkml/20070911060425.472862098@sgi.com/ but there was no discussion of this change at all ... plenty of discussion about other conceptual problems with the entire patchset, though.
That being said, flushing after kmap updates is a complete mess. arm, probably the poster child for dcache-challenged plus highmem architectures, always flushed caches from kunmap, and arc has a flush_dcache_page that doesn't work at all on a highmem page that is not kmapped (where kmap_atomic and kmap_local_page don't count as kmapped, as they don't set page->virtual).