From: Alistair Popple <apopple@nvidia.com>
[ Upstream commit 00fa15e0d56482e32d8ca1f51d76b0ee00afb16b ]
Commit 793917d997df ("mm/readahead: Add large folio readahead") introduced support for using large folios for file-backed pages if the filesystem supports it.
page_cache_ra_order() was introduced to allocate and add these large folios to the page cache. However, adding pages to the page cache should be serialized against truncation and hole punching by taking invalidate_lock. Not doing so can lead to data races resulting in stale data getting added to the page cache and marked up-to-date. See commit 730633f0b7f9 ("mm: Protect operations adding pages to page cache with invalidate_lock") for more details.
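To illustrate the race, a hypothetical interleaving (a sketch, not taken from the commit message; filemap_add_folio() is the real page-cache insertion helper, the rest is shorthand):

	CPU 0 (readahead, no lock held)     CPU 1 (hole punch)
	-------------------------------     ------------------
	allocate large folio
	                                    filemap_invalidate_lock(mapping);
	                                    truncate page cache in range
	filemap_add_folio(mapping, ...);
	read old blocks from disk,
	mark folio up-to-date
	                                    zero out blocks on disk
	                                    filemap_invalidate_unlock(mapping);

The punched range now has an up-to-date folio in the page cache whose contents no longer match the (zeroed) blocks on disk.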
This issue was found by inspection, but a test case revealed it was possible to observe in practice on XFS. Fix this by taking invalidate_lock in page_cache_ra_order(), to mirror what is done for the non-THP case in page_cache_ra_unbounded().
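For reference, the pattern being mirrored looks roughly like this in page_cache_ra_unbounded() (an abridged sketch; the two helpers are the real shared-mode wrappers around mapping->invalidate_lock, the elisions are mine):

	void page_cache_ra_unbounded(struct readahead_control *ractl, ...)
	{
		struct address_space *mapping = ractl->mapping;
		...
		filemap_invalidate_lock_shared(mapping);
		/* allocate folios, add them to the page cache, queue reads */
		...
		read_pages(ractl);
		filemap_invalidate_unlock_shared(mapping);
		...
	}

Truncation and hole punching take the same rwsem exclusively via filemap_invalidate_lock(), so holding it shared is enough to keep new pages out of the page cache while an invalidation runs, without serializing readahead against other readers.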
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/readahead.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/mm/readahead.c b/mm/readahead.c
index 4a60cdb64262..38635af5bab7 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -508,6 +508,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		new_order--;
 	}
 
+	filemap_invalidate_lock_shared(mapping);
 	while (index <= limit) {
 		unsigned int order = new_order;
 
@@ -534,6 +535,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	}
 
 	read_pages(ractl);
+	filemap_invalidate_unlock_shared(mapping);
 
 	/*
 	 * If there were already pages in the page cache, then we may have