On Sat 06-10-18 22:59:56, Dan Williams wrote:
In the presence of multi-order entries the typical pagevec_lookup_entries() pattern may loop forever:
	while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
				min(end - index, (pgoff_t)PAGEVEC_SIZE),
				indices)) {
		...
		for (i = 0; i < pagevec_count(&pvec); i++) {
			index = indices[i];
			...
		}
		index++; /* BUG */
	}
The loop updates 'index' for each index found and then increments to the next possible page to continue the lookup. However, if the last entry in the pagevec is multi-order then the next possible page index is more than 1 page away. Fix this locally for the filesystem-dax case by checking for dax-multi-order entries. Going forward new users of multi-order entries need to be similarly careful, or we need a generic way to report the page increment in the radix iterator.
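To make the failure mode concrete: a multi-order (e.g. PMD-sized, order-9) entry is reported at its head index for every lookup that lands anywhere inside it, so "index = indices[i]; index++" keeps re-finding the same entry. The following user-space sketch is purely illustrative, not kernel code; the struct and lookup() helper are made up for the example and only model both styles of loop advance:

	#include <stdio.h>

	struct entry {
		unsigned long head;	/* canonical (head) page offset of the entry */
		unsigned int order;	/* 0 = PTE-sized, 9 = PMD-sized (512 pages) */
	};

	/* Return the entry covering @index, or NULL past the end of the "file". */
	static const struct entry *lookup(unsigned long index)
	{
		static const struct entry pmd_entry = { .head = 0, .order = 9 };

		if (index < (1UL << pmd_entry.order))
			return &pmd_entry;
		return NULL;	/* nothing mapped beyond the single PMD entry */
	}

	static void scan(int fixed)
	{
		unsigned long index = 0;
		int i;

		for (i = 0; i < 1000; i++) {	/* cap so the buggy case terminates */
			const struct entry *e = lookup(index);

			if (!e)
				break;
			index = e->head;			/* index = indices[i] */
			if (fixed)
				index += 1UL << e->order;	/* index += nr_pages */
			else
				index++;			/* the buggy advance */
		}
		printf("%-17s: index %lu after %d iteration(s)%s\n",
		       fixed ? "index += nr_pages" : "index++", index, i,
		       i == 1000 ? " (hit cap, would loop forever)" : "");
	}

	int main(void)
	{
		scan(0);	/* livelocks: every lookup re-reports head index 0 */
		scan(1);	/* steps past the whole 512-page entry at once */
		return 0;
	}

With the buggy advance the loop only stops at the artificial iteration cap; with "index += nr_pages" it steps past the whole 512-page entry in one iteration.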
Fixes: 5fac7408d828 ("mm, fs, dax: handle layout changes to pinned dax...")
Cc: <stable@vger.kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Ross Zwisler <zwisler@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Changes in v2:
- Only update nr_pages if the last entry in the pagevec is multi-order.
 fs/dax.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)
Thanks for fixing this up! It is somewhat ugly, but nicer fixes would be more intrusive, so I agree with this approach for the ease of backporting; let's clean up the iteration code later. Feel free to add:
Reviewed-by: Jan Kara <jack@suse.cz>
Honza
diff --git a/fs/dax.c b/fs/dax.c
index 4becbf168b7f..0fb270f0a0ef 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -666,6 +666,8 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 	while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
 				min(end - index, (pgoff_t)PAGEVEC_SIZE),
 				indices)) {
+		pgoff_t nr_pages = 1;
+
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *pvec_ent = pvec.pages[i];
 			void *entry;
@@ -680,8 +682,15 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 
 			xa_lock_irq(&mapping->i_pages);
 			entry = get_unlocked_mapping_entry(mapping, index, NULL);
-			if (entry)
+			if (entry) {
 				page = dax_busy_page(entry);
+				/*
+				 * Account for multi-order entries at
+				 * the end of the pagevec.
+				 */
+				if (i + 1 >= pagevec_count(&pvec))
+					nr_pages = 1UL << dax_radix_order(entry);
+			}
 			put_unlocked_mapping_entry(mapping, index, entry);
 			xa_unlock_irq(&mapping->i_pages);
 			if (page)
@@ -696,7 +705,7 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 		 */
 		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
-		index++;
+		index += nr_pages;
 
 		if (page)
 			break;
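A note on the v2 refinement (only the last entry in the pagevec updates nr_pages): entries earlier in the pagevec are followed by later indices returned in the same pagevec, so only the trailing entry determines where the next lookup must resume. The rough user-space model below is illustrative only; the struct result and next_start() helper are made up here to stand in for the (indices[i], entry order) pairs the kernel loop walks:

	#include <stdio.h>

	struct result {
		unsigned long index;	/* canonical index reported for the entry */
		unsigned int order;	/* 0 = PTE entry, 9 = PMD entry */
	};

	/* Mirror the loop's bookkeeping: track index and nr_pages across the pagevec. */
	static unsigned long next_start(const struct result *res, int count)
	{
		unsigned long index = 0, nr_pages = 1;
		int i;

		for (i = 0; i < count; i++) {
			index = res[i].index;			/* index = indices[i] */
			if (i + 1 >= count)			/* last entry only */
				nr_pages = 1UL << res[i].order;
		}
		return index + nr_pages;			/* index += nr_pages */
	}

	int main(void)
	{
		/* A PTE entry at pgoff 3, then a PMD entry whose head index is 512. */
		const struct result pvec[] = { { 3, 0 }, { 512, 9 } };

		printf("next lookup starts at %lu\n", next_start(pvec, 2));
		return 0;
	}

Here the trailing PMD entry at head index 512 makes the next lookup resume at 1024 rather than 513.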