In the presence of multi-order entries the typical pagevec_lookup_entries() pattern may loop forever:
	while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
				min(end - index, (pgoff_t)PAGEVEC_SIZE),
				indices)) {
		...
		for (i = 0; i < pagevec_count(&pvec); i++) {
			index = indices[i];
			...
		}
		index++; /* BUG */
	}
The loop updates 'index' to each index found and then increments it to the next possible page to continue the lookup. However, if the last entry in the pagevec is multi-order then the next possible page index is more than 1 page away; restarting the lookup just 1 page in still lands inside the same entry, which is found again at the same index, so 'index' never advances and the loop livelocks. Fix this locally for the filesystem-dax case by checking for dax multi-order entries. Going forward, new users of multi-order entries need to be similarly careful, or we need a generic way to report the page increment in the radix iterator.
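For illustration only (not part of the patch): assuming a 2MiB PMD entry at index 0 with 4KiB pages, the entry spans indices 0-511, so the loop has to step past the last entry's full extent rather than a single page. A rough sketch of the corrected pattern, using the same dax_radix_order() helper the patch below uses:

	while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
				min(end - index, (pgoff_t)PAGEVEC_SIZE),
				indices)) {
		pgoff_t nr_pages = 1;

		for (i = 0; i < pagevec_count(&pvec); i++) {
			index = indices[i];
			... /* look up 'entry' for this index */
			/* remember the span of a multi-order entry */
			nr_pages = 1UL << dax_radix_order(entry);
		}
		index += nr_pages; /* skip the full extent, not just 1 page */
	}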
Fixes: 5fac7408d828 ("mm, fs, dax: handle layout changes to pinned dax...")
Cc: <stable@vger.kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Ross Zwisler <zwisler@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 fs/dax.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c
index 4becbf168b7f..c1472eede1f7 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -666,6 +666,8 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 	while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
 				min(end - index, (pgoff_t)PAGEVEC_SIZE),
 				indices)) {
+		pgoff_t nr_pages = 1;
+
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *pvec_ent = pvec.pages[i];
 			void *entry;
@@ -680,8 +682,11 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 
 			xa_lock_irq(&mapping->i_pages);
 			entry = get_unlocked_mapping_entry(mapping, index, NULL);
-			if (entry)
+			if (entry) {
 				page = dax_busy_page(entry);
+				/* account for multi-order entries */
+				nr_pages = 1UL << dax_radix_order(entry);
+			}
 			put_unlocked_mapping_entry(mapping, index, entry);
 			xa_unlock_irq(&mapping->i_pages);
 			if (page)
@@ -696,7 +701,7 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 		 */
 		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
-		index++;
+		index += nr_pages;
 
 		if (page)
 			break;
On Sat, Oct 6, 2018 at 11:14 AM Dan Williams <dan.j.williams@intel.com> wrote:
> [..]
> 			xa_lock_irq(&mapping->i_pages);
> 			entry = get_unlocked_mapping_entry(mapping, index, NULL);
> -			if (entry)
> +			if (entry) {
> 				page = dax_busy_page(entry);
> +				/* account for multi-order entries */
> +				nr_pages = 1UL << dax_radix_order(entry);
> +			}
Thinking about this a bit further, the next index will be at least nr_pages away, but we don't want to accidentally skip over entries, as this patch might do: if the loop's last pass does not update nr_pages (for example the entry was gone or the slot was skipped), a stale multi-order span from an earlier entry would make 'index += nr_pages' jump past pages that were never inspected. So nr_pages should only be adjusted by the entry size if it is the last entry in the pagevec.
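A minimal sketch of that refinement (hypothetical here, reusing the v1 names; only the last pagevec slot widens the step):

			if (entry) {
				page = dax_busy_page(entry);
				/*
				 * Account for multi-order entries only at the
				 * end of the pagevec; earlier entries are
				 * followed directly via indices[i + 1], so
				 * widening the step for them could skip pages
				 * that were never inspected.
				 */
				if (i + 1 >= pagevec_count(&pvec))
					nr_pages = 1UL << dax_radix_order(entry);
			}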