On Sun, May 08, 2022 at 07:47:02PM -0700, Sultan Alsawaf wrote:
> From: Sultan Alsawaf <sultan@kerneltoast.com>
> 
> The asynchronous zspage free worker tries to lock a zspage's entire page
> list without defending against page migration. Since pages which haven't
> yet been locked can concurrently migrate off the zspage page list while
> lock_zspage() churns away, lock_zspage() can suffer from a few different
> lethal races. It can lock a page which no longer belongs to the zspage
> and unsafely dereference page_private(), it can unsafely dereference a
> torn pointer to the next page (since there's a data race), and it can
> observe a spurious NULL pointer to the next page and thus not lock all
> of the zspage's pages (since a single page migration will reconstruct
> the entire page list, and create_page_chain() unconditionally zeroes out
> each list pointer in the process).
> 
> Fix the races by using migrate_read_lock() in lock_zspage() to
> synchronize with page migration.
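
To make the three races concrete, here is the old loop (the one removed
by the diff below) with each failure mode marked as a comment. The
annotations paraphrase the commit message above; they are not the patch
author's wording:

	/* Old lock_zspage(), annotated per the commit message. */
	static void lock_zspage(struct zspage *zspage)
	{
		struct page *page = get_first_page(zspage);

		do {
			/*
			 * Race 1: "page" may have already migrated off this
			 * zspage, so we can lock a page that no longer
			 * belongs to it and later dereference a stale
			 * page_private().
			 */
			lock_page(page);
			/*
			 * Race 2: get_next_page() reads the list pointer
			 * with no synchronization against
			 * create_page_chain(), so the load can be torn.
			 *
			 * Race 3: a single migration rebuilds the whole page
			 * list and zeroes each link first, so a spurious
			 * NULL here ends the loop before every page is
			 * locked.
			 */
		} while ((page = get_next_page(page)) != NULL);
	}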
> 
> Cc: stable@vger.kernel.org
> Fixes: 48b4800a1c6a ("zsmalloc: page migration support")

Shouldn't the Fixes tag be 77ff465799c6 ("zsmalloc: zs_page_migrate: skip
unnecessary loops but not return -EBUSY if zspage is not inuse")? Because
we didn't migrate ZS_EMPTY pages before.

> Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
> ---
>  mm/zsmalloc.c | 37 +++++++++++++++++++++++++++++++++----
>  1 file changed, 33 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 9152fbde33b5..5d5fc04385b8 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1718,11 +1718,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
>   */
>  static void lock_zspage(struct zspage *zspage)
>  {
> -	struct page *page = get_first_page(zspage);
> +	struct page *curr_page, *page;
> 
> -	do {
> -		lock_page(page);
> -	} while ((page = get_next_page(page)) != NULL);
> +	/*
> +	 * Pages we haven't locked yet can be migrated off the list while we're
> +	 * trying to lock them, so we need to be careful and only attempt to
> +	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
> +	 * may no longer belong to the zspage. This means that we may wait for
> +	 * the wrong page to unlock, so we must take a reference to the page
> +	 * prior to waiting for it to unlock outside migrate_read_lock().

I couldn't get the point here. Why couldn't we simply lock zspage
migration? Something like this:

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9152fbde33b5..05ff2315b7b1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1987,7 +1987,10 @@ static void async_free_zspage(struct work_struct *work)
 	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
 		list_del(&zspage->list);
+
+		migrate_read_lock(zspage);
 		lock_zspage(zspage);
+		migrate_read_unlock(zspage);
 		get_zspage_mapping(zspage, &class_idx, &fullness);
 		VM_BUG_ON(fullness != ZS_EMPTY);
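
(As I read the suggestion, lock_zspage() itself could then stay the
simple loop it is today. A sketch of that reading only, not a tested
alternative:)

	static void lock_zspage(struct zspage *zspage)
	{
		/*
		 * Caller (async_free_zspage) holds migrate_read_lock(zspage),
		 * so the page list cannot change underneath the loop.
		 */
		struct page *page = get_first_page(zspage);

		do {
			lock_page(page);
		} while ((page = get_next_page(page)) != NULL);
	}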

> +	 */
> +	while (1) {
> +		migrate_read_lock(zspage);
> +		page = get_first_page(zspage);
> +		if (trylock_page(page))
> +			break;
> +		get_page(page);
> +		migrate_read_unlock(zspage);
> +		wait_on_page_locked(page);
> +		put_page(page);
> +	}
> +
> +	curr_page = page;
> +	while ((page = get_next_page(curr_page))) {
> +		if (trylock_page(page)) {
> +			curr_page = page;
> +		} else {
> +			get_page(page);
> +			migrate_read_unlock(zspage);
> +			wait_on_page_locked(page);
> +			put_page(page);
> +			migrate_read_lock(zspage);
> +		}
> +	}
> +	migrate_read_unlock(zspage);
>  }
> 
>  static int zs_init_fs_context(struct fs_context *fc)
> --
> 2.36.0
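
For reference, here is the new lock_zspage() assembled from the hunks
above, with the patch's explanatory comment condensed into shorter inline
notes (my paraphrase):

	static void lock_zspage(struct zspage *zspage)
	{
		struct page *curr_page, *page;

		/*
		 * Lock the first page under migrate_read_lock(). If it is
		 * already locked, pin it, drop the migrate lock, sleep until
		 * it unlocks, and retry from the (possibly new) first page.
		 */
		while (1) {
			migrate_read_lock(zspage);
			page = get_first_page(zspage);
			if (trylock_page(page))
				break;
			get_page(page);
			migrate_read_unlock(zspage);
			wait_on_page_locked(page);
			put_page(page);
		}

		/*
		 * Walk the rest of the list. On contention, hold a reference
		 * and drop the migrate lock while sleeping so migration can
		 * make progress, then re-take it and re-read the next
		 * pointer from the last page we own.
		 */
		curr_page = page;
		while ((page = get_next_page(curr_page))) {
			if (trylock_page(page)) {
				curr_page = page;
			} else {
				get_page(page);
				migrate_read_unlock(zspage);
				wait_on_page_locked(page);
				put_page(page);
				migrate_read_lock(zspage);
			}
		}
		migrate_read_unlock(zspage);
	}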