In zswap_writeback_entry(), after we get a folio from __read_swap_cache_async(), we grab the tree lock again to check that the swap entry was not invalidated and recycled. If it was, we delete the folio we just added to the swap cache and exit.
However, __read_swap_cache_async() returns the folio locked when it is newly allocated, which is always the case for this path, and the folio is ref'd. Make sure to unlock and put the folio before returning.
This was discovered by code inspection, probably because this path handles a race condition that should not happen often, and the bug would not crash the system; it only strands the folio indefinitely.
Fixes: 04fc7816089c ("mm: fix zswap writeback race condition")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/zswap.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/mm/zswap.c b/mm/zswap.c
index 8f4a7efc2bdae..00e90b9b5417d 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1448,6 +1448,8 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	if (zswap_rb_search(&tree->rbroot, swp_offset(entry->swpentry)) != entry) {
 		spin_unlock(&tree->lock);
 		delete_from_swap_cache(folio);
+		folio_unlock(folio);
+		folio_put(folio);
 		return -ENOMEM;
 	}
 	spin_unlock(&tree->lock);
On 2024/1/25 16:51, Yosry Ahmed wrote:
LGTM, thanks!
Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>
On Thu, Jan 25, 2024 at 08:51:27AM +0000, Yosry Ahmed wrote:
Ouch, good catch.
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
On Thu, Jan 25, 2024 at 12:51 AM Yosry Ahmed yosryahmed@google.com wrote:
Oof. Yeah, this is probably rare IRL (it looks like a very specific race condition), and the symptoms are rather subtle (no kernel crash). LGTM.

Reviewed-by: Nhat Pham <nphamcs@gmail.com>