From: Kairui Song kasong@tencent.com
When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads swapin the same entry at the same time, they get different pages (A, B). Before one thread (T0) finishes the swapin and installs page (A) into the PTE, another thread (T1) could finish swapin of page (B), swap_free the entry, then swap out the possibly modified page reusing the same entry. This breaks the pte_same check in (T0) because the PTE value is unchanged, causing an ABA problem. Thread (T0) will install a stale page (A) into the PTE and cause data corruption.
One possible callstack is like this:
CPU0                                 CPU1
----                                 ----
do_swap_page()                       do_swap_page() with same entry
<direct swapin path>                 <direct swapin path>
<alloc page A>                       <alloc page B>
swap_read_folio() <- read to page A  swap_read_folio() <- read to page B
<slow on later locks or interrupt>   <finished swapin first>
...                                  set_pte_at()
                                     swap_free() <- entry is free
                                     <write to page B, now page A stale>
                                     <swap out page B to same swap entry>
pte_same() <- Check pass, PTE seems
              unchanged, but page A
              is stale!
swap_free() <- page B content lost!
set_pte_at() <- stale page A installed!
Besides, for ZRAM, swap_free() allows the swap device to discard the entry content, so even if page (B) is not modified, if swap_read_folio() on CPU0 happens later than swap_free() on CPU1, it may also cause data loss.
To fix this, reuse swapcache_prepare, which pins the swap entry using the cache flag and allows only one thread to pin it. Release the pin after the page table lock is released. Racers will simply wait, since this is a rare and very short event. A schedule() call is added to avoid wasting too much CPU or adding too much noise to perf statistics.
Other methods, like increasing the swap count, don't seem to be a good idea after some tests: they cause racers to fall back to the swap cache again, and parallel swapin using different methods leads to a much more complex scenario.
Reproducer:
This race issue can be triggered easily using a well-constructed reproducer and a patched brd (with a delay in the read path) [1]:
With the latest 6.8 mainline, race-caused data loss can be observed easily:

$ gcc -g -lpthread test-thread-swap-race.c && ./a.out
Polulating 32MB of memory region...
Keep swapping out...
Starting round 0...
Spawning 65536 workers...
32746 workers spawned, wait for done...
Round 0: Error on 0x5aa00, expected 32746, got 32743, 3 data loss!
Round 0: Error on 0x395200, expected 32746, got 32743, 3 data loss!
Round 0: Error on 0x3fd000, expected 32746, got 32737, 9 data loss!
Round 0 Failed, 15 data loss!
This reproducer spawns multiple threads sharing the same memory region, using a small swap device. Every two threads update mapped pages one by one in opposite directions, trying to create a race, with one dedicated thread that keeps swapping the data out using madvise.
The reproducer produced a failure about once every 5 minutes, so the race should be entirely possible in production.
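For reference, the reproducer's overall structure is roughly the sketch below (hypothetical, simplified userspace code; names and constants are illustrative, the real test-thread-swap-race.c lives at [1]):

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define REGION_SIZE (32UL << 20)	/* 32MB shared region */
#define PAGE_SZ     4096UL
#define NPAGES      (REGION_SIZE / PAGE_SZ)

static char *region;
static unsigned int round_val;

/* Dedicated thread: keep pushing the region out to the small swap device. */
static void *swapout_worker(void *arg)
{
	for (;;)
		madvise(region, REGION_SIZE, MADV_PAGEOUT);
	return NULL;
}

/* Paired workers walk the region in opposite directions, faulting pages
 * back in and stamping them, so two threads often swapin the same entry. */
static void *update_worker(void *arg)
{
	long forward = (long)arg;

	for (unsigned long i = 0; i < NPAGES; i++) {
		unsigned long idx = forward ? i : NPAGES - 1 - i;
		*(unsigned int *)(region + idx * PAGE_SZ) = round_val;
	}
	return NULL;
}

int main(void)
{
	pthread_t swapper, w0, w1;

	region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
		      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	memset(region, 0, REGION_SIZE);		/* populate */
	pthread_create(&swapper, NULL, swapout_worker, NULL);

	for (round_val = 1; ; round_val++) {
		pthread_create(&w0, NULL, update_worker, (void *)1L);
		pthread_create(&w1, NULL, update_worker, (void *)0L);
		pthread_join(w0, NULL);
		pthread_join(w1, NULL);
		/* verify: any page holding an older stamp means lost writes */
		for (unsigned long i = 0; i < NPAGES; i++)
			if (*(unsigned int *)(region + i * PAGE_SZ) != round_val)
				printf("Error on %#lx: data loss!\n", i * PAGE_SZ);
	}
	return 0;
}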
After this patch, I ran the reproducer for over a few hundred rounds and observed no data loss.
Performance overhead is minimal; microbenchmark swapping in 10G from 32G zram:
Before:     10934698 us
After:      11157121 us
Non-direct: 13155355 us (dropping SWP_SYNCHRONOUS_IO flag)
Fixes: 0bcac06f27d7 ("mm, swap: skip swapcache for swapin of synchronous device")
Link: https://github.com/ryncsn/emm-test-project/tree/master/swap-stress-race [1]
Reported-by: "Huang, Ying" ying.huang@intel.com
Closes: https://lore.kernel.org/lkml/87bk92gqpx.fsf_-_@yhuang6-desk2.ccr.corp.intel....
Signed-off-by: Kairui Song kasong@tencent.com
Cc: stable@vger.kernel.org
---
Update from V2:
- Add a schedule() if raced, to prevent repeated page faults wasting CPU and adding noise to perf statistics.
- Use a bool to state the special case instead of reusing existing variables, fixing error handling. [Minchan Kim]
V2: https://lore.kernel.org/all/20240206182559.32264-1-ryncsn@gmail.com/
Update from V1:
- Add some words on the ZRAM case; it discards swap content on swap_free, so the race window is a bit different but the cure is the same. [Barry Song]
- Update comments to make them cleaner. [Huang, Ying]
- Add a function placeholder to fix the CONFIG_SWAP=n build. [SeongJae Park]
- Update the commit message and summary; refer to SWP_SYNCHRONOUS_IO instead of "direct swapin path". [Yu Zhao]
- Update commit message.
- Collect Reviews and Acks.
V1: https://lore.kernel.org/all/20240205110959.4021-1-ryncsn@gmail.com/
 include/linux/swap.h |  5 +++++
 mm/memory.c          | 20 ++++++++++++++++++++
 mm/swap.h            |  5 +++++
 mm/swapfile.c        | 13 +++++++++++++
 4 files changed, 43 insertions(+)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4db00ddad261..8d28f6091a32 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -549,6 +549,11 @@ static inline int swap_duplicate(swp_entry_t swp)
 	return 0;
 }
 
+static inline int swapcache_prepare(swp_entry_t swp)
+{
+	return 0;
+}
+
 static inline void swap_free(swp_entry_t swp)
 {
 }
diff --git a/mm/memory.c b/mm/memory.c
index 7e1f4849463a..7059230d0a54 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3799,6 +3799,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
+	bool need_clear_cache = false;
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
@@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
+			/*
+			 * Prevent parallel swapin from proceeding with
+			 * the cache flag. Otherwise, another thread may
+			 * finish swapin first, free the entry, and swapout
+			 * reusing the same entry. It's undetectable as
+			 * pte_same() returns true due to entry reuse.
+			 */
+			if (swapcache_prepare(entry)) {
+				/* Relax a bit to prevent rapid repeated page faults */
+				schedule();
+				goto out;
+			}
+			need_clear_cache = true;
+
 			/* skip swapcache */
 			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
 						vma, vmf->address, false);
@@ -4117,6 +4132,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 out:
+	/* Clear the swap cache pin for direct swapin after PTL unlock */
+	if (need_clear_cache)
+		swapcache_clear(si, entry);
 	if (si)
 		put_swap_device(si);
 	return ret;
@@ -4131,6 +4149,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		folio_unlock(swapcache);
 		folio_put(swapcache);
 	}
+	if (need_clear_cache)
+		swapcache_clear(si, entry);
 	if (si)
 		put_swap_device(si);
 	return ret;
diff --git a/mm/swap.h b/mm/swap.h
index 758c46ca671e..fc2f6ade7f80 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -41,6 +41,7 @@ void __delete_from_swap_cache(struct folio *folio,
 void delete_from_swap_cache(struct folio *folio);
 void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
+void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry);
 struct folio *swap_cache_get_folio(swp_entry_t entry,
 		struct vm_area_struct *vma, unsigned long addr);
 struct folio *filemap_get_incore_folio(struct address_space *mapping,
@@ -97,6 +98,10 @@ static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
 	return 0;
 }
 
+static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
+{
+}
+
 static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
 		struct vm_area_struct *vma, unsigned long addr)
 {
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 556ff7347d5f..746aa9da5302 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3365,6 +3365,19 @@ int swapcache_prepare(swp_entry_t entry)
 	return __swap_duplicate(entry, SWAP_HAS_CACHE);
 }
 
+void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
+{
+	struct swap_cluster_info *ci;
+	unsigned long offset = swp_offset(entry);
+	unsigned char usage;
+
+	ci = lock_cluster_or_swap_info(si, offset);
+	usage = __swap_entry_free_locked(si, offset, SWAP_HAS_CACHE);
+	unlock_cluster_or_swap_info(si, ci);
+	if (!usage)
+		free_swap_slot(entry);
+}
+
 struct swap_info_struct *swp_swap_info(swp_entry_t entry)
 {
 	return swap_type_to_swap_info(swp_type(entry));
--
2.43.0
On Fri, Feb 16, 2024 at 10:53 PM Kairui Song ryncsn@gmail.com wrote:
@@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
	if (!folio) {
		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
		    __swap_count(entry) == 1) {
/*
* Prevent parallel swapin from proceeding with
* the cache flag. Otherwise, another thread may
* finish swapin first, free the entry, and swapout
* reusing the same entry. It's undetectable as
* pte_same() returns true due to entry reuse.
*/
if (swapcache_prepare(entry)) {
/* Relax a bit to prevent rapid repeated page faults */
schedule();
goto out;
}
need_clear_cache = true;
Hi Kairui, I remember Ying had a suggestion to move swapcache_prepare() after swap_read_folio() to decrease the race window. Does that one help even more to somehow "fix" the counting issue?
Thanks
Barry
On Fri, Feb 16, 2024 at 6:15 PM Barry Song 21cnbao@gmail.com wrote:
Hi Kairui, I remember Ying had a suggestion to move swapcache_prepare() after swap_read_folio() to decrease the race window. Does that one help even more to somehow "fix" the counting issue?
Hi Barry,
Thanks for the comments!
Yes, that's one of the suggestions. I found the result already looks good enough in the test scenario I posted, after adding the schedule() change. Moving swapcache_prepare() actually wastes more CPU, as racers now need to allocate and free extra folios, and the test result for the counting issue barely changed, so I kept the original code.
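For clarity, the two orderings under discussion look roughly like this (a simplified sketch only, assuming the current swap_read_folio(folio, synchronous, plug) signature; neither block is actual code from a posted version):

	/* Posted patch: pin the entry before allocating and reading. */
	if (swapcache_prepare(entry))
		goto out;		/* raced, back off and retry the fault */
	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vmf->address, false);
	swap_read_folio(folio, true, NULL);

	/* Suggested alternative: pin only after the read, shrinking the
	 * window the pin is held across, but a losing racer has already
	 * paid for a folio allocation and a device read it must discard. */
	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vmf->address, false);
	swap_read_folio(folio, true, NULL);
	if (swapcache_prepare(entry)) {
		folio_put(folio);	/* wasted alloc + I/O */
		goto out;
	}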
Kairui Song ryncsn@gmail.com writes:
Yes, that's one of the suggestions. I found the result already looks good enough in the test scenario I posted, after adding the schedule() change. Moving swapcache_prepare() actually wastes more CPU, as racers now need to allocate and free extra folios, and the test result for the counting issue barely changed, so I kept the original code.
The time to allocate folios is unbounded, especially when direct reclaim is triggered. And we can clear the swap cache flag earlier to reduce the race window too.
--
Best Regards,
Huang, Ying
On 16.02.24 10:51, Kairui Song wrote:
@@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
	if (!folio) {
		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
		    __swap_count(entry) == 1) {
/*
* Prevent parallel swapin from proceeding with
* the cache flag. Otherwise, another thread may
* finish swapin first, free the entry, and swapout
* reusing the same entry. It's undetectable as
* pte_same() returns true due to entry reuse.
*/
if (swapcache_prepare(entry)) {
/* Relax a bit to prevent rapid repeated page faults */
schedule();
goto out;
}
need_clear_cache = true;
I took a closer look at __read_swap_cache_async() and it essentially does something similar.
Instead of returning, it keeps retrying until it finds that swapcache_prepare() fails for a reason other than -EEXIST (e.g., freed concurrently), or it finds the entry in the swapcache.
So if you would succeed here on a freed+reused swap entry, __read_swap_cache_async() would simply retry.
It spells that out:
/*
 * We might race against __delete_from_swap_cache(), and
 * stumble across a swap_map entry whose SWAP_HAS_CACHE
 * has not yet been cleared. Or race against another
 * __read_swap_cache_async(), which has set SWAP_HAS_CACHE
 * in swap_map, but not yet added its folio to swap cache.
 */
Whereby we could now race against this code here as well, where we speculatively set SWAP_HAS_CACHE and might never add anything to the swap cache.
I'd probably avoid the wrong returns and do something even closer to __read_swap_cache_async().
while (true) {
	/*
	 * Fake that we are trying to insert a page into the swapcache, to
	 * serialize against concurrent threads wanting to do the same.
	 * [more from your description]
	 */
	ret = swapcache_prepare(entry);
	if (likely(!ret))
		/*
		 * Move forward with swapin, we'll recheck if the PTE hasn't
		 * changed later.
		 */
		break;
	else if (ret != -EEXIST)
		goto out;

	/*
	 * See __read_swap_cache_async(). We might either have raced against
	 * another thread, or the entry could have been freed and reused in
	 * the meantime. Make sure that the PTE did not change, to detect
	 * freeing.
	 */
	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
				       &vmf->ptl);
	if (!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))
		goto unlock;

	schedule();
}
I was skeptical about the schedule(), but __read_swap_cache_async() does it already because there is no better way to wait for the event to happen.
With something like above you would no longer depend on the speed of schedule() to determine how often you would retry the fault, which would likely make sense.
I do wonder about the schedule() vs. schedule_timeout_uninterruptible(), though. No expert on that area, do you have any idea?
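For illustration, a timed backoff in the racer path might look like the sketch below (hypothetical, not from the posted patch; schedule_timeout_uninterruptible() sleeps in TASK_UNINTERRUPTIBLE for the given number of jiffies):

	if (swapcache_prepare(entry)) {
		/*
		 * Unlike schedule(), which can return immediately when no
		 * other task is runnable, a timed uninterruptible sleep
		 * guarantees an actual backoff before the fault is retried.
		 */
		schedule_timeout_uninterruptible(1);
		goto out;
	}
	need_clear_cache = true;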
On 16.02.24 17:53, David Hildenbrand wrote:
Forgot to add
Acked-by: David Hildenbrand david@redhat.com
But I suspect we do not want to rely on schedule() to actually sleep, and should instead keep retrying until the other thread has finished, similar to above.
On Sat, Feb 17, 2024 at 2:02 AM David Hildenbrand david@redhat.com wrote:
Hi David
Thanks for the review! I saw you added more replies, so I'll just reply to your last mail.
Looping in the page fault was discussed earlier. One issue is that a swap entry may be stuck in the swapcache for a long time, so at least an extra cache lookup is needed. To be safe we would need to implement a loop similar to the one in mm/swap_state.c, which I doubt is necessary...
I do wonder about the schedule() vs. schedule_timeout_uninterruptible(), though. No expert on that area, do you have any idea?
schedule_timeout_uninterruptible seems more reasonable here, judging from its name (delay a bit to wait). My thinking is that SWP_SYNCHRONOUS_IO devices are supposed to be super fast, so usually a second try will just work (when tested with a less stressed test case, that seems to always be true), and the race itself is rare enough to have been ignored for 7 years.
But when the system is really stressed (e.g. the reproducer I provided), it may take longer to finish (SWP_SYNCHRONOUS_IO devices are CPU bound). So a schedule() can help keep one task from looping on the page fault, for better statistics and CPU usage.
Previous test results: https://lore.kernel.org/lkml/CAMgjq7BvTJmxrWQOJvkLt4g_jnvmx07NdU63sGeRMGde4O... They showed that schedule() works fine here.
David Hildenbrand david@redhat.com writes:
On 16.02.24 10:51, Kairui Song wrote:
From: Kairui Song kasong@tencent.com When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads swapin the same entry at the same time, they get different pages (A, B). Before one thread (T0) finishes the swapin and installs page (A) to the PTE, another thread (T1) could finish swapin of page (B), swap_free the entry, then swap out the possibly modified page reusing the same entry. It breaks the pte_same check in (T0) because PTE value is unchanged, causing ABA problem. Thread (T0) will install a stalled page (A) into the PTE and cause data corruption. One possible callstack is like this: CPU0 CPU1
do_swap_page() do_swap_page() with same entry <direct swapin path> <direct swapin path> <alloc page A> <alloc page B> swap_read_folio() <- read to page A swap_read_folio() <- read to page B <slow on later locks or interrupt> <finished swapin first> ... set_pte_at() swap_free() <- entry is free <write to page B, now page A stalled> <swap out page B to same swap entry> pte_same() <- Check pass, PTE seems unchanged, but page A is stalled! swap_free() <- page B content lost! set_pte_at() <- staled page A installed! And besides, for ZRAM, swap_free() allows the swap device to discard the entry content, so even if page (B) is not modified, if swap_read_folio() on CPU0 happens later than swap_free() on CPU1, it may also cause data loss. To fix this, reuse swapcache_prepare which will pin the swap entry using the cache flag, and allow only one thread to pin it. Release the pin after PT unlocked. Racers will simply wait since it's a rare and very short event. A schedule() call is added to avoid wasting too much CPU or adding too much noise to perf statistics Other methods like increasing the swap count don't seem to be a good idea after some tests, that will cause racers to fall back to use the swap cache again. Parallel swapin using different methods leads to a much more complex scenario. Reproducer: This race issue can be triggered easily using a well constructed reproducer and patched brd (with a delay in read path) [1]: With latest 6.8 mainline, race caused data loss can be observed easily: $ gcc -g -lpthread test-thread-swap-race.c && ./a.out Polulating 32MB of memory region... Keep swapping out... Starting round 0... Spawning 65536 workers... 32746 workers spawned, wait for done... Round 0: Error on 0x5aa00, expected 32746, got 32743, 3 data loss! Round 0: Error on 0x395200, expected 32746, got 32743, 3 data loss! Round 0: Error on 0x3fd000, expected 32746, got 32737, 9 data loss! Round 0 Failed, 15 data loss! This reproducer spawns multiple threads sharing the same memory region using a small swap device. Every two threads updates mapped pages one by one in opposite direction trying to create a race, with one dedicated thread keep swapping out the data out using madvise. The reproducer created a reproduce rate of about once every 5 minutes, so the race should be totally possible in production. After this patch, I ran the reproducer for over a few hundred rounds and no data loss observed. Performance overhead is minimal, microbenchmark swapin 10G from 32G zram: Before: 10934698 us After: 11157121 us Non-direct: 13155355 us (Dropping SWP_SYNCHRONOUS_IO flag) Fixes: 0bcac06f27d7 ("mm, swap: skip swapcache for swapin of synchronous device") Link: https://github.com/ryncsn/emm-test-project/tree/master/swap-stress-race [1] Reported-by: "Huang, Ying" ying.huang@intel.com Closes: https://lore.kernel.org/lkml/87bk92gqpx.fsf_-_@yhuang6-desk2.ccr.corp.intel.... Signed-off-by: Kairui Song kasong@tencent.com Cc: stable@vger.kernel.org
Update from V2:
- Add a schedule() if raced, to prevent repeated page faults from wasting CPU and adding noise to perf statistics.
- Use a bool to state the special case instead of reusing existing variables, fixing error handling [Minchan Kim].
V2: https://lore.kernel.org/all/20240206182559.32264-1-ryncsn@gmail.com/

Update from V1:
- Add some words on the ZRAM case: it will discard swap content on swap_free, so the race window is a bit different, but the cure is the same. [Barry Song]
- Update comments to make them cleaner. [Huang, Ying]
- Add a function placeholder to fix the CONFIG_SWAP=n build. [SeongJae Park]
- Update the commit message and summary, referring to SWP_SYNCHRONOUS_IO instead of "direct swapin path". [Yu Zhao]
- Update commit message.
- Collect Reviews and Acks.
V1: https://lore.kernel.org/all/20240205110959.4021-1-ryncsn@gmail.com/

 include/linux/swap.h | 5 +++++
 mm/memory.c          | 20 ++++++++++++++++++++
 mm/swap.h            | 5 +++++
 mm/swapfile.c        | 13 +++++++++++++
 4 files changed, 43 insertions(+)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4db00ddad261..8d28f6091a32 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -549,6 +549,11 @@ static inline int swap_duplicate(swp_entry_t swp)
 	return 0;
 }
 
+static inline int swapcache_prepare(swp_entry_t swp)
+{
+	return 0;
+}
+
 static inline void swap_free(swp_entry_t swp)
 {
 }
diff --git a/mm/memory.c b/mm/memory.c
index 7e1f4849463a..7059230d0a54 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3799,6 +3799,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
+	bool need_clear_cache = false;
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
@@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
+			/*
+			 * Prevent parallel swapin from proceeding with
+			 * the cache flag. Otherwise, another thread may
+			 * finish swapin first, free the entry, and swapout
+			 * reusing the same entry. It's undetectable as
+			 * pte_same() returns true due to entry reuse.
+			 */
+			if (swapcache_prepare(entry)) {
+				/* Relax a bit to prevent rapid repeated page faults */
+				schedule();
+				goto out;
+			}
+			need_clear_cache = true;
I took a closer look at __read_swap_cache_async() and it essentially does something similar.
Instead of returning, it keeps retrying until swapcache_prepare() fails for a reason other than -EEXIST (e.g., the entry was freed concurrently) or it finds the entry in the swapcache.
So if you would succeed here on a freed+reused swap entry, __read_swap_cache_async() would simply retry.
It spells that out:
/*
 * We might race against __delete_from_swap_cache(), and
 * stumble across a swap_map entry whose SWAP_HAS_CACHE
 * has not yet been cleared. Or race against another
 * __read_swap_cache_async(), which has set SWAP_HAS_CACHE
 * in swap_map, but not yet added its folio to swap cache.
 */
Whereby we could now race against this code here as well, where we speculatively set SWAP_HAS_CACHE and might never add anything to the swap cache.
I'd probably avoid the wrong returns and do something even closer to __read_swap_cache_async().
while (true) {
	/*
	 * Fake that we are trying to insert a page into the swapcache, to
	 * serialize against concurrent threads wanting to do the same.
	 * [more from your description]
	 */
	ret = swapcache_prepare(entry);
	if (likely(!ret))
		/*
		 * Move forward with swapin, we'll recheck if the PTE hasn't
		 * changed later.
		 */
		break;
	else if (ret != -EEXIST)
		goto out;
The swap entry may be kept in swap cache for a long time. For example, it may be read into swap cache via MADV_WILLNEED.
-- Best Regards, Huang, Ying
	/*
	 * See __read_swap_cache_async(). We might either have raced against
	 * another thread, or the entry could have been freed and reused in the
	 * meantime. Make sure that the PTE did not change, to detect freeing.
	 */
	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
				       vmf->address, &vmf->ptl);
	if (!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))
		goto unlock;

	schedule();
}
I was skeptical about the schedule(), but __read_swap_cache_async() does it already because there is no better way to wait for the event to happen.
With something like the above, you would no longer depend on the speed of schedule() to determine how often you retry the fault, which would likely make sense.
I do wonder about the schedule() vs. schedule_timeout_uninterruptible(), though. No expert on that area, do you have any idea?
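To make the serialization scheme under discussion concrete, here is a minimal userspace model of it: a hypothetical sketch, not kernel code, with the swap_map slot's SWAP_HAS_CACHE bit modeled as a C11 atomic flag and schedule() approximated by sched_yield(). All names (fake_swapcache_prepare, fake_do_swap_page, faulter) are illustrative. Losers back off and retake the fault, as the patch does, so only one thread performs the swapin at a time:

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Model of one swap_map slot: SWAP_HAS_CACHE as an atomic flag. */
static atomic_bool swap_has_cache;

/* Model of swapcache_prepare(): 0 if we pinned the entry, nonzero if we raced. */
static int fake_swapcache_prepare(void)
{
	bool expected = false;
	return atomic_compare_exchange_strong(&swap_has_cache, &expected, true) ? 0 : -1;
}

/* One page fault attempt; returns true once this thread's swapin is done. */
static bool fake_do_swap_page(long tid)
{
	if (fake_swapcache_prepare()) {
		sched_yield();	/* stand-in for schedule(), then bail out of the fault */
		return false;
	}
	printf("thread %ld holds the pin and performs the swapin\n", tid);
	/* ...swap_read_folio(), pte_same() recheck, set_pte_at(), swap_free()... */
	atomic_store(&swap_has_cache, false);	/* swapcache_clear() after PT unlock */
	return true;
}

static void *faulter(void *arg)
{
	long tid = (long)arg;
	/* Retake the fault until done; in the kernel the PTE recheck ends this. */
	while (!fake_do_swap_page(tid))
		;
	return NULL;
}

int main(void)
{
	pthread_t t[4];
	for (long i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, faulter, (void *)i);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	return 0;
}

In this model every thread eventually wins the pin; in the kernel, a loser that retakes the fault typically finds the PTE already changed, or the folio in the swapcache, and never re-enters this path.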
On Sun, Feb 18, 2024 at 9:02 PM Huang, Ying ying.huang@intel.com wrote:
The swap entry may be kept in swap cache for a long time. For example, it may be read into swap cache via MADV_WILLNEED.
This seems fine.
If swapcache has data from WILLNEED, the new page fault will hit it. Thus, we won't go into the SYNC_IO path any more?
Barry Song 21cnbao@gmail.com writes:
If swapcache has data from WILLNEED, the new page fault will hit it. Thus, we won't go into the SYNC_IO path any more?
They may happen in parallel. That is, one task is busy looping while another task reads the swap entry into the swap cache.
On Sun, Feb 18, 2024 at 9:41 PM Huang, Ying ying.huang@intel.com wrote:
They may happen in parallel. That is, one task is busy looping while another task reads the swap entry into the swap cache.
do_swap_page() isn't busy looping on swapcache_prepare(); if it fails, it exits, and then we have a completely new page fault. This new page fault will look up the swapcache and find it, going into the path that maps the swapcache folio into the PTEs. So the new page fault won't call swapcache_prepare() any more.
On Sun, Feb 18, 2024 at 4:47 PM Barry Song 21cnbao@gmail.com wrote:
Hi Barry
The issue with this code snippet is that we could have swapcache_prepare(entry) == -EEXIST and pte_same() == true; it's not necessarily WILLNEED, a concurrent fault plus swapout could cause that too. Then we are stuck in the while (true) loop here.
It can be fixed, but there are still other potential issues: we end up with a similar loop in swap_state.c and may still need to bail out of the fault in some cases, so things don't seem improved here.
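For illustration, here is a hedged userspace sketch of the loop variant being debated, again not kernel code and with all names hypothetical. pte_same() is modeled as always true (the ABA case), so the loop can only end when the pin owner releases the flag, which is exactly the dependency being pointed out:

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool swap_has_cache = true;	/* the pin is currently held elsewhere */

/* In the ABA scenario the PTE value never changes, so pte_same() keeps passing. */
static bool fake_pte_same(void) { return true; }

/* The while (true) variant: loop until we pin the entry or the PTE changes. */
static int loop_variant(void)
{
	while (true) {
		bool expected = false;
		if (atomic_compare_exchange_strong(&swap_has_cache, &expected, true))
			return 0;		/* pinned, proceed with swapin */
		if (!fake_pte_same())
			return -1;		/* PTE changed: bail out of the fault */
		sched_yield();			/* stand-in for schedule() */
	}
}

static void *pin_owner(void *arg)
{
	(void)arg;
	sleep(1);				/* the owner finishes its swapin eventually */
	atomic_store(&swap_has_cache, false);
	return NULL;
}

int main(void)
{
	pthread_t t;
	pthread_create(&t, NULL, pin_owner, NULL);
	printf("spinning until the pin owner releases...\n");
	loop_variant();				/* spins ~1s here; forever if never released */
	printf("got the pin\n");
	pthread_join(t, NULL);
	return 0;
}

Bailing out of the fault instead, as the patch does, bounds each attempt and lets the retried fault observe the updated PTE or the swapcache.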
On Sun, Feb 18, 2024 at 12:41 AM Huang, Ying ying.huang@intel.com wrote:
The swap entry may be kept in swap cache for a long time. For example, it may be read into swap cache via MADV_WILLNEED.
I was trying to find an alternative path that can bring the entry into the swap cache during a page fault while the SYNC IO path is looping on HAS_SWAP_CACHE. Kairui was able to identify that in the current do_page_fault() path, the rmap and fork cases wouldn't be able to modify the swap cache and cause a problem. MADV_WILLNEED is an excellent example. Thank you for finding it.
They may happen in parallel. That is, one task is busy looping while another task reads the swap entry into the swap cache.
Agree.
Chris
On 18.02.24 08:59, Huang, Ying wrote:
The swap entry may be kept in swap cache for a long time. For example, it may be read into swap cache via MADV_WILLNEED.
Right, we'd have to check for the swapcache.
I briefly thought about factoring out what we have in __read_swap_cache_async() and reusing it here. It's a similar problem to solve, with quite a lot of duplicate code.
But it's not worth the churn in a simple fix. We could explore that option as a cleanup on top.
Kairui Song ryncsn@gmail.com writes:
Other methods like increasing the swap count don't seem to be a good idea after some tests, that will cause racers to fall back to use the swap cache again. Parallel swapin using different methods leads to a much more complex scenario.
The swap entry may be put in the swap cache by some parallel code path anyway. So, we always need to take that into account when reasoning about this code.
[...]
Update from V2:
- Add a schedule() if raced, to prevent repeated page faults from wasting CPU and adding noise to perf statistics.
- Use a bool to mark the special case instead of reusing existing variables, fixing error handling [Minchan Kim].
V2: https://lore.kernel.org/all/20240206182559.32264-1-ryncsn@gmail.com/
Update from V1:
- Add some words on the ZRAM case; it discards swap content on swap_free, so the race window is a bit different, but the cure is the same. [Barry Song]
- Update comments to make them cleaner [Huang, Ying]
- Add a function placeholder to fix the CONFIG_SWAP=n build [SeongJae Park]
- Update the commit message and summary to refer to SWP_SYNCHRONOUS_IO instead of "direct swapin path" [Yu Zhao]
- Update commit message.
- Collect Review and Acks.
V1: https://lore.kernel.org/all/20240205110959.4021-1-ryncsn@gmail.com/
 include/linux/swap.h |  5 +++++
 mm/memory.c          | 20 ++++++++++++++++++++
 mm/swap.h            |  5 +++++
 mm/swapfile.c        | 13 +++++++++++++
 4 files changed, 43 insertions(+)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4db00ddad261..8d28f6091a32 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -549,6 +549,11 @@ static inline int swap_duplicate(swp_entry_t swp)
 	return 0;
 }
 
+static inline int swapcache_prepare(swp_entry_t swp)
+{
+	return 0;
+}
+
 static inline void swap_free(swp_entry_t swp)
 {
 }
diff --git a/mm/memory.c b/mm/memory.c
index 7e1f4849463a..7059230d0a54 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3799,6 +3799,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
+	bool need_clear_cache = false;
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
@@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
+			/*
+			 * Prevent parallel swapin from proceeding with
+			 * the cache flag. Otherwise, another thread may
+			 * finish swapin first, free the entry, and swapout
+			 * reusing the same entry. It's undetectable as
+			 * pte_same() returns true due to entry reuse.
+			 */
+			if (swapcache_prepare(entry)) {
+				/* Relax a bit to prevent rapid repeated page faults */
+				schedule();
The current task may be chosen in schedule(). So, I think that we should use cond_resched() here.
+				goto out;
+			}
+			need_clear_cache = true;
+
 			/* skip swapcache */
 			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
 						vma, vmf->address, false);
@@ -4117,6 +4132,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 out:
+	/* Clear the swap cache pin for direct swapin after PTL unlock */
+	if (need_clear_cache)
+		swapcache_clear(si, entry);
 	if (si)
 		put_swap_device(si);
 	return ret;
@@ -4131,6 +4149,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		folio_unlock(swapcache);
 		folio_put(swapcache);
 	}
+	if (need_clear_cache)
+		swapcache_clear(si, entry);
 	if (si)
 		put_swap_device(si);
 	return ret;
diff --git a/mm/swap.h b/mm/swap.h
index 758c46ca671e..fc2f6ade7f80 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -41,6 +41,7 @@ void __delete_from_swap_cache(struct folio *folio,
 void delete_from_swap_cache(struct folio *folio);
 void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
+void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry);
 struct folio *swap_cache_get_folio(swp_entry_t entry,
 		struct vm_area_struct *vma, unsigned long addr);
 struct folio *filemap_get_incore_folio(struct address_space *mapping,
@@ -97,6 +98,10 @@ static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
 	return 0;
 }
 
+static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
+{
+}
+
 static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
 		struct vm_area_struct *vma, unsigned long addr)
 {
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 556ff7347d5f..746aa9da5302 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3365,6 +3365,19 @@ int swapcache_prepare(swp_entry_t entry)
 	return __swap_duplicate(entry, SWAP_HAS_CACHE);
 }
 
+void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
+{
+	struct swap_cluster_info *ci;
+	unsigned long offset = swp_offset(entry);
+	unsigned char usage;
+
+	ci = lock_cluster_or_swap_info(si, offset);
+	usage = __swap_entry_free_locked(si, offset, SWAP_HAS_CACHE);
+	unlock_cluster_or_swap_info(si, ci);
+	if (!usage)
+		free_swap_slot(entry);
+}
+
 struct swap_info_struct *swp_swap_info(swp_entry_t entry)
 {
 	return swap_type_to_swap_info(swp_type(entry));

--
Best Regards,
Huang, Ying
On Sun, Feb 18, 2024 at 4:34 PM Huang, Ying ying.huang@intel.com wrote:
[...]
if (swapcache_prepare(entry)) {
/* Relax a bit to prevent rapid repeated page faults */
schedule();
The current task may be chosen in schedule(). So, I think that we should use cond_resched() here.
I think if we are worried about the current task getting chosen again, we can use schedule_timeout_uninterruptible(1) here. Isn't cond_resched() still __schedule(), and it can even get omitted, so it should be "weaker" IIUC?
Kairui Song ryncsn@gmail.com writes:
[...]
The current task may be chosen in schedule(). So, I think that we should use cond_resched() here.
I think if we are worried about the current task getting chosen again, we can use schedule_timeout_uninterruptible(1) here. Isn't cond_resched() still __schedule(), and it can even get omitted, so it should be "weaker" IIUC?
schedule_timeout_uninterruptible(1) will introduce 1ms latency for the second task. That may kill performance of some workloads.
--
Best Regards,
Huang, Ying
"Huang, Ying" ying.huang@intel.com writes:
[...]
The current task may be chosen in schedule(). So, I think that we should use cond_resched() here.
I think if we are worried about the current task getting chosen again, we can use schedule_timeout_uninterruptible(1) here. Isn't cond_resched() still __schedule(), and it can even get omitted, so it should be "weaker" IIUC?
schedule_timeout_uninterruptible(1) will introduce 1ms latency for the second task. That may kill performance of some workloads.
Just found that the cond_resched() in __read_swap_cache_async() has been changed to schedule_timeout_uninterruptible(1) to fix a livelock. Details are in the description of commit 029c4628b2eb ("mm: swap: get rid of livelock in swapin readahead"). I think a similar issue may happen here too. So, we must use schedule_timeout_uninterruptible(1) here until some better idea becomes available.
--
Best Regards,
Huang, Ying
On Mon, Feb 19, 2024 at 10:35 AM Huang, Ying ying.huang@intel.com wrote:
"Huang, Ying" ying.huang@intel.com writes:
Kairui Song ryncsn@gmail.com writes:
On Sun, Feb 18, 2024 at 4:34 PM Huang, Ying ying.huang@intel.com wrote:
Kairui Song ryncsn@gmail.com writes:
From: Kairui Song kasong@tencent.com
When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads swapin the same entry at the same time, they get different pages (A, B). Before one thread (T0) finishes the swapin and installs page (A) to the PTE, another thread (T1) could finish swapin of page (B), swap_free the entry, then swap out the possibly modified page reusing the same entry. It breaks the pte_same check in (T0) because PTE value is unchanged, causing ABA problem. Thread (T0) will install a stalled page (A) into the PTE and cause data corruption.
One possible callstack is like this:
CPU0 CPU1
do_swap_page() do_swap_page() with same entry <direct swapin path> <direct swapin path> <alloc page A> <alloc page B> swap_read_folio() <- read to page A swap_read_folio() <- read to page B <slow on later locks or interrupt> <finished swapin first> ... set_pte_at() swap_free() <- entry is free <write to page B, now page A stalled> <swap out page B to same swap entry> pte_same() <- Check pass, PTE seems unchanged, but page A is stalled! swap_free() <- page B content lost! set_pte_at() <- staled page A installed!
And besides, for ZRAM, swap_free() allows the swap device to discard the entry content, so even if page (B) is not modified, if swap_read_folio() on CPU0 happens later than swap_free() on CPU1, it may also cause data loss.
To fix this, reuse swapcache_prepare which will pin the swap entry using the cache flag, and allow only one thread to pin it. Release the pin after PT unlocked. Racers will simply wait since it's a rare and very short event. A schedule() call is added to avoid wasting too much CPU or adding too much noise to perf statistics
Other methods like increasing the swap count don't seem to be a good idea after some tests, that will cause racers to fall back to use the swap cache again. Parallel swapin using different methods leads to a much more complex scenario.
The swap entry may be put in swap cache by some parallel code path anyway. So, we always need to consider that when reasoning the code.
Reproducer:
This race issue can be triggered easily using a well constructed reproducer and patched brd (with a delay in read path) [1]:
With latest 6.8 mainline, race caused data loss can be observed easily: $ gcc -g -lpthread test-thread-swap-race.c && ./a.out Polulating 32MB of memory region... Keep swapping out... Starting round 0... Spawning 65536 workers... 32746 workers spawned, wait for done... Round 0: Error on 0x5aa00, expected 32746, got 32743, 3 data loss! Round 0: Error on 0x395200, expected 32746, got 32743, 3 data loss! Round 0: Error on 0x3fd000, expected 32746, got 32737, 9 data loss! Round 0 Failed, 15 data loss!
This reproducer spawns multiple threads sharing the same memory region using a small swap device. Every two threads updates mapped pages one by one in opposite direction trying to create a race, with one dedicated thread keep swapping out the data out using madvise.
The reproducer created a reproduce rate of about once every 5 minutes, so the race should be totally possible in production.
After this patch, I ran the reproducer for over a few hundred rounds and no data loss observed.
Performance overhead is minimal, microbenchmark swapin 10G from 32G zram:
Before: 10934698 us After: 11157121 us Non-direct: 13155355 us (Dropping SWP_SYNCHRONOUS_IO flag)
Fixes: 0bcac06f27d7 ("mm, swap: skip swapcache for swapin of synchronous device") Link: https://github.com/ryncsn/emm-test-project/tree/master/swap-stress-race [1] Reported-by: "Huang, Ying" ying.huang@intel.com Closes: https://lore.kernel.org/lkml/87bk92gqpx.fsf_-_@yhuang6-desk2.ccr.corp.intel.... Signed-off-by: Kairui Song kasong@tencent.com Cc: stable@vger.kernel.org
Update from V2:
- Add a schedule() if raced to prevent repeated page faults wasting CPU and add noise to perf statistics.
- Use a bool to state the special case instead of reusing existing variables fixing error handling [Minchan Kim].
V2: https://lore.kernel.org/all/20240206182559.32264-1-ryncsn@gmail.com/
Update from V1:
- Add some words on ZRAM case, it will discard swap content on swap_free so the race window is a bit different but cure is the same. [Barry Song]
- Update comments make it cleaner [Huang, Ying]
- Add a function place holder to fix CONFIG_SWAP=n built [SeongJae Park]
- Update the commit message and summary, refer to SWP_SYNCHRONOUS_IO instead of "direct swapin path" [Yu Zhao]
- Update commit message.
- Collect Review and Acks.
V1: https://lore.kernel.org/all/20240205110959.4021-1-ryncsn@gmail.com/
include/linux/swap.h | 5 +++++ mm/memory.c | 20 ++++++++++++++++++++ mm/swap.h | 5 +++++ mm/swapfile.c | 13 +++++++++++++ 4 files changed, 43 insertions(+)
diff --git a/include/linux/swap.h b/include/linux/swap.h index 4db00ddad261..8d28f6091a32 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -549,6 +549,11 @@ static inline int swap_duplicate(swp_entry_t swp) return 0; }
+static inline int swapcache_prepare(swp_entry_t swp) +{
return 0;
+}
static inline void swap_free(swp_entry_t swp) { } diff --git a/mm/memory.c b/mm/memory.c index 7e1f4849463a..7059230d0a54 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3799,6 +3799,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) struct page *page; struct swap_info_struct *si = NULL; rmap_t rmap_flags = RMAP_NONE;
bool need_clear_cache = false; bool exclusive = false; swp_entry_t entry; pte_t pte;
@@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) if (!folio) { if (data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1) {
/*
* Prevent parallel swapin from proceeding with
* the cache flag. Otherwise, another thread may
* finish swapin first, free the entry, and swapout
* reusing the same entry. It's undetectable as
* pte_same() returns true due to entry reuse.
*/
if (swapcache_prepare(entry)) {
/* Relax a bit to prevent rapid repeated page faults */
schedule();
The current task may be chosen in schedule(). So, I think that we should use cond_resched() here.
I think if we are worried about the current task getting chosen again, we can use schedule_timeout_uninterruptible(1) here. Isn't cond_resched() still __schedule(), and it can even get omitted, so it should be "weaker" IIUC?
schedule_timeout_uninterruptible(1) will introduce 1ms latency for the second task. That may kill performance of some workloads.
It actually calls schedule_timeout(), so it should be a 1 jiffy latency, not 1 ms, right?
/**
 * schedule_timeout - sleep until timeout
 * @timeout: timeout value in jiffies
 ...
But I think what we really want here is actually the set_current_state() to force yielding the CPU for a short period. The latency should be mild.
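For reference, schedule_timeout_uninterruptible() in kernel/time/timer.c is exactly that combination: it sets the task state before sleeping:

	signed long __sched schedule_timeout_uninterruptible(signed long timeout)
	{
		/* Sleep for the given number of jiffies, not wakeable by signals. */
		__set_current_state(TASK_UNINTERRUPTIBLE);
		return schedule_timeout(timeout);
	}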
Just found that the cond_resched() in __read_swap_cache_async() has been changed to schedule_timeout_uninterruptible(1) to fix a livelock. Details are in the description of commit 029c4628b2eb ("mm: swap: get rid of livelock in swapin readahead"). I think a similar issue may happen here too. So, we must use schedule_timeout_uninterruptible(1) here until some better idea becomes available.
Indeed, I'll switch to schedule_timeout_uninterruptible(1). I've tested and posted results with schedule_timeout_uninterruptible(1) before, and it looked fine, or even better.
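Presumably the hunk would then look something like this (a sketch of the discussed change applied to the V3 hunk, not a posted patch):

			if (swapcache_prepare(entry)) {
				/*
				 * Relax a bit to prevent rapid repeated page
				 * faults; sleeping with the task state set
				 * avoids the livelock a bare schedule() or
				 * cond_resched() could run into.
				 */
				schedule_timeout_uninterruptible(1);
				goto out;
			}
			need_clear_cache = true;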
On Mon, Feb 19, 2024 at 11:09 AM Kairui Song ryncsn@gmail.com wrote:
[...]
schedule_timeout_uninterruptible(1) will introduce 1ms latency for the second task. That may kill performance of some workloads.
It actually calls schedule_timeout(), so it should be a 1 jiffy latency, not 1 ms, right?
/**
 * schedule_timeout - sleep until timeout
 * @timeout: timeout value in jiffies
 ...
But I think what we really want here is actually the set_current_state() to force yielding the CPU for a short period. The latency should be mild.
I just forgot that 1 jiffy >= 1 ms here, and the uninterruptible state should make it unable to wake up until the timeout...
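(Aside, an illustration rather than anything from the patch: how long that one-jiffy sleep lasts depends on CONFIG_HZ.)

	#include <linux/jiffies.h>
	#include <linux/printk.h>

	/* Illustration only: 1 jiffy is 1 ms at HZ=1000, 4 ms at HZ=250, 10 ms at HZ=100. */
	static void show_one_jiffy_in_ms(void)
	{
		pr_info("1 jiffy = %u ms\n", jiffies_to_msecs(1));
	}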
Just found that the cond_resched() in __read_swap_cache_async() has been changed to schedule_timeout_uninterruptible(1) to fix a livelock. Details are in the description of commit 029c4628b2eb ("mm: swap: get rid of livelock in swapin readahead"). I think a similar issue may happen here too. So, we must use schedule_timeout_uninterruptible(1) here until some better idea becomes available.
Indeed, I'll switch to schedule_timeout_uninterruptible(1). I've tested and posted results with schedule_timeout_uninterruptible(1) before, and it looked fine, or even better.
But the outcome should still be much the same: the minor/major fault ratio in the previous test result [1] shows that even with threads set up to race on purpose, the chance of hitting the race on ZRAM is low. And thanks for the info on that other commit!
[1] https://lore.kernel.org/all/CAMgjq7BvTJmxrWQOJvkLt4g_jnvmx07NdU63sGeRMGde4Ov...