SWP_FS is used to make swap_{read,write}page() go through the filesystem, and it's only used for swap files over NFS. So, !SWP_FS means non NFS for now, it could be either file backed or device backed. Something similar goes with legacy SWP_FILE.
So in order to achieve the goal of the original patch, SWP_BLKDEV should be used instead.
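To make the flag logic concrete, here is a minimal userspace sketch (the bit values are made up purely for illustration; only the flag names match include/linux/swap.h). It shows that the old !(flags & SWP_FS) test also admits an ordinary swapfile sitting on a block-device filesystem, which is exactly the case that gets corrupted:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative bit values only; the real flags live in include/linux/swap.h. */
#define SWP_BLKDEV (1u << 0)	/* swap area is a raw block device           */
#define SWP_FS     (1u << 1)	/* swap I/O goes through fs ops (NFS case)   */

static bool old_check(unsigned int flags) { return !(flags & SWP_FS); }
static bool new_check(unsigned int flags) { return flags & SWP_BLKDEV; }

int main(void)
{
	const struct { const char *name; unsigned int flags; } si[] = {
		{ "raw block device",     SWP_BLKDEV },
		{ "swapfile on xfs/ext4", 0          },	/* neither flag set */
		{ "swapfile over NFS",    SWP_FS     },
	};

	for (unsigned int i = 0; i < sizeof(si) / sizeof(si[0]); i++)
		printf("%-22s may use huge cluster? old=%d new=%d\n",
		       si[i].name, old_check(si[i].flags), new_check(si[i].flags));
	return 0;
}

Only the raw block device passes the new check; the old check also (wrongly) allowed huge cluster allocation for the possibly fragmented swapfile.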
FS corruption can be observed with SSD device + XFS + fragmented swapfile due to CONFIG_THP_SWAP=y.
I reproduced the issue with the following details:
Environment: QEMU + upstream kernel + buildroot + NVMe (2 GB)
Kernel config: CONFIG_BLK_DEV_NVME=y CONFIG_THP_SWAP=y
Some reproducible steps:
mkfs.xfs -f /dev/nvme0n1
mkdir /tmp/mnt
mount /dev/nvme0n1 /tmp/mnt
bs="32k"
sz="1024m"    # doesn't matter too much, I also tried 16m
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -F -S 0 -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fsync" /tmp/mnt/sw
mkswap /tmp/mnt/sw
swapon /tmp/mnt/sw
stress --vm 2 --vm-bytes 600M # doesn't matter too much as well
Symptoms:
- FS corruption (e.g. checksum failure)
- memory corruption at: 0xd2808010
- segfault
Fixes: f0eea189e8e9 ("mm, THP, swap: Don't allocate huge cluster for file backed swap device")
Fixes: 38d8b4e6bdc8 ("mm, THP, swap: delay splitting THP during swap out")
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Rafael Aquini <aquini@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
---
v1: https://lore.kernel.org/r/20200819195613.24269-1-hsiangkao@redhat.com
changes since v1:
- improve commit message description
Hi Andrew,
Kindly consider this one instead if there are no other concerns...
Thanks, Gao Xiang
mm/swapfile.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6c26916e95fd..2937daf3ca02 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1074,7 +1074,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 			goto nextsi;
 		}
 		if (size == SWAPFILE_CLUSTER) {
-			if (!(si->flags & SWP_FS))
+			if (si->flags & SWP_BLKDEV)
 				n_ret = swap_alloc_cluster(si, swp_entries);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
Gao Xiang <hsiangkao@redhat.com> writes:
SWP_FS is used to make swap_{read,write}page() go through the filesystem, and it's only used for swap files over NFS. So, !SWP_FS means non NFS for now, it could be either file backed or device backed. Something similar goes with legacy SWP_FILE.
So in order to achieve the goal of the original patch, SWP_BLKDEV should be used instead.
FS corruption can be observed with SSD device + XFS + fragmented swapfile due to CONFIG_THP_SWAP=y.
I reproduced the issue with the following details:
Environment: QEMU + upstream kernel + buildroot + NVMe (2 GB)
Kernel config: CONFIG_BLK_DEV_NVME=y CONFIG_THP_SWAP=y
Some reproducible steps:
mkfs.xfs -f /dev/nvme0n1
mkdir /tmp/mnt
mount /dev/nvme0n1 /tmp/mnt
bs="32k"
sz="1024m"    # doesn't matter too much, I also tried 16m
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -F -S 0 -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fsync" /tmp/mnt/sw
mkswap /tmp/mnt/sw
swapon /tmp/mnt/sw
stress --vm 2 --vm-bytes 600M # doesn't matter too much as well
Symptoms:
- FS corruption (e.g. checksum failure)
- memory corruption at: 0xd2808010
- segfault
Fixes: f0eea189e8e9 ("mm, THP, swap: Don't allocate huge cluster for file backed swap device")
Fixes: 38d8b4e6bdc8 ("mm, THP, swap: delay splitting THP during swap out")
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Rafael Aquini <aquini@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
Thanks!
Reviewed-by: "Huang, Ying" ying.huang@intel.com
Best Regards, Huang, Ying
On Thu, Aug 20, 2020 at 12:53:23PM +0800, Gao Xiang wrote:
SWP_FS is used to make swap_{read,write}page() go through the filesystem, and it's only used for swap files over NFS. So, !SWP_FS means non NFS for now, it could be either file backed or device backed. Something similar goes with legacy SWP_FILE.
So in order to achieve the goal of the original patch, SWP_BLKDEV should be used instead.
This is clearly confusing. I think we need to rename SWP_FS to SWP_FS_OPS.
More generally, the swap code seems insane. I appreciate that it's an inherited design from over twenty-five years ago, and nobody wants to touch it, but it's crazy that it cares about how the filesystem has mapped file blocks to disk blocks. I understand that the filesystem has to know not to allocate memory in order to free memory, but this is already something filesystems have to understand. It's also useful for filesystems to know that this is data which has no meaning after a power cycle (so it doesn't need to be journalled or snapshotted or ...), but again, that's useful functionality which we could stand to present to userspace anyway.
I suppose the tricky thing about it is that working on the swap code is not as sexy as working on a filesystem, and doing the swap code right is essentially writing a filesystem, so everybody who's capable already has something better to do.
Anyway, Gao, please can you submit a follow-on patch to rename SWP_FS?
Hi Matthew,
On Thu, Aug 20, 2020 at 12:34:48PM +0100, Matthew Wilcox wrote:
On Thu, Aug 20, 2020 at 12:53:23PM +0800, Gao Xiang wrote:
SWP_FS is used to make swap_{read,write}page() go through the filesystem, and it's only used for swap files over NFS. So, !SWP_FS means non NFS for now, it could be either file backed or device backed. Something similar goes with legacy SWP_FILE.
So in order to achieve the goal of the original patch, SWP_BLKDEV should be used instead.
This is clearly confusing. I think we need to rename SWP_FS to SWP_FS_OPS.
More generally, the swap code seems insane. I appreciate that it's an inherited design from over twenty-five years ago, and nobody wants to touch it, but it's crazy that it cares about how the filesystem has mapped file blocks to disk blocks. I understand that the filesystem has to know not to allocate memory in order to free memory, but this is already something filesystems have to understand. It's also useful for filesystems to know that this is data which has no meaning after a power cycle (so it doesn't need to be journalled or snapshotted or ...), but again, that's useful functionality which we could stand to present to userspace anyway.
I suppose the tricky thing about it is that working on the swap code is not as sexy as working on a filesystem, and doing the swap code right is essentially writing a filesystem, so everybody who's capable already has something better to do.
Yeah, I agree with your point. After looking into the swap code a bit (swapfile.c and swap.c), I think such code really needs to be cleaned up... But I lack motivation for this, since I can't guarantee I won't introduce new regressions, and honestly I don't care much about this piece of code.
Maybe some new projects based on this could help clean that up as well. :)
Anyway, we really need a quick fix to avoid such FS corruption, which looks dangerous on the consumer side.
Anyway, Gao, please can you submit a follow-on patch to rename SWP_FS?
Ok, anyway, that is a separate matter and may need another thread. I will find some time to send out a patch for further discussion later.
Thanks, Gao Xiang
On Wed, Aug 19, 2020 at 9:54 PM Gao Xiang <hsiangkao@redhat.com> wrote:
SWP_FS is used to make swap_{read,write}page() go through the filesystem, and it's only used for swap files over NFS. So, !SWP_FS means non NFS for now, it could be either file backed or device backed. Something similar goes with legacy SWP_FILE.
So in order to achieve the goal of the original patch, SWP_BLKDEV should be used instead.
FS corruption can be observed with SSD device + XFS + fragmented swapfile due to CONFIG_THP_SWAP=y.
I reproduced the issue with the following details:
Environment: QEMU + upstream kernel + buildroot + NVMe (2 GB)
Kernel config: CONFIG_BLK_DEV_NVME=y CONFIG_THP_SWAP=y
Some reproducible steps:
mkfs.xfs -f /dev/nvme0n1
mkdir /tmp/mnt
mount /dev/nvme0n1 /tmp/mnt
bs="32k"
sz="1024m"    # doesn't matter too much, I also tried 16m
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -F -S 0 -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fsync" /tmp/mnt/sw
mkswap /tmp/mnt/sw
swapon /tmp/mnt/sw
stress --vm 2 --vm-bytes 600M # doesn't matter too much as well
Symptoms:
- FS corruption (e.g. checksum failure)
- memory corruption at: 0xd2808010
- segfault
Fixes: f0eea189e8e9 ("mm, THP, swap: Don't allocate huge cluster for file backed swap device")
Fixes: 38d8b4e6bdc8 ("mm, THP, swap: delay splitting THP during swap out")
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Rafael Aquini <aquini@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
v1: https://lore.kernel.org/r/20200819195613.24269-1-hsiangkao@redhat.com
changes since v1:
- improve commit message description
Thanks for incorporating this.
Reviewed-by: Yang Shi <shy828301@gmail.com>
Hi Andrew,
Kindly consider this one instead if there are no other concerns...
Thanks, Gao Xiang
mm/swapfile.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6c26916e95fd..2937daf3ca02 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1074,7 +1074,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 			goto nextsi;
 		}
 		if (size == SWAPFILE_CLUSTER) {
-			if (!(si->flags & SWP_FS))
+			if (si->flags & SWP_BLKDEV)
 				n_ret = swap_alloc_cluster(si, swp_entries);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
-- 2.18.1
On Thu, Aug 20, 2020 at 12:53:23PM +0800, Gao Xiang wrote:
SWP_FS is used to make swap_{read,write}page() go through the filesystem, and it's only used for swap files over NFS. So, !SWP_FS means non NFS for now, it could be either file backed or device backed. Something similar goes with legacy SWP_FILE.
So in order to achieve the goal of the original patch, SWP_BLKDEV should be used instead.
FS corruption can be observed with SSD device + XFS + fragmented swapfile due to CONFIG_THP_SWAP=y.
I reproduced the issue with the following details:
Environment: QEMU + upstream kernel + buildroot + NVMe (2 GB)
Kernel config: CONFIG_BLK_DEV_NVME=y CONFIG_THP_SWAP=y
Some reproducible steps:
mkfs.xfs -f /dev/nvme0n1
mkdir /tmp/mnt
mount /dev/nvme0n1 /tmp/mnt
bs="32k"
sz="1024m"    # doesn't matter too much, I also tried 16m
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -F -S 0 -b $bs 0 $sz" -c "fdatasync" /tmp/mnt/sw
xfs_io -f -c "pwrite -R -b $bs 0 $sz" -c "fsync" /tmp/mnt/sw
mkswap /tmp/mnt/sw
swapon /tmp/mnt/sw
stress --vm 2 --vm-bytes 600M # doesn't matter too much as well
Symptoms:
- FS corruption (e.g. checksum failure)
- memory corruption at: 0xd2808010
- segfault
Fixes: f0eea189e8e9 ("mm, THP, swap: Don't allocate huge cluster for file backed swap device")
Fixes: 38d8b4e6bdc8 ("mm, THP, swap: delay splitting THP during swap out")
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Rafael Aquini <aquini@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
v1: https://lore.kernel.org/r/20200819195613.24269-1-hsiangkao@redhat.com
changes since v1:
- improve commit message description
Hi Andrew,
Kindly consider this one instead if there are no other concerns...
Thanks, Gao Xiang
mm/swapfile.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6c26916e95fd..2937daf3ca02 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1074,7 +1074,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 			goto nextsi;
 		}
 		if (size == SWAPFILE_CLUSTER) {
-			if (!(si->flags & SWP_FS))
+			if (si->flags & SWP_BLKDEV)
 				n_ret = swap_alloc_cluster(si, swp_entries);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
-- 2.18.1
Acked-by: Rafael Aquini <aquini@redhat.com>
On Thu, Aug 20, 2020 at 12:53:23PM +0800, Gao Xiang wrote:
SWP_FS is used to make swap_{read,write}page() go through the filesystem, and it's only used for swap files over NFS. So, !SWP_FS means non NFS for now, it could be either file backed or device backed. Something similar goes with legacy SWP_FILE.
So in order to achieve the goal of the original patch, SWP_BLKDEV should be used instead.
FS corruption can be observed with SSD device + XFS + fragmented swapfile due to CONFIG_THP_SWAP=y.
I reproduced the issue with the following details:
Environment: QEMU + upstream kernel + buildroot + NVMe (2 GB)
Kernel config: CONFIG_BLK_DEV_NVME=y CONFIG_THP_SWAP=y
Ok, so at its core this is a swap file extent versus THP swap cluster alignment issue?
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6c26916e95fd..2937daf3ca02 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1074,7 +1074,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 			goto nextsi;
 		}
 		if (size == SWAPFILE_CLUSTER) {
-			if (!(si->flags & SWP_FS))
+			if (si->flags & SWP_BLKDEV)
 				n_ret = swap_alloc_cluster(si, swp_entries);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
IOWs, if you don't make this change, does the corruption problem go away if you align swap extents in iomap_swapfile_add_extent() to (SWAPFILE_CLUSTER * PAGE_SIZE) instead of just PAGE_SIZE?
I.e. if the swapfile extents are aligned correctly to huge page swap cluster size and alignment, does the swap clustering optimisations for swapping THP pages work correctly? And, if so, is there any performance benefit we get from enabling proper THP swap clustering on swapfiles?
Cheers,
Dave.
Hi Dave,
On Fri, Aug 21, 2020 at 09:34:46AM +1000, Dave Chinner wrote:
On Thu, Aug 20, 2020 at 12:53:23PM +0800, Gao Xiang wrote:
SWP_FS is used to make swap_{read,write}page() go through the filesystem, and it's only used for swap files over NFS. So, !SWP_FS means non NFS for now, it could be either file backed or device backed. Something similar goes with legacy SWP_FILE.
So in order to achieve the goal of the original patch, SWP_BLKDEV should be used instead.
FS corruption can be observed with SSD device + XFS + fragmented swapfile due to CONFIG_THP_SWAP=y.
I reproduced the issue with the following details:
Environment: QEMU + upstream kernel + buildroot + NVMe (2 GB)
Kernel config: CONFIG_BLK_DEV_NVME=y CONFIG_THP_SWAP=y
Ok, so at its core this is a swap file extent versus THP swap cluster alignment issue?
I think yes.
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6c26916e95fd..2937daf3ca02 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1074,7 +1074,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 			goto nextsi;
 		}
 		if (size == SWAPFILE_CLUSTER) {
-			if (!(si->flags & SWP_FS))
+			if (si->flags & SWP_BLKDEV)
 				n_ret = swap_alloc_cluster(si, swp_entries);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
IOWs, if you don't make this change, does the corruption problem go away if you align swap extents in iomap_swapfile_add_extent() to (SWAPFILE_CLUSTER * PAGE_SIZE) instead of just PAGE_SIZE?
I.e. if the swapfile extents are aligned correctly to huge page swap cluster size and alignment, does the swap clustering optimisations for swapping THP pages work correctly? And, if so, is there any performance benefit we get from enabling proper THP swap clustering on swapfiles?
Yeah, I once thought about something similar as well. My thoughts for now are:
- First, THP swap doesn't claim to support such swapfiles for now, and the original author explicitly tried to avoid the whole thing in
f0eea189e8e9 ("mm, THP, swap: Don't allocate huge cluster for file backed swap device")
So such a thing would be considered a new feature and would need more testing at least. But for now I think we just need a quick fix for commit f0eea189e8e9 to avoid the regression and for backporting.
- It is hard for users to keep a swapfile in SWAPFILE_CLUSTER * PAGE_SIZE extents, especially when the disk is fragmented or the filesystem has some on-disk metadata limitation. It's very hard for users to arrange the physical address alignment and fragmentation of their swapfile for now.
So my point is, if this is considered in the future (supporting THP swap on swapfiles), it needs more careful consideration and decisions (e.g. stability, performance, simplicity, etc.). For now, this is just a fix for an existing regression; it corrects the original fix and completes what the original author intended.
Thanks, Gao Xiang
Cheers,
Dave.
Dave Chinner david@fromorbit.com
On Fri, Aug 21, 2020 at 08:21:45AM +0800, Gao Xiang wrote:
Hi Dave,
On Fri, Aug 21, 2020 at 09:34:46AM +1000, Dave Chinner wrote:
On Thu, Aug 20, 2020 at 12:53:23PM +0800, Gao Xiang wrote:
SWP_FS is used to make swap_{read,write}page() go through the filesystem, and it's only used for swap files over NFS. So, !SWP_FS means non NFS for now, it could be either file backed or device backed. Something similar goes with legacy SWP_FILE.
So in order to achieve the goal of the original patch, SWP_BLKDEV should be used instead.
FS corruption can be observed with SSD device + XFS + fragmented swapfile due to CONFIG_THP_SWAP=y.
I reproduced the issue with the following details:
Environment: QEMU + upstream kernel + buildroot + NVMe (2 GB)
Kernel config: CONFIG_BLK_DEV_NVME=y CONFIG_THP_SWAP=y
Ok, so at its core this is a swap file extent versus THP swap cluster alignment issue?
I think yes.
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6c26916e95fd..2937daf3ca02 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1074,7 +1074,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 			goto nextsi;
 		}
 		if (size == SWAPFILE_CLUSTER) {
-			if (!(si->flags & SWP_FS))
+			if (si->flags & SWP_BLKDEV)
 				n_ret = swap_alloc_cluster(si, swp_entries);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
IOWs, if you don't make this change, does the corruption problem go away if you align swap extents in iomap_swapfile_add_extent() to (SWAPFILE_CLUSTER * PAGE_SIZE) instead of just PAGE_SIZE?
I.e. if the swapfile extents are aligned correctly to huge page swap cluster size and alignment, does the swap clustering optimisations for swapping THP pages work correctly? And, if so, is there any performance benefit we get from enabling proper THP swap clustering on swapfiles?
Yeah, I once thought about something similar as well. My thoughts for now are:
First, THP swap doesn't claim to support such swapfiles for now, and the original author explicitly tried to avoid the whole thing in
f0eea189e8e9 ("mm, THP, swap: Don't allocate huge cluster for file backed swap device")
So such a thing would be considered a new feature and would need more testing at least. But for now I think we just need a quick fix for commit f0eea189e8e9 to avoid the regression and for backporting.
Sure, a quick fix is fine for the current issue. I'm asking questions about the design/architecture of how THP_SWAP is supposed to work and whether swapfiles are violating some other undocumented assumption about swapping THP files...
- It is hard for users to keep a swapfile in SWAPFILE_CLUSTER * PAGE_SIZE extents, especially when the disk is fragmented or the filesystem has some on-disk metadata limitation. It's very hard for users to arrange the physical address alignment and fragmentation of their swapfile for now.
This isn't something users control. The swapfile extent mapping code rounds the swap extents inwards so that the parts of the on-disk extents that are not aligned or cannot hold a full page are omitted from the ranges of the file that can be swapped to.
i.e. a file whose extents are aligned to 4kB is fine for a 4kB page size machine, but needs additional alignment to allow swapping to work on a 64kB page size machine. Hence the swap code rounds the file extents inwards to PAGE_SIZE to align them correctly. We really should be doing this for the THP page size rather than PAGE_SIZE if THP_SWAP is enabled, regardless of whether swap clustering is enabled or not...
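A minimal sketch of that inward rounding done at THP swap cluster granularity instead of page granularity (the helper name and parameters are made up for illustration; the real change would live in the iomap swapfile extent code, e.g. around iomap_swapfile_add_extent(), and roundup()/rounddown() are the usual kernel macros):

/*
 * Hypothetical illustration only: shrink an extent's usable page range so
 * that it starts and ends on a THP swap cluster boundary. first_page and
 * nr_pages are in units of pages and are already rounded inward to
 * PAGE_SIZE by the existing code.
 */
static bool clamp_extent_to_cluster(unsigned long *first_page,
				    unsigned long *nr_pages,
				    unsigned long cluster_pages)
{
	unsigned long start = roundup(*first_page, cluster_pages);
	unsigned long end = rounddown(*first_page + *nr_pages, cluster_pages);

	if (end <= start)
		return false;	/* extent too small or too misaligned to use */

	*first_page = start;
	*nr_pages = end - start;
	return true;
}

Extents that cannot hold a whole aligned cluster would simply be dropped from the swappable ranges, at the cost of some unusable space in a fragmented swapfile.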
Cheers,
Dave.
Dave Chinner <david@fromorbit.com> writes:
On Fri, Aug 21, 2020 at 08:21:45AM +0800, Gao Xiang wrote:
Hi Dave,
On Fri, Aug 21, 2020 at 09:34:46AM +1000, Dave Chinner wrote:
On Thu, Aug 20, 2020 at 12:53:23PM +0800, Gao Xiang wrote:
SWP_FS is used to make swap_{read,write}page() go through the filesystem, and it's only used for swap files over NFS. So, !SWP_FS means non NFS for now, it could be either file backed or device backed. Something similar goes with legacy SWP_FILE.
So in order to achieve the goal of the original patch, SWP_BLKDEV should be used instead.
FS corruption can be observed with SSD device + XFS + fragmented swapfile due to CONFIG_THP_SWAP=y.
I reproduced the issue with the following details:
Environment: QEMU + upstream kernel + buildroot + NVMe (2 GB)
Kernel config: CONFIG_BLK_DEV_NVME=y CONFIG_THP_SWAP=y
Ok, so at its core this is a swap file extent versus THP swap cluster alignment issue?
I think yes.
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6c26916e95fd..2937daf3ca02 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1074,7 +1074,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 			goto nextsi;
 		}
 		if (size == SWAPFILE_CLUSTER) {
-			if (!(si->flags & SWP_FS))
+			if (si->flags & SWP_BLKDEV)
 				n_ret = swap_alloc_cluster(si, swp_entries);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
IOWs, if you don't make this change, does the corruption problem go away if you align swap extents in iomap_swapfile_add_extent() to (SWAPFILE_CLUSTER * PAGE_SIZE) instead of just PAGE_SIZE?
I.e. if the swapfile extents are aligned correctly to huge page swap cluster size and alignment, does the swap clustering optimisations for swapping THP pages work correctly? And, if so, is there any performance benefit we get from enabling proper THP swap clustering on swapfiles?
Yeah, I once thought about something similar as well. My thoughts for now are:
First, THP swap doesn't claim to support such swapfiles for now, and the original author explicitly tried to avoid the whole thing in
f0eea189e8e9 ("mm, THP, swap: Don't allocate huge cluster for file backed swap device")
So such a thing would be considered a new feature and would need more testing at least. But for now I think we just need a quick fix for commit f0eea189e8e9 to avoid the regression and for backporting.
Sure, a quick fix is fine for the current issue. I'm asking questions about the design/architecture of how THP_SWAP is supposed to work and whether swapfiles are violating some other undocumented assumption about swapping THP files...
The main requirement for THP_SWAP is that a swap cluster needs to be mapped to contiguous block device space.
So yes, in theory it's possible to support THP_SWAP for swapfiles. But I don't know whether people need it or not.
Best Regards, Huang, Ying
- It is hard for users to keep a swapfile in SWAPFILE_CLUSTER * PAGE_SIZE extents, especially when the disk is fragmented or the filesystem has some on-disk metadata limitation. It's very hard for users to arrange the physical address alignment and fragmentation of their swapfile for now.
This isn't something users control. The swapfile extent mapping code rounds the swap extents inwards so that the parts of the on-disk extents that are not aligned or cannot hold a full page are omitted from the ranges of the file that can be swapped to.
i.e. a file whose extents are aligned to 4kB is fine for a 4kB page size machine, but needs additional alignment to allow swapping to work on a 64kB page size machine. Hence the swap code rounds the file extents inwards to PAGE_SIZE to align them correctly. We really should be doing this for the THP page size rather than PAGE_SIZE if THP_SWAP is enabled, regardless of whether swap clustering is enabled or not...
Cheers,
Dave.
On Fri, Aug 21, 2020 at 09:34:46AM +1000, Dave Chinner wrote:
On Thu, Aug 20, 2020 at 12:53:23PM +0800, Gao Xiang wrote:
SWP_FS is used to make swap_{read,write}page() go through the filesystem, and it's only used for swap files over NFS. So, !SWP_FS means non NFS for now, it could be either file backed or device backed. Something similar goes with legacy SWP_FILE.
So in order to achieve the goal of the original patch, SWP_BLKDEV should be used instead.
FS corruption can be observed with SSD device + XFS + fragmented swapfile due to CONFIG_THP_SWAP=y.
I reproduced the issue with the following details:
Environment: QEMU + upstream kernel + buildroot + NVMe (2 GB)
Kernel config: CONFIG_BLK_DEV_NVME=y CONFIG_THP_SWAP=y
Ok, so at its core this is a swap file extent versus THP swap cluster alignment issue?
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6c26916e95fd..2937daf3ca02 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1074,7 +1074,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 			goto nextsi;
 		}
 		if (size == SWAPFILE_CLUSTER) {
-			if (!(si->flags & SWP_FS))
+			if (si->flags & SWP_BLKDEV)
 				n_ret = swap_alloc_cluster(si, swp_entries);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
IOWs, if you don't make this change, does the corruption problem go away if you align swap extents in iomap_swapfile_add_extent() to (SWAPFILE_CLUSTER * PAGE_SIZE) instead of just PAGE_SIZE?
I suspect that will have to come with the 3rd, and final, part of the THP_SWAP work Intel is doing. Right now, basically, all that's accomplished is deferring the THP split step when swapping out, so this change is what we need to avoid stomping outside the file extent boundaries.
I.e. if the swapfile extents are aligned correctly to huge page swap cluster size and alignment, does the swap clustering optimisations for swapping THP pages work correctly? And, if so, is there any performance benefit we get from enabling proper THP swap clustering on swapfiles?
Cheers,
Dave.
Dave Chinner david@fromorbit.com