Changes since v8 [1]:

* Rebase on v4.17-rc2

* Fix get_user_pages_fast() for ZONE_DEVICE pages to revalidate the pte, pmd, and pud after taking references (Jan)

* Kill dax_layout_lock(). With get_user_pages_fast() for ZONE_DEVICE fixed, we can rely on the {pte,pmd}_lock to synchronize dax_layout_busy_page() against new page references (Jan)

* Hold the iolock over repeated invocations of dax_layout_busy_page() so that truncate/hole-punch can make forward progress in the presence of a constant stream of new direct-I/O requests (Jan)
[1]: https://lists.01.org/pipermail/linux-nvdimm/2018-March/015058.html
---
Background:
get_user_pages() on file-backed mappings pins memory pages for access by devices performing dma. However, it only pins the memory pages, not the page-to-file-offset association. If a file is truncated, the pages are mapped out of the file and dma may continue indefinitely into a page that is now owned by the device driver. This breaks coherency of the file vs dma, but the assumption is that if userspace wants the file-space truncated, it does not matter what data is inbound from the device; it is no longer relevant. The only expectation is that dma can safely continue while the filesystem reallocates the block(s).
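For context, a minimal sketch (not code from this series) of the pattern described above: a driver pins user pages with get_user_pages_fast() before programming dma and drops the references once the dma completes. The v4.17-era calling convention is assumed here, and my_dma_to_pages() is a hypothetical device operation.

#include <linux/mm.h>
#include <linux/slab.h>

static void my_dma_to_pages(struct page **pages, int nr); /* hypothetical device op */

static int my_pin_and_dma(unsigned long uaddr, int nr_pages)
{
        struct page **pages;
        int i, pinned;

        pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        /* takes a reference on each page, but not on the page-to-file association */
        pinned = get_user_pages_fast(uaddr, nr_pages, 1 /* write */, pages);
        if (pinned < 0) {
                kfree(pages);
                return pinned;
        }

        my_dma_to_pages(pages, pinned); /* device writes into the pinned pages */

        /* dma complete: drop the references taken by get_user_pages_fast() */
        for (i = 0; i < pinned; i++)
                put_page(pages[i]);
        kfree(pages);
        return 0;
}

Nothing in this path re-checks whether the pinned pages are still part of the file, which is exactly the gap described above.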
Problem:
This expectation that dma can safely continue while the filesystem changes the block map is broken by dax. With dax, the target dma page *is* the filesystem block. The model of leaving the page pinned for dma while truncating the block out of the file means the filesystem is free to reallocate a block under active dma to another file, and the expected data-incoherency situation has now turned into active data corruption.
Solution:
Defer all filesystem operations (fallocate(), truncate()) on a dax-mode file while any page/block in the file is under active dma. This solution assumes that dma is transient. Cases where dma operations are known not to be transient, like RDMA, have been explicitly disabled via commits like 5f1d43de5416 ("IB/core: disable memory registration of filesystem-dax vmas").
The dax_layout_busy_page() routine is called by filesystems, with a lock held against mm faults (i_mmap_lock), to find pinned/busy dax pages. Looking up a busy page also invalidates all mappings so that any subsequent get_user_pages() blocks on i_mmap_lock. The filesystem keeps calling dax_layout_busy_page() until it finally returns no more active pages. This approach assumes that the page pinning is transient; if that assumption is violated, the system would likely have hung from the uncompleted I/O anyway.
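To make that loop concrete, here is a hedged sketch (not the xfs code from this series) of how a filesystem might drain busy dax pages before changing the block map. dax_layout_busy_page() is the helper this series adds; fs_wait_for_dax_page_idle() is a hypothetical wait primitive, and the caller is assumed to already hold the lock that blocks new page faults.

#include <linux/dax.h>
#include <linux/fs.h>

/* hypothetical: sleep until all transient references to @page are dropped */
static int fs_wait_for_dax_page_idle(struct inode *inode, struct page *page);

static int fs_break_dax_layouts(struct inode *inode)
{
        struct page *page;
        int error;

        /*
         * dax_layout_busy_page() also unmaps the file, so any new
         * get_user_pages() attempt goes back through the fault path
         * and blocks on i_mmap_lock held by the caller.
         */
        while ((page = dax_layout_busy_page(inode->i_mapping))) {
                error = fs_wait_for_dax_page_idle(inode, page);
                if (error)
                        return error;
        }
        return 0;
}

The xfs code in this series handles the waiting with additional lock cycling; per the changelog above, the iolock is held across repeated invocations so truncate/hole-punch can make forward progress against a stream of new direct-I/O requests.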
---
Dan Williams (9):
      dax, dm: introduce ->fs_{claim,release}() dax_device infrastructure
      mm, dax: enable filesystems to trigger dev_pagemap ->page_free callbacks
      memremap: split devm_memremap_pages() and memremap() infrastructure
      mm, dev_pagemap: introduce CONFIG_DEV_PAGEMAP_OPS
      mm: fix __gup_device_huge vs unmap
      mm, fs, dax: handle layout changes to pinned dax mappings
      xfs: prepare xfs_break_layouts() to be called with XFS_MMAPLOCK_EXCL
      xfs: prepare xfs_break_layouts() for another layout type
      xfs, dax: introduce xfs_break_dax_layouts()
 drivers/dax/super.c      |   99 ++++++++++++++++++++--
 drivers/md/dm.c          |   57 +++++++++++++
 drivers/nvdimm/pmem.c    |    3 -
 fs/Kconfig               |    2 
 fs/dax.c                 |   97 +++++++++++++++++++++
 fs/ext2/super.c          |    6 +
 fs/ext4/super.c          |    6 +
 fs/xfs/xfs_file.c        |   72 +++++++++++++++-
 fs/xfs/xfs_inode.h       |   16 ++++
 fs/xfs/xfs_ioctl.c       |    8 --
 fs/xfs/xfs_iops.c        |   16 ++--
 fs/xfs/xfs_pnfs.c        |   16 ++--
 fs/xfs/xfs_pnfs.h        |    6 +
 fs/xfs/xfs_super.c       |   20 ++--
 include/linux/dax.h      |   71 +++++++++++++++-
 include/linux/memremap.h |   25 ++----
 include/linux/mm.h       |   71 ++++++++++++----
 kernel/Makefile          |    3 -
 kernel/iomem.c           |  167 +++++++++++++++++++++++++++++++++++++
 kernel/memremap.c        |  208 ++++++----------------------------------------
 mm/Kconfig               |    5 +
 mm/gup.c                 |   37 ++++++--
 mm/hmm.c                 |   13 ---
 mm/swap.c                |    3 -
 24 files changed, 730 insertions(+), 297 deletions(-)
 create mode 100644 kernel/iomem.c
get_user_pages_fast() for device pages is missing the typical validation that all page references have been taken while the mapping was valid. Without this validation, truncate operations cannot reliably coordinate against new page-reference events like O_DIRECT.
Cc: <stable@vger.kernel.org>
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Reported-by: Jan Kara <jack@suse.cz>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 mm/gup.c |   36 ++++++++++++++++++++++++++----------
 1 file changed, 26 insertions(+), 10 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index 76af4cfeaf68..84dd2063ca3d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1456,32 +1456,48 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
         return 1;
 }
 
-static int __gup_device_huge_pmd(pmd_t pmd, unsigned long addr,
+static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
                 unsigned long end, struct page **pages, int *nr)
 {
         unsigned long fault_pfn;
+        int nr_start = *nr;
+
+        fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+        if (!__gup_device_huge(fault_pfn, addr, end, pages, nr))
+                return 0;
 
-        fault_pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-        return __gup_device_huge(fault_pfn, addr, end, pages, nr);
+        if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
+                undo_dev_pagemap(nr, nr_start, pages);
+                return 0;
+        }
+        return 1;
 }
 
-static int __gup_device_huge_pud(pud_t pud, unsigned long addr,
+static int __gup_device_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
                 unsigned long end, struct page **pages, int *nr)
 {
         unsigned long fault_pfn;
+        int nr_start = *nr;
+
+        fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+        if (!__gup_device_huge(fault_pfn, addr, end, pages, nr))
+                return 0;
 
-        fault_pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-        return __gup_device_huge(fault_pfn, addr, end, pages, nr);
+        if (unlikely(pud_val(orig) != pud_val(*pudp))) {
+                undo_dev_pagemap(nr, nr_start, pages);
+                return 0;
+        }
+        return 1;
 }
 #else
-static int __gup_device_huge_pmd(pmd_t pmd, unsigned long addr,
+static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
                 unsigned long end, struct page **pages, int *nr)
 {
         BUILD_BUG();
         return 0;
 }
 
-static int __gup_device_huge_pud(pud_t pud, unsigned long addr,
+static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
                 unsigned long end, struct page **pages, int *nr)
 {
         BUILD_BUG();
@@ -1499,7 +1515,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
                 return 0;
 
         if (pmd_devmap(orig))
-                return __gup_device_huge_pmd(orig, addr, end, pages, nr);
+                return __gup_device_huge_pmd(orig, pmdp, addr, end, pages, nr);
 
         refs = 0;
         page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
@@ -1537,7 +1553,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
                 return 0;
 
         if (pud_devmap(orig))
-                return __gup_device_huge_pud(orig, addr, end, pages, nr);
+                return __gup_device_huge_pud(orig, pudp, addr, end, pages, nr);
 
         refs = 0;
         page = pud_page(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
On Tue 24-04-18 16:33:29, Dan Williams wrote:
> get_user_pages_fast() for device pages is missing the typical validation that all page references have been taken while the mapping was valid. Without this validation, truncate operations cannot reliably coordinate against new page-reference events like O_DIRECT.
>
> Cc: <stable@vger.kernel.org>
> Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
> Reported-by: Jan Kara <jack@suse.cz>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The patch looks good to me. You can add:
Reviewed-by: Jan Kara <jack@suse.cz>
Honza
On Tue, Apr 24, 2018 at 4:33 PM, Dan Williams <dan.j.williams@intel.com> wrote:
> Changes since v8 [1]:
>
> * Rebase on v4.17-rc2
>
> * Fix get_user_pages_fast() for ZONE_DEVICE pages to revalidate the pte, pmd, and pud after taking references (Jan)
>
> * Kill dax_layout_lock(). With get_user_pages_fast() for ZONE_DEVICE fixed, we can rely on the {pte,pmd}_lock to synchronize dax_layout_busy_page() against new page references (Jan)
>
> * Hold the iolock over repeated invocations of dax_layout_busy_page() so that truncate/hole-punch can make forward progress in the presence of a constant stream of new direct-I/O requests (Jan)
I'll push this for soak time in -next if there are no further comments...
On Thu, May 03, 2018 at 04:53:18PM -0700, Dan Williams wrote:
> On Tue, Apr 24, 2018 at 4:33 PM, Dan Williams <dan.j.williams@intel.com> wrote:
> > Changes since v8 [1]:
> > [...]
>
> I'll push this for soak time in -next if there are no further comments...
I don't have any. :D
--D