The patch below does not apply to the 6.1-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to stable@vger.kernel.org.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.1.y
git checkout FETCH_HEAD
git cherry-pick -x e74a68468062d7ebd8ce17069e12ccc64cc6a58c
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to 'stable@vger.kernel.org' --in-reply-to '1678182267252151@kroah.com' --subject-prefix 'PATCH 6.1.y' HEAD^..
Possible dependencies:
e74a68468062 ("arm64: Reset KASAN tag in copy_highpage with HW tags only") d77e59a8fccd ("arm64: mte: Lock a page for MTE tag initialisation") e059853d14ca ("arm64: mte: Fix/clarify the PG_mte_tagged semantics")
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e74a68468062d7ebd8ce17069e12ccc64cc6a58c Mon Sep 17 00:00:00 2001
From: Peter Collingbourne <pcc@google.com>
Date: Tue, 14 Feb 2023 21:09:11 -0800
Subject: [PATCH] arm64: Reset KASAN tag in copy_highpage with HW tags only
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
During page migration, the copy_highpage function is used to copy the page data to the target page. If the source page is a userspace page with MTE tags, the KASAN tag of the target page must have the match-all tag in order to avoid tag check faults during subsequent accesses to the page by the kernel. However, the target page may have been allocated in a number of ways, some of which will use the KASAN allocator and will therefore end up setting the KASAN tag to a non-match-all tag. Therefore, update the target page's KASAN tag to match the source page.
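As background, the "match-all" tag mentioned here is the reserved KASAN tag value (0xff) that compares equal to any pointer tag, so kernel accesses through the linear map never fault on such a page. Resetting a page's tag amounts to roughly the following; this is a conceptual sketch for illustration, not the kernel's exact helper definition:

        static inline void page_kasan_tag_reset(struct page *page)
        {
                /* store the match-all value (0xff) as this page's KASAN tag */
                if (kasan_enabled())
                        page_kasan_tag_set(page, 0xff);
        }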
We ended up unintentionally fixing this issue as a result of a bad merge conflict resolution between commit e059853d14ca ("arm64: mte: Fix/clarify the PG_mte_tagged semantics") and commit 20794545c146 ("arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags""), which preserved a tag reset for PG_mte_tagged pages which was considered to be unnecessary at the time. Because SW tags KASAN uses separate tag storage, update the code to only reset the tags when HW tags KASAN is enabled.
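For orientation, the relevant block of copy_highpage() ends up looking roughly as follows once this change is applied (reconstructed from the diffs in this mail; only the kasan_hw_tags_enabled() check is new):

        if (system_supports_mte() && page_mte_tagged(from)) {
                /*
                 * Only HW tags KASAN keeps its tag in page->flags; SW tags
                 * KASAN uses separate tag storage, so no reset is needed
                 * there.
                 */
                if (kasan_hw_tags_enabled())
                        page_kasan_tag_reset(to);
                /* It's a new page, shouldn't have been tagged yet */
                WARN_ON_ONCE(!try_page_mte_tagging(to));
                mte_copy_page_tags(kto, kfrom);
                set_page_mte_tagged(to);
        }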
Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/If303d8a709438d3ff5af5fd85706505830...
Reported-by: "Kuan-Ying Lee (李冠穎)" <Kuan-Ying.Lee@mediatek.com>
Cc: stable@vger.kernel.org # 6.1
Fixes: 20794545c146 ("arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags"")
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://lore.kernel.org/r/20230215050911.1433132-1-pcc@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 8dd5a8fe64b4..4aadcfb01754 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -22,7 +22,8 @@ void copy_highpage(struct page *to, struct page *from)
         copy_page(kto, kfrom);
 
         if (system_supports_mte() && page_mte_tagged(from)) {
-                page_kasan_tag_reset(to);
+                if (kasan_hw_tags_enabled())
+                        page_kasan_tag_reset(to);
                 /* It's a new page, shouldn't have been tagged yet */
                 WARN_ON_ONCE(!try_page_mte_tagging(to));
                 mte_copy_page_tags(kto, kfrom);
From: Catalin Marinas <catalin.marinas@arm.com>
Currently the PG_mte_tagged page flag mostly means the page contains valid tags and it should be set after the tags have been cleared or restored. However, in mte_sync_tags() it is set before setting the tags to avoid, in theory, a race with concurrent mprotect(PROT_MTE) for shared pages. However, a concurrent mprotect(PROT_MTE) with a copy on write in another thread can cause the new page to have stale tags. Similarly, tag reading via ptrace() can read stale tags if the PG_mte_tagged flag is set before actually clearing/restoring the tags.
Fix the PG_mte_tagged semantics so that it is only set after the tags have been cleared or restored. This is safe for swap restoring into a MAP_SHARED or CoW page since the core code takes the page lock. Add two functions to test and set the PG_mte_tagged flag with acquire and release semantics. The downside is that concurrent mprotect(PROT_MTE) on a MAP_SHARED page may cause tag loss. This is already the case for KVM guests if a VMM changes the page protection while the guest triggers a user_mem_abort().
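To make the new ordering concrete, the two helpers added by this patch look roughly like this (matching the asm/mte.h hunk further down):

        static inline void set_page_mte_tagged(struct page *page)
        {
                /* make the tag writes visible before the flag can be observed set */
                smp_wmb();
                set_bit(PG_mte_tagged, &page->flags);
        }

        static inline bool page_mte_tagged(struct page *page)
        {
                bool ret = test_bit(PG_mte_tagged, &page->flags);

                /* pairs with the smp_wmb(): order the flag read before any tag reads */
                if (ret)
                        smp_rmb();
                return ret;
        }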
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[pcc@google.com: fix build with CONFIG_ARM64_MTE disabled]
Signed-off-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Peter Collingbourne <pcc@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221104011041.290951-3-pcc@google.com
(cherry picked from commit e059853d14ca4ed0f6a190d7109487918a22a976)
---
 arch/arm64/include/asm/mte.h     | 30 ++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/arm64/kernel/cpufeature.c   |  4 +++-
 arch/arm64/kernel/elfcore.c      |  2 +-
 arch/arm64/kernel/hibernate.c    |  2 +-
 arch/arm64/kernel/mte.c          | 17 +++++++++++------
 arch/arm64/kvm/guest.c           |  4 ++--
 arch/arm64/kvm/mmu.c             |  4 ++--
 arch/arm64/mm/copypage.c         |  5 +++--
 arch/arm64/mm/fault.c            |  2 +-
 arch/arm64/mm/mteswap.c          |  2 +-
 11 files changed, 56 insertions(+), 18 deletions(-)
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 760c62f8e22f..3f8199ba265a 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -37,6 +37,29 @@ void mte_free_tag_storage(char *storage);
 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
 
+static inline void set_page_mte_tagged(struct page *page)
+{
+        /*
+         * Ensure that the tags written prior to this function are visible
+         * before the page flags update.
+         */
+        smp_wmb();
+        set_bit(PG_mte_tagged, &page->flags);
+}
+
+static inline bool page_mte_tagged(struct page *page)
+{
+        bool ret = test_bit(PG_mte_tagged, &page->flags);
+
+        /*
+         * If the page is tagged, ensure ordering with a likely subsequent
+         * read of the tags.
+         */
+        if (ret)
+                smp_rmb();
+        return ret;
+}
+
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
@@ -56,6 +79,13 @@ size_t mte_probe_user_range(const char __user *uaddr, size_t size);
 /* unused if !CONFIG_ARM64_MTE, silence the compiler */
 #define PG_mte_tagged	0
 
+static inline void set_page_mte_tagged(struct page *page)
+{
+}
+static inline bool page_mte_tagged(struct page *page)
+{
+        return false;
+}
 static inline void mte_zero_clear_page_tags(void *addr)
 {
 }
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index f1cfc44ef52f..5d0f1f7b7600 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1050,7 +1050,7 @@ static inline void arch_swap_invalidate_area(int type)
 static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
         if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
-                set_bit(PG_mte_tagged, &folio->flags);
+                set_page_mte_tagged(&folio->page);
 }
 
 #endif /* CONFIG_ARM64_MTE */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b3f37e2209ad..0e0f4787df97 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2074,8 +2074,10 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
         * Clear the tags in the zero page. This needs to be done via the
         * linear map which has the Tagged attribute.
         */
-        if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
+        if (!page_mte_tagged(ZERO_PAGE(0))) {
                 mte_clear_page_tags(lm_alias(empty_zero_page));
+                set_page_mte_tagged(ZERO_PAGE(0));
+        }
 
         kasan_init_hw_tags_cpu();
 }
diff --git a/arch/arm64/kernel/elfcore.c b/arch/arm64/kernel/elfcore.c
index 662a61e5e75e..2e94d20c4ac7 100644
--- a/arch/arm64/kernel/elfcore.c
+++ b/arch/arm64/kernel/elfcore.c
@@ -46,7 +46,7 @@ static int mte_dump_tag_range(struct coredump_params *cprm,
         * Pages mapped in user space as !pte_access_permitted() (e.g.
         * PROT_EXEC only) may not have the PG_mte_tagged flag set.
         */
-        if (!test_bit(PG_mte_tagged, &page->flags)) {
+        if (!page_mte_tagged(page)) {
                 put_page(page);
                 dump_skip(cprm, MTE_PAGE_TAG_STORAGE);
                 continue;
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index af5df48ba915..788597a6b6a2 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -271,7 +271,7 @@ static int swsusp_mte_save_tags(void)
                 if (!page)
                         continue;
 
-                if (!test_bit(PG_mte_tagged, &page->flags))
+                if (!page_mte_tagged(page))
                         continue;
 
                 ret = save_tags(page, pfn);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 7467217c1eaf..84a085d536f8 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -41,8 +41,10 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
         if (check_swap && is_swap_pte(old_pte)) {
                 swp_entry_t entry = pte_to_swp_entry(old_pte);
 
-                if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
+                if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
+                        set_page_mte_tagged(page);
                         return;
+                }
         }
 
         if (!pte_is_tagged)
@@ -52,8 +54,10 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
         * Test PG_mte_tagged again in case it was racing with another
         * set_pte_at().
         */
-        if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+        if (!page_mte_tagged(page)) {
                 mte_clear_page_tags(page_address(page));
+                set_page_mte_tagged(page);
+        }
 }
 
 void mte_sync_tags(pte_t old_pte, pte_t pte)
@@ -69,9 +73,11 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
 
         /* if PG_mte_tagged is set, tags have already been initialised */
         for (i = 0; i < nr_pages; i++, page++) {
-                if (!test_bit(PG_mte_tagged, &page->flags))
+                if (!page_mte_tagged(page)) {
                         mte_sync_page_tags(page, old_pte, check_swap,
                                            pte_is_tagged);
+                        set_page_mte_tagged(page);
+                }
         }
 
         /* ensure the tags are visible before the PTE is set */
@@ -96,8 +102,7 @@ int memcmp_pages(struct page *page1, struct page *page2)
         * pages is tagged, set_pte_at() may zero or change the tags of the
         * other page via mte_sync_tags().
         */
-        if (test_bit(PG_mte_tagged, &page1->flags) ||
-            test_bit(PG_mte_tagged, &page2->flags))
+        if (page_mte_tagged(page1) || page_mte_tagged(page2))
                 return addr1 != addr2;
 
         return ret;
@@ -454,7 +459,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
                         put_page(page);
                         break;
                 }
-                WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
+                WARN_ON_ONCE(!page_mte_tagged(page));
 
                 /* limit access to the end of the page */
                 offset = offset_in_page(addr);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 2ff13a3f8479..817fdd1ab778 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1059,7 +1059,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
                 maddr = page_address(page);
 
                 if (!write) {
-                        if (test_bit(PG_mte_tagged, &page->flags))
+                        if (page_mte_tagged(page))
                                 num_tags = mte_copy_tags_to_user(tags, maddr,
                                                         MTE_GRANULES_PER_PAGE);
                         else
@@ -1076,7 +1076,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
                         * completed fully
                         */
                        if (num_tags == MTE_GRANULES_PER_PAGE)
-                                set_bit(PG_mte_tagged, &page->flags);
+                                set_page_mte_tagged(page);
 
                        kvm_release_pfn_dirty(pfn);
                 }
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 60ee3d9f01f8..2c3759f1f2c5 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1110,9 +1110,9 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
                 return -EFAULT;
 
         for (i = 0; i < nr_pages; i++, page++) {
-                if (!test_bit(PG_mte_tagged, &page->flags)) {
+                if (!page_mte_tagged(page)) {
                         mte_clear_page_tags(page_address(page));
-                        set_bit(PG_mte_tagged, &page->flags);
+                        set_page_mte_tagged(page);
                 }
         }
 
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 24913271e898..731d8a35701e 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -21,9 +21,10 @@ void copy_highpage(struct page *to, struct page *from)
 
         copy_page(kto, kfrom);
 
-        if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
-                set_bit(PG_mte_tagged, &to->flags);
+        if (system_supports_mte() && page_mte_tagged(from)) {
+                page_kasan_tag_reset(to);
                 mte_copy_page_tags(kto, kfrom);
+                set_page_mte_tagged(to);
         }
 }
 EXPORT_SYMBOL(copy_highpage);
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 3eb2825d08cf..4ee20280133e 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -944,5 +944,5 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 void tag_clear_highpage(struct page *page)
 {
         mte_zero_clear_page_tags(page_address(page));
-        set_bit(PG_mte_tagged, &page->flags);
+        set_page_mte_tagged(page);
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index bed803d8e158..70f913205db9 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -24,7 +24,7 @@ int mte_save_tags(struct page *page)
 {
         void *tag_storage, *ret;
 
-        if (!test_bit(PG_mte_tagged, &page->flags))
+        if (!page_mte_tagged(page))
                 return 0;
 
         tag_storage = mte_allocate_tag_storage();
During page migration, the copy_highpage function is used to copy the page data to the target page. If the source page is a userspace page with MTE tags, the KASAN tag of the target page must have the match-all tag in order to avoid tag check faults during subsequent accesses to the page by the kernel. However, the target page may have been allocated in a number of ways, some of which will use the KASAN allocator and will therefore end up setting the KASAN tag to a non-match-all tag. Therefore, update the target page's KASAN tag to match the source page.
We ended up unintentionally fixing this issue as a result of a bad merge conflict resolution between commit e059853d14ca ("arm64: mte: Fix/clarify the PG_mte_tagged semantics") and commit 20794545c146 ("arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags""), which preserved a tag reset for PG_mte_tagged pages which was considered to be unnecessary at the time. Because SW tags KASAN uses separate tag storage, update the code to only reset the tags when HW tags KASAN is enabled.
Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/If303d8a709438d3ff5af5fd85706505830...
Reported-by: "Kuan-Ying Lee (李冠穎)" <Kuan-Ying.Lee@mediatek.com>
Cc: stable@vger.kernel.org # 6.1
Fixes: 20794545c146 ("arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags"")
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://lore.kernel.org/r/20230215050911.1433132-1-pcc@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit e74a68468062d7ebd8ce17069e12ccc64cc6a58c)
---
 arch/arm64/mm/copypage.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 731d8a35701e..6dbc822332f2 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -22,7 +22,8 @@ void copy_highpage(struct page *to, struct page *from)
         copy_page(kto, kfrom);
 
         if (system_supports_mte() && page_mte_tagged(from)) {
-                page_kasan_tag_reset(to);
+                if (kasan_hw_tags_enabled())
+                        page_kasan_tag_reset(to);
                 mte_copy_page_tags(kto, kfrom);
                 set_page_mte_tagged(to);
         }
On Tue, Mar 07, 2023 at 09:49:25AM -0800, Peter Collingbourne wrote:
> During page migration, the copy_highpage function is used to copy the page data to the target page. If the source page is a userspace page with MTE tags, the KASAN tag of the target page must have the match-all tag in order to avoid tag check faults during subsequent accesses to the page by the kernel. However, the target page may have been allocated in a number of ways, some of which will use the KASAN allocator and will therefore end up setting the KASAN tag to a non-match-all tag. Therefore, update the target page's KASAN tag to match the source page.
>
> We ended up unintentionally fixing this issue as a result of a bad merge conflict resolution between commit e059853d14ca ("arm64: mte: Fix/clarify the PG_mte_tagged semantics") and commit 20794545c146 ("arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags""), which preserved a tag reset for PG_mte_tagged pages which was considered to be unnecessary at the time. Because SW tags KASAN uses separate tag storage, update the code to only reset the tags when HW tags KASAN is enabled.
>
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> Link: https://linux-review.googlesource.com/id/If303d8a709438d3ff5af5fd85706505830...
> Reported-by: "Kuan-Ying Lee (李冠穎)" <Kuan-Ying.Lee@mediatek.com>
> Cc: stable@vger.kernel.org # 6.1
> Fixes: 20794545c146 ("arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags"")
> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
> Link: https://lore.kernel.org/r/20230215050911.1433132-1-pcc@google.com
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> (cherry picked from commit e74a68468062d7ebd8ce17069e12ccc64cc6a58c)
>
>  arch/arm64/mm/copypage.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
Both now queued up, thanks.
greg k-h