Hi Peter Z,
On 7/10/25 21:54, Peter Zijlstra wrote:
On Wed, Jul 09, 2025 at 02:28:00PM +0800, Lu Baolu wrote:
The vmalloc() and vfree() functions manage virtually contiguous, but not necessarily physically contiguous, kernel memory regions. When vfree() unmaps such a region, it tears down the associated kernel page table entries and frees the physical pages.
In the IOMMU Shared Virtual Addressing (SVA) context, the IOMMU hardware shares and walks the CPU's page tables. Architectures like x86 share static kernel address mappings across all user page tables, allowing the IOMMU to access the kernel portion of these tables.
Modern IOMMUs often cache page table entries to optimize walk performance, even for intermediate page table levels. If kernel page table mappings are changed (e.g., by vfree()) but the IOMMU's internal caches retain stale entries, a use-after-free (UAF) condition arises. If these freed page table pages are reallocated for a different purpose, potentially by an attacker, the IOMMU could misinterpret the new data as valid page table entries. This allows the IOMMU to walk into attacker-controlled memory, leading to arbitrary physical memory DMA access or privilege escalation.
To mitigate this, introduce a new iommu interface to flush IOMMU caches and fence pending page table walks when kernel page mappings are updated. This interface should be invoked from architecture-specific code that manages combined user and kernel page tables.
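As an illustration only (the exact hook point is architecture-specific and is not part of the patch text quoted here), such a call might sit in the kernel TLB flush path, roughly like:

	/*
	 * Illustrative sketch, not the posted patch: invoke the new interface
	 * wherever the architecture tears down kernel page table mappings, so
	 * that IOMMU walks over the same kernel VA range are fenced as well.
	 */
	void flush_tlb_kernel_range(unsigned long start, unsigned long end)
	{
		/* ... existing CPU TLB invalidation for the kernel range ... */

		/* Also flush IOMMU caches covering this kernel VA range. */
		iommu_sva_invalidate_kva_range(start, end);
	}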
I must say I liked the kPTI-based idea better. Having to iterate and invalidate an unspecified number of IOMMUs from non-preemptible context seems 'unfortunate'.
The cache invalidation path in IOMMU drivers is already critical and operates within a non-preemptible context. This approach has, in fact, been used for user-space page table updates since the beginning of SVA support.
Why was this approach chosen over the kPTI one, where we keep a page-table root that simply does not include the kernel bits, and therefore the IOMMU will never see them (change) and we'll never have to invalidate?
The IOMMU subsystem started supporting the SVA feature in 2019, and it has been broadly adopted in various production kernels. The issue described here is fundamentally a software bug related to not maintaining IOMMU cache coherence. Therefore, we need a quick and simple fix to address it, and this patch can be easily backported to production kernels.
While a kPTI-based approach might appear more attractive, I believe some extra work is still required to properly integrate it into the IOMMU subsystem. For instance, kPTI is currently an optional mitigation, enabled via CONFIG_MITIGATION_PAGE_TABLE_ISOLATION and bypassable with the "nopti" kernel parameter. This optionality is not suitable for the IOMMU subsystem, as software must always guarantee IOMMU cache coherence for functional correctness and security.
So, in the short term, let's proceed with a straightforward solution to resolve this issue and ensure the SVA feature functions correctly. For the long term, we can explore optimizations and deeper integration aligned with features like kPTI.
@@ -132,8 +136,15 @@ struct iommu_sva *iommu_sva_bind_device(struct device *dev, struct mm_struct *mm
 	if (ret)
 		goto out_free_domain;
 	domain->users = 1;
-	list_add(&domain->next, &mm->iommu_mm->sva_domains);
 
+	if (list_empty(&iommu_mm->sva_domains)) {
+		scoped_guard(spinlock_irqsave, &iommu_mms_lock) {
+			if (list_empty(&iommu_sva_mms))
+				static_branch_enable(&iommu_sva_present);
+			list_add(&iommu_mm->mm_list_elm, &iommu_sva_mms);
+		}
+	}
+	list_add(&domain->next, &iommu_mm->sva_domains);
 out:
 	refcount_set(&handle->users, 1);
 	mutex_unlock(&iommu_sva_lock);
@@ -175,6 +186,15 @@ void iommu_sva_unbind_device(struct iommu_sva *handle)
 		list_del(&domain->next);
 		iommu_domain_free(domain);
 	}
+
+	if (list_empty(&iommu_mm->sva_domains)) {
+		scoped_guard(spinlock_irqsave, &iommu_mms_lock) {
+			list_del(&iommu_mm->mm_list_elm);
+			if (list_empty(&iommu_sva_mms))
+				static_branch_disable(&iommu_sva_present);
+		}
+	}
+
 	mutex_unlock(&iommu_sva_lock);
 	kfree(handle);
 }
This seems an odd coding style choice; why the extra unneeded indentation? That is, what's wrong with:
	if (list_empty()) {
		guard(spinlock_irqsave)(&iommu_mms_lock);
		list_del();
		if (list_empty())
			static_branch_disable();
	}
Perhaps I overlooked or misunderstood something, but my understanding is that the lock order in this function is:
	mutex_lock(&iommu_sva_lock);
	  spin_lock(&iommu_mms_lock);
	  spin_unlock(&iommu_mms_lock);
	mutex_unlock(&iommu_sva_lock);
With the above change, it becomes:
	mutex_lock(&iommu_sva_lock);
	  spin_lock(&iommu_mms_lock);
	mutex_unlock(&iommu_sva_lock);
	  spin_unlock(&iommu_mms_lock);
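Concretely, with the scoped_guard form from the hunk above, the unlock points nest like this (annotations mine):

	mutex_lock(&iommu_sva_lock);
	...
	if (list_empty(&iommu_mm->sva_domains)) {
		scoped_guard(spinlock_irqsave, &iommu_mms_lock) {
			list_del(&iommu_mm->mm_list_elm);
			if (list_empty(&iommu_sva_mms))
				static_branch_disable(&iommu_sva_present);
		}	/* spinlock released here, still inside the mutex section */
	}
	mutex_unlock(&iommu_sva_lock);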
@@ -312,3 +332,15 @@ static struct iommu_domain *iommu_sva_domain_alloc(struct device *dev,
 
 	return domain;
 }
+
+void iommu_sva_invalidate_kva_range(unsigned long start, unsigned long end)
+{
+	struct iommu_mm_data *iommu_mm;
+
+	if (!static_branch_unlikely(&iommu_sva_present))
+		return;
+
+	guard(spinlock_irqsave)(&iommu_mms_lock);
+	list_for_each_entry(iommu_mm, &iommu_sva_mms, mm_list_elm)
+		mmu_notifier_arch_invalidate_secondary_tlbs(iommu_mm->mm, start, end);
+}
This is absolutely the wrong way to use static_branch. You want them in inline functions guarding the function call, not inside the function call.
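(For reference, the pattern being described would look roughly like the sketch below. The header placement and the non-static key declaration are hypothetical, not part of the posted patch.)

	/* Hypothetical header-side sketch: keep the static-key test in an
	 * inline wrapper so callers skip the out-of-line call entirely when
	 * no SVA-enabled mm exists. Names mirror the hunk above. */
	#ifdef CONFIG_IOMMU_SVA
	DECLARE_STATIC_KEY_FALSE(iommu_sva_present);
	void __iommu_sva_invalidate_kva_range(unsigned long start, unsigned long end);

	static inline void iommu_sva_invalidate_kva_range(unsigned long start,
							  unsigned long end)
	{
		if (static_branch_unlikely(&iommu_sva_present))
			__iommu_sva_invalidate_kva_range(start, end);
	}
	#else
	static inline void iommu_sva_invalidate_kva_range(unsigned long start,
							  unsigned long end) {}
	#endif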
I don't think a static branch is desirable here, as we have no idea how often the condition will switch in real-world scenarios. I will remove it in the next version if there are no objections.
Thanks,
baolu