On Thu, Jul 03, 2025 at 04:57:34AM +0000, Tian, Kevin wrote:
> I meant something like below:
> iopt_unmap_iova_range() {
> 	bool internal_access = false;
>
> 	down_read(&iopt->domains_rwsem);
> 	down_write(&iopt->iova_rwsem);
>
> 	/* Bypass any unmap if there is an internal access */
> 	xa_for_each(&iopt->access_list, index, access) {
> 		if (iommufd_access_is_internal(access)) {
> 			internal_access = true;
> 			break;
> 		}
> 	}
>
> 	while ((area = iopt_area_iter_first(iopt, start, last))) {
> 		if (area->num_accesses) {
> 			if (internal_access) {
> 				rc = -EBUSY;
> 				goto out_unlock_iova;
> 			}
> 			up_write(&iopt->iova_rwsem);
> 			up_read(&iopt->domains_rwsem);
> 			iommufd_access_notify_unmap(iopt, area_first, length);
> 		}
> 	}
> }
> It checks the access_list in the common path, but the cost should be negligible when there is no access attached to this iopt. The upside is that unmap is now denied explicitly in the area loop, instead of still trying to unmap and then handling errors.
Hmm, I realized that either way might be incorrect, as it checks for any internal access across the entire iopt, regardless of the access's iova range.
What we really want is to reject an unmap only against the range that is pinned by an internal access, i.e. an unmap of any other range should still be allowed.
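To make that concrete (my own illustration, not part of the series; the "iommufd" fd, "ioas_id", and the pinned range below are made-up assumptions), the uAPI-visible expectation would be roughly:

	#include <sys/ioctl.h>
	#include <linux/iommufd.h>

	/*
	 * Assume "iommufd" and "ioas_id" come from the usual open() +
	 * IOMMU_IOAS_ALLOC setup, and an internal access (e.g. one created
	 * on behalf of a vdevice) currently pins IOVA [0x100000, 0x110000)
	 * in this IOAS.
	 */
	struct iommu_ioas_unmap unmap = {
		.size = sizeof(unmap),
		.ioas_id = ioas_id,
		.iova = 0x100000,
		.length = 0x10000,
	};
	int rc;

	/* Overlaps the internally pinned range -> should fail with EBUSY */
	rc = ioctl(iommufd, IOMMU_IOAS_UNMAP, &unmap);

	/* A disjoint range in the same IOAS should still unmap fine */
	unmap.iova = 0x200000;
	unmap.length = 0x10000;
	rc = ioctl(iommufd, IOMMU_IOAS_UNMAP, &unmap);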
So, doing it at this level isn't enough. I think we should still go down to the struct iopt_area level, as my v5 did:
https://lore.kernel.org/all/3ddc8c678406772a8358a265912bb1c064f4c796.1747537...

We'd only need to rename the counter to num_locked as you suggested, i.e.:
@@ -719,6 +719,12 @@ static int iopt_unmap_iova_range(struct io_pagetable *iopt, unsigned long start,
 			goto out_unlock_iova;
 		}
 
+		/* The area is locked by an object that has not been destroyed */
+		if (area->num_locked) {
+			rc = -EBUSY;
+			goto out_unlock_iova;
+		}
+
 		if (area_first < start || area_last > last) {
 			rc = -ENOENT;
 			goto out_unlock_iova;
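For completeness, here is a rough sketch of the counting side (mine, not copied from v5; iopt_area_lock()/iopt_area_unlock() are hypothetical names): the internal-access pin path would bump the counter on every area it covers and the unpin/destroy path would drop it, so only areas actually covered by a live internal access report a non-zero num_locked to the check above.

	/*
	 * Hypothetical helpers, assuming the caller holds iopt->iova_rwsem
	 * the same way the existing access pin/unpin paths do; not the
	 * actual v5 code.
	 */
	static void iopt_area_lock(struct iopt_area *area)
	{
		lockdep_assert_held(&area->iopt->iova_rwsem);
		area->num_locked++;
	}

	static void iopt_area_unlock(struct iopt_area *area)
	{
		lockdep_assert_held(&area->iopt->iova_rwsem);
		if (!WARN_ON(!area->num_locked))
			area->num_locked--;
	}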
Thanks
Nicolin