On Mon, Apr 21, 2025 at 08:16:54AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Friday, April 11, 2025 2:38 PM
> >
> > + * previously given to user space via a prior ioctl output.
> > + */
> > +static int iommufd_fops_mmap(struct file *filp, struct vm_area_struct *vma)
> > +{
> > +        struct iommufd_ctx *ictx = filp->private_data;
> > +        size_t size = vma->vm_end - vma->vm_start;
> > +        struct iommufd_mmap *immap;
> > +
> > +        if (size & ~PAGE_MASK)
> > +                return -EINVAL;
> > +        if (!(vma->vm_flags & VM_SHARED))
> > +                return -EINVAL;
> > +        if (vma->vm_flags & VM_EXEC)
> > +                return -EPERM;
> > +
> > +        /* vm_pgoff carries an index of an mtree entry/immap */
> > +        immap = mtree_load(&ictx->mt_mmap, vma->vm_pgoff);
> > +        if (!immap)
> > +                return -EINVAL;
> > +        if (size >> PAGE_SHIFT > immap->pfn_end - immap->pfn_start + 1)
> > +                return -EINVAL;
>
> Do we want to document in uAPI that iommufd mmap allows to map
> a sub-region (starting from offset zero) of the reported size from
> earlier alloc ioctl, but not from random offset (of course impossible
> by forcing vm_pgoff to be a mtree index)?
I also did this:
diff --git a/Documentation/userspace-api/iommufd.rst b/Documentation/userspace-api/iommufd.rst
index ace0579432d57..f57a5bf2feea1 100644
--- a/Documentation/userspace-api/iommufd.rst
+++ b/Documentation/userspace-api/iommufd.rst
@@ -128,11 +128,13 @@ Following IOMMUFD objects are exposed to userspace:
   virtualization feature for a VM to directly execute guest-issued commands to
   invalidate HW cache entries holding the mappings or translations of a guest-
   owned stage-1 page table. Along with this queue object, iommufd provides the
-  user space a new mmap interface that the VMM can mmap a physical MMIO region
-  from the host physical address space to a guest physical address space. To use
-  this mmap interface, the VMM must define an IOMMU specific driver structure
-  to ask for a pair of VMA info (vm_pgoff/size) to do mmap after a vCMDQ gets
-  allocated.
+  user space an mmap interface for the VMM to mmap a physical MMIO region from
+  the host physical address space to a guest physical address space. When
+  allocating a vCMDQ, the VMM must request a pair of VMA info (vm_pgoff/size)
+  for a later mmap call. The length argument of an mmap call can be smaller than
+  the given size for a partial mmap, but the given vm_pgoff (as the offset
+  argument of the mmap call) should never be changed, which implies that the
+  mmap always starts from the beginning of the MMIO region.

 All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.
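
To illustrate the resulting contract from the VMM side, here is a minimal
usage sketch (not part of this series). It assumes the vCMDQ alloc ioctl has
already reported the VMA info pair; out_vma_pgoff and out_vma_size are
placeholder names, not actual uAPI field names:

/*
 * Usage sketch only, not part of this series. out_vma_pgoff/out_vma_size
 * stand in for the VMA info pair reported by the vCMDQ alloc ioctl.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

static void *map_vcmdq_mmio(int iommufd, unsigned long out_vma_pgoff,
                            size_t out_vma_size, size_t length)
{
        long psz = sysconf(_SC_PAGESIZE);
        void *va;

        /* A partial mmap may pass any length up to the reported size... */
        if (length > out_vma_size)
                return MAP_FAILED;

        /*
         * ...but the offset must stay exactly at the reported vm_pgoff
         * (converted to bytes), since the kernel looks it up as an mtree
         * index. The mapping therefore always starts at the beginning of
         * the MMIO region. MAP_SHARED and no PROT_EXEC follow from the
         * VM_SHARED/VM_EXEC checks in the fops above.
         */
        va = mmap(NULL, length, PROT_READ | PROT_WRITE, MAP_SHARED,
                  iommufd, (off_t)out_vma_pgoff * psz);
        if (va == MAP_FAILED)
                perror("mmap");
        return va;
}

That matches the mtree_load() on vm_pgoff and the size check in the hunk
quoted above.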
Thanks
Nicolin