On Fri, May 30, 2025 at 09:59:36AM -0700, Nicolin Chen wrote:
> +/* The vm_pgoff must be pre-allocated from mt_mmap, and given to user space */
> +static int iommufd_fops_mmap(struct file *filp, struct vm_area_struct *vma)
> +{
> +	struct iommufd_ctx *ictx = filp->private_data;
> +	size_t length = vma->vm_end - vma->vm_start;
> +	struct iommufd_mmap *immap;
> +	int rc;
> +
> +	if (!PAGE_ALIGNED(length))
> +		return -EINVAL;
> +	if (!(vma->vm_flags & VM_SHARED))
> +		return -EINVAL;
> +	if (vma->vm_flags & VM_EXEC)
> +		return -EPERM;
> +
> +	/* vma->vm_pgoff carries an index to an mtree entry (immap) */
> +	immap = mtree_load(&ictx->mt_mmap, vma->vm_pgoff);
> +	if (!immap)
> +		return -ENXIO;
> +
> +	/* Validate the vm_pgoff and length against the registered region */
> +	if (vma->vm_pgoff != immap->startp)
> +		return -ENXIO;
This check seems redundant
Hmm, I was trying to follow your remark: "This needs to validate that vm_pgoff is at the start of the immap"
https://lore.kernel.org/all/20250515164717.GL382960@nvidia.com/
Oh, right, I forgot how mtree_load works again. :\ Maybe add a little note:
/* mtree_load() returns the immap for any contained pgoff; only allow the exact immap region to be mapped. */
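
Something like this when folded back into the quoted hunk, say (just a sketch of where the note would sit; the surrounding lines are copied from the hunk above, not a new version of the patch):

	/*
	 * mtree_load() returns the immap for any pgoff contained in the
	 * stored range, so a lookup hit alone does not prove userspace
	 * passed the start of the region; only allow the exact immap
	 * region to be mapped.
	 */
	immap = mtree_load(&ictx->mt_mmap, vma->vm_pgoff);
	if (!immap)
		return -ENXIO;

	/* Validate the vm_pgoff and length against the registered region */
	if (vma->vm_pgoff != immap->startp)
		return -ENXIO;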
Jason