From: Nicolin Chen <nicolinc@nvidia.com>
Sent: Tuesday, April 22, 2025 3:14 AM

On Mon, Apr 21, 2025 at 08:37:40AM +0000, Tian, Kevin wrote:

From: Nicolin Chen <nicolinc@nvidia.com>
Sent: Friday, April 11, 2025 2:38 PM
	vcmdq = iommufd_vcmdq_alloc(viommu, struct tegra241_vcmdq, core);
	if (!vcmdq)
		return ERR_PTR(-ENOMEM);

	ret = tegra241_vintf_init_lvcmdq(vintf, arg.vcmdq_id, vcmdq);
	if (ret)
		goto free_vcmdq;
	dev_dbg(cmdqv->dev, "%sallocated\n",
		lvcmdq_error_header(vcmdq, header, 64));

	vcmdq->cmdq.q.q_base = q_base & VCMDQ_ADDR;
	vcmdq->cmdq.q.q_base |= arg.vcmdq_log2size;
Could the queue span multiple pages? There is no guarantee that the HPA of the guest queue would be contiguous :/
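(For scale, an illustration of my own, not from the patch: an SMMUv3 command is 16 bytes, i.e. CMDQ_ENT_DWORDS == 2 dwords, so the queue grows as 16 << log2size bytes.)

/*
 * Illustrative only: a queue of 2^log2size entries occupies
 * 16 << log2size bytes.  log2size == 8 already fills a 4KiB page;
 * anything larger spans multiple pages.
 */
static inline unsigned long vcmdq_size_bytes(unsigned int log2size)
{
	return 16UL << log2size;
}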
It certainly can. The VMM must make sure the guest PAs are contiguous by using huge pages to back the guest RAM space. The kernel has no control over this and only has to trust the VMM.
I'm adding a note here: /* User space ensures that the queue memory is physically contiguous */
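Roughly like this, right above the q_base setup quoted above (exact placement is my guess):

	/* User space ensures that the queue memory is physically contiguous */
	vcmdq->cmdq.q.q_base = q_base & VCMDQ_ADDR;
	vcmdq->cmdq.q.q_base |= arg.vcmdq_log2size;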
And likely something similar in the uAPI header too.
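Something along these lines for the header, maybe (the struct and field names below are placeholders, not the actual uAPI):

#include <linux/types.h>

/* Placeholder struct -- names are illustrative, not the real uAPI */
struct iommu_vcmdq_alloc_example {
	/*
	 * Guest physical base address of the command queue. User space
	 * must ensure the backing memory is physically contiguous, e.g.
	 * by backing the guest RAM with huge pages.
	 */
	__aligned_u64 addr;
	__aligned_u64 log2size;
};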
It's not a good idea to have the kernel trust the VMM. Also, I'm not sure contiguity is guaranteed all the time with huge pages (e.g. if just using THP).
@Jason?
Btw, does the SMMU only read the cmdq, or does it also update some fields in the queue? If the latter, this also opens a security hole, as a malicious VMM could violate the contiguity requirement to instruct the SMMU to touch pages which don't belong to it...
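If the kernel were to verify instead of trust, a minimal sketch could look like the below (helper name is made up; it assumes the queue pages have already been pinned):

#include <linux/mm.h>

/* Sketch only: reject a queue whose pinned pages are not physically contiguous */
static int vcmdq_check_contig(struct page **pages, unsigned long npages)
{
	unsigned long i;

	for (i = 1; i < npages; i++) {
		if (page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1)
			return -EINVAL;
	}
	return 0;
}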