On Thu, Aug 15, 2024 at 05:50:06PM -0700, Nicolin Chen wrote:
> Though only the driver would know whether it will eventually access
> the vdev_id list, I'd like to keep things in the way of having a
> core-managed VIOMMU object (IOMMU_VIOMMU_TYPE_DEFAULT), so the viommu
> invalidation handler could take a lock at its top level to protect any
> potential access to the vdev_id list.
It is a bit tortured to keep the xarray hidden. It would be better to find a way to expose the right struct to the driver.
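To make the two shapes being debated concrete, here is a toy userspace sketch of the core-managed flavor: the core owns the vdev_id table and takes the lock at the top of the invalidation path, so anything the driver looks up underneath is already protected. All names here (`toy_viommu`, `toy_vdev_to_phys`) are hypothetical; the real code uses an xarray and iommufd's own locking, not this table.

```c
#include <assert.h>
#include <pthread.h>

/* Toy stand-in for the core-managed vdev_id list (the kernel uses an
 * xarray). Purely illustrative; none of this is the iommufd API. */
#define TOY_MAX_VDEV 8

struct toy_viommu {
	pthread_mutex_t vdev_lock;                /* held across an invalidation */
	unsigned long vdev_to_phys[TOY_MAX_VDEV]; /* virtual dev ID -> physical ID */
};

/* Core-managed flavor: the core's invalidation handler takes the lock
 * at its top level, so a driver callback reading the mapping under it
 * needs no locking of its own. */
static unsigned long toy_vdev_to_phys(struct toy_viommu *v, unsigned long vdev)
{
	unsigned long phys;

	pthread_mutex_lock(&v->vdev_lock);
	phys = v->vdev_to_phys[vdev];
	pthread_mutex_unlock(&v->vdev_lock);
	return phys;
}
```

The alternative being argued for above is to expose the right struct so the driver can do this walk itself instead of the core hiding the xarray behind a default VIOMMU type.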
> > > @@ -3249,6 +3266,19 @@ arm_smmu_convert_user_cmd(struct arm_smmu_domain *s2_parent,
> > >  		cmd->cmd[0] &= ~CMDQ_TLBI_0_VMID;
> > >  		cmd->cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, vmid);
> > >  		break;
> > > +	case CMDQ_OP_ATC_INV:
> > > +	case CMDQ_OP_CFGI_CD:
> > > +	case CMDQ_OP_CFGI_CD_ALL:
> > Oh, I didn't catch on that CD was needing this too.. :\

> Well, viommu cache has a very wide range :)
That makes the other op much more useless than I expected. I really wanted to break these two series apart.
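For context on the quoted hunk: it rewrites the VMID field of a guest-issued command so the hardware sees the host-assigned VMID. A minimal userspace sketch of that fixup, with the `FIELD_PREP(CMDQ_TLBI_0_VMID, vmid)` open-coded as explicit masks (the [47:32] bit placement follows the SMMUv3 TLBI command layout, but treat the whole block as illustrative, not the driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the VMID fixup: strip whatever VMID the guest put in word 0
 * of the command and insert the host-assigned one. Hypothetical names;
 * masks mirror what FIELD_PREP does with a [47:32] field. */
#define TOY_TLBI_0_VMID_SHIFT 32
#define TOY_TLBI_0_VMID_MASK  (0xffffULL << TOY_TLBI_0_VMID_SHIFT)

static inline void toy_fixup_vmid(uint64_t *cmd0, uint64_t host_vmid)
{
	*cmd0 &= ~TOY_TLBI_0_VMID_MASK;                 /* drop the guest's VMID */
	*cmd0 |= (host_vmid << TOY_TLBI_0_VMID_SHIFT) & /* insert the host VMID */
		 TOY_TLBI_0_VMID_MASK;
}
```

The CD and ATC ops added by the hunk go through the same conversion path, which is what widens the viommu cache invalidation beyond plain TLBIs.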
> HWPT invalidate and VIOMMU invalidate are somewhat duplicated in both
> concept and implementation for SMMUv3. It's not a problem to have
> both, but practically I can't think of a reason why a VMM wouldn't
> simply stick to the wider VIOMMU invalidate uAPI alone..
> > Maybe we need to drop the hwpt invalidation from the other series and

> Yea, the hwpt invalidate is just one patch in your series, it's easy
> to move if we want to.

> > aim to merge this all together through the iommufd tree.

> I have been hoping for that, as you can see those driver patches are
> included here :)
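The duplication being discussed is visible in the shape of the two requests: both carry an object ID plus a user-pointer array of driver-specific invalidation entries. A hedged sketch below, modeled loosely on iommufd's `struct iommu_hwpt_invalidate`; the viommu variant is hypothetical here, shown only to make the overlap obvious:

```c
#include <assert.h>
#include <stdint.h>

/* Both uAPIs: object ID + user array of driver-specific entries.
 * Field names follow the iommufd uAPI style; this is an illustration,
 * not the actual include/uapi/linux/iommufd.h definitions. */
struct toy_hwpt_invalidate {
	uint32_t size;
	uint32_t hwpt_id;     /* nested HWPT to invalidate */
	uint64_t data_uptr;   /* user array of invalidation entries */
	uint32_t data_type;
	uint32_t entry_len;
	uint32_t entry_num;
	uint32_t reserved;
};

struct toy_viommu_invalidate {
	uint32_t size;
	uint32_t viommu_id;   /* only the ID field differs */
	uint64_t data_uptr;
	uint32_t data_type;
	uint32_t entry_len;
	uint32_t entry_num;
	uint32_t reserved;
};
```

Since the payloads are essentially identical, a VMM that already speaks the wider VIOMMU flavor has little reason to also issue the per-HWPT one, which is the point made above.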
Well, this series has to go through iommufd of course
I was hoping Will could take the nesting enablement and we'd do the viommu next window.
But nesting enablement without viommu is a lot less useful than I had thought :(
So maybe Will acks the nesting patches and we take the bunch together.
Jason