On Wed, Jun 28, 2023 at 02:47:02AM +0000, Tian, Kevin wrote:
From: Jason Gunthorpe <jgg@nvidia.com> Sent: Wednesday, June 28, 2023 12:01 AM
On Tue, Jun 27, 2023 at 06:02:13AM +0000, Tian, Kevin wrote:
From: Nicolin Chen <nicolinc@nvidia.com> Sent: Tuesday, June 27, 2023 1:29 AM
I'm not sure whether the MSI region needs a special MSI type or just a general RESV_DIRECT type for 1:1 mapping, though.
I don't quite get this part. Doesn't MSI already have IOMMU_RESV_MSI and IOMMU_RESV_SW_MSI? Or does it just mean we should report the iommu_resv_type along with the reserved regions in the new ioctl?
Currently those are iommu internal types. When defining the new ioctl we need to think about what is necessary to present to the user.
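For reference, those in-kernel types today look roughly like this (include/linux/iommu.h, abridged):

enum iommu_resv_type {
	/* Memory regions which must be mapped 1:1 at all times */
	IOMMU_RESV_DIRECT,
	/* Advertised as 1:1 but relaxable in some conditions,
	 * e.g. device assignment of USB/graphics */
	IOMMU_RESV_DIRECT_RELAXABLE,
	/* Arbitrary "never map this or give it to a device" ranges */
	IOMMU_RESV_RESERVED,
	/* Hardware MSI region (untranslated) */
	IOMMU_RESV_MSI,
	/* Software-managed MSI translation window */
	IOMMU_RESV_SW_MSI,
};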
Probably just a list of reserved regions plus a flag to mark which one is SW_MSI? Except for SW_MSI, all other reserved region types just need the user to reserve them w/o knowing more detail.
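Just as a strawman to illustrate what I mean (all the names below are invented; nothing like this exists in the uAPI today):

/* strawman only: structure and flag names are made up for illustration */
struct iommu_resv_iova_range {
	__aligned_u64 start;
	__aligned_u64 last;
	__u32 flags;		/* e.g. bit 0 = RESV_RANGE_SW_MSI */
	__u32 __reserved;
};

struct iommu_dev_get_resv_regions {
	__u32 size;
	__u32 dev_id;
	__u32 num_ranges;
	__u32 __reserved;
	__aligned_u64 ranges;	/* user pointer to struct iommu_resv_iova_range[] */
};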
I think I prefer the idea that we just import the reserved regions from a dev_id and do not expose any of this detail to userspace.
The kernel can make only the SW_MSI region a mandatory cut-out when the S2 is attached.
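ie roughly something like this on the kernel side (just a sketch; the iommufd plumbing is elided and the helper name is made up):

/* sketch only: when the S2 is attached, carve out just the SW_MSI window */
static int sw_msi_cut_out(struct device *dev /* , the S2 io_pagetable */)
{
	struct iommu_resv_region *resv;
	LIST_HEAD(resv_regions);

	iommu_get_resv_regions(dev, &resv_regions);
	list_for_each_entry(resv, &resv_regions, list) {
		if (resv->type != IOMMU_RESV_SW_MSI)
			continue;
		/* reserve [resv->start, resv->start + resv->length) in the
		 * S2 IOVA space and set up the MSI cookie there */
	}
	iommu_put_resv_regions(dev, &resv_regions);
	return 0;
}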
I'm confused.
The VMM needs to know reserved regions per dev_id and report them to the guest.
And we have aligned that reserved regions (except SW_MSI) should not be automatically added to the S2 in the nesting case. So the VMM cannot rely on IOAS_IOVA_RANGES to identify the reserved regions.
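For reference, IOAS_IOVA_RANGES is per-IOAS and carries no per-device attribution or region type, as I read it (include/uapi/linux/iommufd.h, abridged):

struct iommu_iova_range {
	__aligned_u64 start;
	__aligned_u64 last;
};

struct iommu_ioas_iova_ranges {
	__u32 size;
	__u32 ioas_id;
	__u32 num_iovas;
	__u32 __reserved;
	__aligned_u64 allowed_iovas;	/* user pointer to struct iommu_iova_range[] */
	__aligned_u64 out_iova_alignment;
};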
We also said we need a way to load the reserved regions to create an identity-compatible version of the HWPT.
So we have a model where the VMM will want to load in regions beyond what the currently attached devices need.
So there needs to be a new interface for the user to discover reserved regions per dev_id, within which the SW_MSI region should be marked out so an identity mapping can be installed properly for it in S1.
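Roughly what I imagine the VMM doing with such an interface (pseudo-code; the query and the vgiommu_*() helpers are made up, following the strawman earlier in the thread):

/* VMM-side pseudo-code, hypothetical helper names for illustration only */
struct iommu_resv_iova_range ranges[MAX_RANGES];
int i, num = query_dev_resv_regions(iommufd, dev_id, ranges, MAX_RANGES);

for (i = 0; i < num; i++) {
	if (ranges[i].flags & RESV_RANGE_SW_MSI)
		/* 1:1 S1 mapping so MSI writes reach the SW_MSI window */
		vgiommu_map_identity(ranges[i].start, ranges[i].last);
	else
		/* report the range as reserved so the guest never uses it */
		vgiommu_report_reserved(ranges[i].start, ranges[i].last);
}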
Did I misunderstand your point in the previous discussion?
This is another discussion; if the VMM needs this then we probably need a new API to get it.
Jason