On Fri, May 16, 2025 at 02:02:29AM +0800, Xu Yilun wrote:
> IMHO, I think it might be helpful to picture out the minimum
> requirements (function/life cycle) for the current IOMMUFD TSM bind
> architecture:
>
> 1. host tsm_bind (preparation) is in IOMMUFD, triggered by QEMU
>    handling the TVM-HOST call.
> 2. TDI acceptance is handled in guest_request() to accept the TDI
>    after the validation in the TVM.
> I'll try my best to brainstorm and make a flow in ASCII.
>
> (*) means new feature
>   Guest    Guest TSM    QEMU    VFIO    IOMMUFD    host TSM    KVM
>   -----    ---------    ----    ----    -------    --------    ---
>  1. *Connect(IDE)
>  2. Init vdev
open /dev/vfio/XX as a VFIO action. Then VFIO attaches to IOMMUFD as an
iommufd action, creating the idev.
>  3. *create dmabuf
>  4. *export dmabuf
>  5. create memslot
>  6. *import dmabuf
>  7. setup shared DMA
>  8. create hwpt
>  9. attach hwpt
> 10. kvm run
> 11. enum shared dev
> 12. *Connect(Bind)
> 13. *GHCI Bind
> 14. *Bind
> 15. CC viommu alloc
> 16. vdevice alloc
viommu and vdevice creation happen before KVM run. The vPCI function is visible to the guest from the very start, even though it is in T=0 mode. If a platform does not require any special CC steps prior to KVM run then it just has a NOP for these functions.
What you have here is some new BIND operation against the already existing vdevice as we discussed earlier.
> 17. *attach vdev
>     *setup CC viommu
> 18. *tsm_bind
> 19. *bind
> 20. *Attest
> 21. *GHCI get CC info
> 22. *get CC info
> 23. *vdev guest req
> 24. *guest req
> 25. *Accept
> 26. *GHCI accept MMIO/DMA
> 27. *accept MMIO/DMA
> 28. *vdev guest req
> 29. *guest req
> 30. *map private MMIO
> 31. *GHCI start tdi
> 32. *start tdi
> 33. *vdev guest req
> 34. *guest req
This seems reasonable; you want some generic RPC scheme to carry
messages from the VM to the TSM, tunneled through the iommufd vdevice
(because the vdevice has the vPCI ID, the KVM ID, the vIOMMU ID, and so
on).
> 35. Workload...
> 36. *disconnect(Unbind)
> 37. *GHCI unbind
> 38. *Unbind
> 39. *detach vdev
Unbind the vdev here; the vdev itself remains until KVM is stopped.
> 40. *tsm_unbind
> 41. *TDX stop tdi
> 42. *TDX disable mmio cb
> 43. *cb dmabuf revoke
> 44. *unmap private MMIO
> 45. *TDX disable dma cb
> 46. *cb disable CC viommu
I don't know why you'd disable a viommu while the VM is running; that
doesn't make sense.
> 47. *TDX tdi free
> 48. *enable mmio
> 49. *cb dmabuf recover
> 50. workable shared dev
This is a nice chart; it would be good to see comparable charts for AMD
and ARM.
Jason