On 4/22/26 15:13, Jason Gunthorpe wrote:
On Wed, Apr 22, 2026 at 02:39:16PM +0200, Christian König wrote:
Can you be more specific please, I still have no idea what you are thinking in terms of an acceptable implementation.
Let me try to describe it differently:
The iommufd deals with iommu_domain structures which userspace can map different things into.
So offhand I would say that an interface to map a DMA-buf into such an iommu_domain should look something like this:
dma_buf_map_attachment_iommu(struct dma_buf_attachment *attachment, struct iommu_domain *domain, unsigned long iova, unsigned long offset, size_t size, ...);
The DMA-buf exporter then maps its data into the iommu_domain at iova, starting at offset within the buffer, for size bytes.
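To make the shape of that call concrete, here is a minimal userspace toy model of the proposed flow. Everything in it (the toy_* types, TOY_PAGE_SIZE, the flat page table) is an illustrative stand-in invented for this sketch, not kernel API; the point is only that the exporter walks its own backing store and populates the domain at iova, so the importer never sees the page list.

```c
#include <stddef.h>

/* All names below are hypothetical stand-ins, not kernel API. */

#define TOY_PAGE_SIZE 4096UL
#define TOY_PT_ENTRIES 64

/* Toy stand-in for struct iommu_domain: a flat iova -> phys table. */
struct toy_iommu_domain {
	unsigned long pt[TOY_PT_ENTRIES]; /* phys addr per iova page, 0 = unmapped */
};

/* Toy stand-in for the exporter's backing store (contiguous for simplicity). */
struct toy_dma_buf {
	unsigned long phys_base;
	size_t size;
};

struct toy_attachment {
	struct toy_dma_buf *dmabuf;
};

/*
 * Sketch of the proposed entry point: the exporter maps [offset, offset+size)
 * of its buffer into the domain at iova. The importer only supplies the
 * domain and the iova range; the physical addresses stay exporter-private.
 */
static int toy_map_attachment_iommu(struct toy_attachment *att,
				    struct toy_iommu_domain *domain,
				    unsigned long iova,
				    unsigned long offset, size_t size)
{
	struct toy_dma_buf *buf = att->dmabuf;
	size_t done;

	if (offset + size > buf->size)
		return -1; /* requested range exceeds the buffer */

	for (done = 0; done < size; done += TOY_PAGE_SIZE) {
		unsigned long idx = (iova + done) / TOY_PAGE_SIZE;

		if (idx >= TOY_PT_ENTRIES || domain->pt[idx])
			return -1; /* iova out of range or already mapped */
		domain->pt[idx] = buf->phys_base + offset + done;
	}
	return 0;
}
```

In a real implementation the exporter would of course call the kernel's iommu mapping primitives rather than writing a table directly, and would need unmap, move-notify, and error-unwind paths; those are deliberately omitted here.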
Well, my first reaction is very negative, this suggestion is leaking deep internal details like iommu_domain out of the single place that needs them - iommufd - into about 6 exporter drivers. Not nice. I have the mirror of your concern that I don't trust DRM drivers not to abuse the iommu_domain pointer in some very creative way.
Yeah, of course that argument goes into both directions.
The point is just that we have many more importers than exporters to handle, and from experience it was always the importer who messed things up.
Background is that the importer integrates the buffer into its own handling, which might not match the way the exporter expects things to be used.
The results ranged from extremely hard to debug data corruption all the way to security issues, because somebody used vm_insert_page()/vm_insert_pfn() with a different address space object than the one the exporter expected for its memory.
However, with a suitable helper we can largely isolate this to a single function, and yeah, I can see making this functional.
The important point is that the exporter should not need to expose its physical data store nor how its housekeeping works.
As long as we can guarantee that I'm fine with it.
Not sure how this can work for KVM, but I'm getting the feeling the way forward here is to "live and learn" together.
So, in the context of this series, your proposal is an iommu_domain mapping type, to replace PAL. Yes?
Something like that, yes.
Do you have a positive feeling about the general mapping type system from the earlier patches?
As far as I can see that goes into the right direction, yes.
I think if you want these kinds of APIs there are going to be several mapping types required to exchange their very narrowly defined details: scatterlist, scatterlist-ng, iommu_domain, the Intel vfio thing, UALink, driver private interconnects, and whatever KVM needs.
Plus those strange device-to-device interfaces you find on ARM/Android, which people currently maintain outside the upstream kernel and which happen to break all the time.
Thus I think this is making a stronger case that we should have this formal negotiation protocol between exporter and importer for the mapping types.
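One way such a negotiation could be shaped is a capability handshake: the exporter advertises the mapping types it can provide, the importer lists the ones it accepts in preference order, and attach fails cleanly if there is no overlap. The sketch below is purely illustrative; the toy_* names and the enum values are invented for this example and do not correspond to any existing kernel interface.

```c
/* Hypothetical mapping-type ids, named after the examples in this thread. */
enum toy_map_type {
	TOY_MAP_SCATTERLIST  = 1 << 0,
	TOY_MAP_IOMMU_DOMAIN = 1 << 1,
	TOY_MAP_PRIVATE      = 1 << 2, /* driver-private interconnect */
};

/*
 * The exporter advertises a bitmask of supported types; the importer passes
 * its acceptable types in preference order. Returns the first type both
 * sides support, or -1 if the attach must fail.
 */
static int toy_negotiate(unsigned int exporter_mask,
			 const enum toy_map_type *importer_prefs, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (exporter_mask & importer_prefs[i])
			return importer_prefs[i];
	return -1; /* no common mapping type */
}
```

The useful property of an explicit handshake like this is that each mapping type can keep its narrowly defined details private to the pairs that actually speak it, instead of every importer having to understand every exporter's representation.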
Yes, absolutely.
Regards, Christian.
Thanks, Jason