On Tuesday 03 February 2015 09:04:03 Rob Clark wrote:
On Tue, Feb 3, 2015 at 2:48 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
On Mon, Feb 02, 2015 at 03:30:21PM -0500, Rob Clark wrote:
On Mon, Feb 2, 2015 at 11:54 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
My initial thought is for dma-buf to not try to prevent something that an exporter can actually do.. I think the scenario you describe could be handled by two sg-lists, if the exporter was clever enough.
That's already needed: each attachment has its own sg-list. After all, there's no array of dma_addr_t in the sg tables, so you can't use one sg-list for more than one mapping. And due to different IOMMUs, different devices can easily end up with different addresses.
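[To make the per-attachment point concrete, here is a minimal kernel-side sketch using the dma-buf attach/map API; the two device pointers `dev_a` and `dev_b` are hypothetical, and error handling is omitted:]

```c
#include <linux/dma-buf.h>

/* Each importer attaches separately and gets its own sg_table,
 * so the DMA addresses can legitimately differ per device
 * (e.g. when each device sits behind a different IOMMU). */
struct dma_buf_attachment *att_a = dma_buf_attach(buf, dev_a);
struct dma_buf_attachment *att_b = dma_buf_attach(buf, dev_b);

struct sg_table *sgt_a = dma_buf_map_attachment(att_a, DMA_BIDIRECTIONAL);
struct sg_table *sgt_b = dma_buf_map_attachment(att_b, DMA_BIDIRECTIONAL);

/* sg_dma_address(sgt_a->sgl) need not equal sg_dma_address(sgt_b->sgl):
 * the same pages, but two independent device-address mappings. */
```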
Well, to be fair it may not be explicitly stated, but currently one should assume the dma_addr_t's in the dmabuf sglist are bogus. With GPUs that implement per-process/context page tables, I'm not really sure that there is a sane way to actually do anything else..
Hm, what do per-process/context page tables have to do here? At least on i915 we have two levels of page tables:
- first level for vm/device isolation, used through dma api
- 2nd level for per-gpu-context isolation and context switching, handled internally.
Since atm the dma api doesn't have any concept of contexts or different page tables, I don't see how you could use that at all.
Since I'm stuck w/ an iommu instead of a built-in mmu, my plan was to drop use of dma-mapping entirely (incl. the current call to dma_map_sg, which I just need until we can use drm_clflush on arm), and attach/detach iommu domains directly to implement context switches. At that point, dma_addr_t really has no sensible meaning for me.
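[A hedged sketch of what "attach/detach iommu domains directly to implement context switches" could look like, using only the generic iommu API; the `gpu_ctx` structure and function names are hypothetical driver glue, not anything in the tree:]

```c
#include <linux/iommu.h>

/* One IOMMU domain (i.e. one set of page tables) per GPU context. */
struct gpu_ctx {
	struct iommu_domain *domain;
};

static int gpu_ctx_init(struct gpu_ctx *ctx, struct device *gpu)
{
	ctx->domain = iommu_domain_alloc(gpu->bus);
	if (!ctx->domain)
		return -ENOMEM;
	return 0;
}

/* On a GPU context switch, repoint the IOMMU at the incoming
 * context's page tables instead of the outgoing one's. */
static int gpu_ctx_switch(struct gpu_ctx *prev, struct gpu_ctx *next,
			  struct device *gpu)
{
	if (prev)
		iommu_detach_device(prev->domain, gpu);
	return iommu_attach_device(next->domain, gpu);
}
```

With this scheme the driver chooses the iovas itself (via iommu_map() into each domain), which is why a single dma_addr_t per buffer stops being meaningful: the same buffer can live at different addresses in different contexts.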
I think what you see here is a quite common hardware setup and we really lack the right abstraction for it at the moment. Everybody seems to work around it with a mix of the dma-mapping API and the iommu API. These are doing different things, and even though the dma-mapping API can be implemented on top of the iommu API, they are not really compatible.
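[The "doing different things" point can be illustrated with a minimal, non-compilable fragment contrasting the two APIs; `dev`, `domain`, `sgt`, `sg`, and `iova` are placeholders:]

```c
#include <linux/dma-mapping.h>
#include <linux/iommu.h>

/* dma-mapping API: the device address is assigned implicitly by the
 * platform/IOMMU backend; the caller never picks the iova. */
int nents = dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
/* results come back via sg_dma_address(sg) / sg_dma_len(sg) */

/* iommu API: the caller manages the domain and chooses the iova
 * explicitly, one mapping at a time. */
int ret = iommu_map(domain, iova, sg_phys(sg), sg->length,
		    IOMMU_READ | IOMMU_WRITE);
```

A driver that mixes both on the same device easily ends up with two owners of the same address space, which is part of the missing-abstraction problem being described.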
The drm_clflush helpers don't seem like the right solution to me, because all other devices outside of drm will face the same issue, and I suspect we should fill the missing gaps in the API in a more generic way.
Arnd