On 22.05.19 at 20:30, Daniel Vetter wrote:
[SNIP]
Well, it seems you are making incorrect assumptions about the cache maintenance of DMA-buf here.
At least for all DRM devices I'm aware of mapping/unmapping an attachment does *NOT* have any cache maintenance implications.
E.g. the use case you describe above would certainly fail with amdgpu, radeon, nouveau and i915 because mapping a DMA-buf doesn't stop the exporter from reading/writing to that buffer (just the opposite actually).
All of them assume perfectly coherent access to the underlying memory. As far as I know there are no documented cache maintenance requirements for DMA-buf.
I think it is documented. It's just that on x86 we ignore that, because the dma-api pretends there's never a need for cache flushing on x86 and that everything snoops the CPU caches. Which hasn't been true for over 20 years, ever since AGP happened. The actual rules for x86 dma-buf are very much ad-hoc (and we occasionally reapply some duct tape when cacheline noise shows up somewhere).
Well, I strongly disagree on this. Even on x86, at least AMD GPUs are not fully coherent.
For example, you have the texture cache and the HDP read/write cache. So if both amdgpu and i915 were to write to the same buffer at the same time, we would get corrupted data as well.
The key point is that it is NOT DMA-buf in its map/unmap calls that defines the coherency, but rather the reservation object and its attached dma_fence instances.
So, for example, as long as an exclusive reservation object fence is still unsignaled, I can't assume that all caches are flushed, and so I can't start my own operation/access on the data in question.
Regards, Christian.
I've just filed this away as another instance of the dma-api not fitting GPUs, and given recent discussions, I think that won't improve anytime soon. So we're stuck with essentially undefined dma-buf behaviour. -Daniel