Once more an entire week behind on mails, but this looked interesting enough.
On Fri, Dec 03, 2021 at 03:18:01PM +0100, Thomas Hellström wrote:
On Fri, 2021-12-03 at 14:08 +0100, Christian König wrote:
Am 01.12.21 um 13:16 schrieb Thomas Hellström (Intel):
On 12/1/21 12:25, Christian König wrote:
And why do you use dma_fence_chain to generate a timeline for TTM? That should come naturally because all the moves must be ordered.
Oh, in this case because we're looking at adding stuff at the end of migration (like coalescing object shared fences and / or async unbind fences), which may not complete in order.
Well, that's ok as well. My question is why this single dma_fence then shows up in the dma_fence_chain representing the whole migration?
What we'd like to happen during eviction is that we:
1) await any exclusive- or moving fences, then schedule the migration blit. The blit manages its own GPU ptes. Results in a single fence.
2) Schedule unbind of any gpu vmas, resulting possibly in multiple fences.
This sounds like over-optimizing for nothing. We only really care about pipelining moves on dgpu, and on dgpu we only care about modern userspace (because even gl moves in that direction).
And modern means that usually even write access is only setting a read fence, because in vk/compute we only set write fences for objects which need implicit sync, and _only_ when actually needed.
So ignoring read fences for moves "because it's only reads" is actually busted.
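To make that concrete, here's a minimal sketch with a made-up helper (my_attach_job_fence is not a real driver function), using the dma_resv API as it looked at the time of this thread, caller assumed to hold the reservation lock: with modern vk/compute userspace even a GPU write normally ends up as a shared ("read") fence, and an exclusive (write) fence is only attached when implicit sync is actually requested.

#include <linux/dma-resv.h>

/* Hypothetical submission helper; the dma_resv lock must be held. */
static int my_attach_job_fence(struct dma_resv *resv, struct dma_fence *f,
			       bool needs_implicit_sync)
{
	int ret;

	if (needs_implicit_sync) {
		/* Only objects opted into implicit sync get a write fence. */
		dma_resv_add_excl_fence(resv, f);
		return 0;
	}

	ret = dma_resv_reserve_shared(resv, 1);
	if (ret)
		return ret;

	/* Everything else, GPU writes included, is tracked as shared. */
	dma_resv_add_shared_fence(resv, f);
	return 0;
}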
I think for buffer moves we should document and enforce (in review) the rule that you have to wait for all fences, otherwise boom. Same as before freeing backing storage, really. Otherwise there's just too many gaps and surprises.
And yes with Christian's rework of dma_resv this will change, and we'll allow multiple write fences (because that's what amdgpu encoded into their uapi). Still means that you cannot move a buffer without waiting for read fences (or kernel fences or anything really).
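A minimal sketch of that rule (made-up wrapper name; dma_resv_wait_timeout shown with the bool-based signature it had around this thread):

#include <linux/dma-resv.h>
#include <linux/sched.h>

/* Hypothetical helper: block until *all* fences on the object have
 * signalled before the buffer may be moved, read fences included.
 */
static int my_wait_before_move(struct dma_resv *resv)
{
	long ret;

	ret = dma_resv_wait_timeout(resv, true /* wait_all */,
				    true /* intr */, MAX_SCHEDULE_TIMEOUT);
	return ret < 0 ? ret : 0;
}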
The other thing is this entire spinlock recursion topic for dma_fence, and I'm deeply unhappy about the truckload of tricks i915 plays and hence in favour of avoiding recursion in this area as much as possible.
If we really can't avoid it then irq_work to get a new clean context gets the job done. Making this messy and more work is imo a feature, lock nesting of same-level locks is just not a good & robust engineering idea.
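Rough sketch of that irq_work pattern, with all names (my_chunk etc.) made up for illustration: never signal a second fence from inside another fence's callback, where that fence's spinlock is held, but punt the signalling to an irq_work that runs in a clean locking context.

#include <linux/dma-fence.h>
#include <linux/irq_work.h>
#include <linux/slab.h>

struct my_chunk {
	struct dma_fence *to_signal;	/* fence we still have to signal */
	struct dma_fence_cb cb;		/* callback on the fence we depend on */
	struct irq_work work;		/* clean context for the signalling */
};

static void my_chunk_irq_work(struct irq_work *work)
{
	struct my_chunk *chunk = container_of(work, struct my_chunk, work);

	/* Fresh context, no fence spinlock held here. */
	dma_fence_signal(chunk->to_signal);
	dma_fence_put(chunk->to_signal);
	kfree(chunk);
}

static void my_chunk_dep_cb(struct dma_fence *f, struct dma_fence_cb *cb)
{
	struct my_chunk *chunk = container_of(cb, struct my_chunk, cb);

	/* Called with f->lock held: don't signal here, defer instead. */
	irq_work_queue(&chunk->work);
}

static int my_chunk_arm(struct my_chunk *chunk, struct dma_fence *dep)
{
	int ret;

	init_irq_work(&chunk->work, my_chunk_irq_work);
	ret = dma_fence_add_callback(dep, &chunk->cb, my_chunk_dep_cb);
	if (ret == -ENOENT) {
		/* Dependency already signalled, complete right away. */
		irq_work_queue(&chunk->work);
		ret = 0;
	}
	return ret;
}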
/me back to being completely buried
I do hope I can find some more time to review a few more of Christian's patches this week though :-/
Cheers, Daniel
3) Most but not all of the remaining resv shared fences will have been finished in 2). We can't easily tell which, so we have a couple of shared fences left.
4) Add all fences resulting from 1), 2) and 3) into the per-memory-type dma-fence-chain.
5) hand the resulting dma-fence-chain representing the end of migration over to ttm's resource manager.
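Rough sketch of steps 4) and 5), with made-up names (my_timeline, my_timeline_add_fence), showing how each fence from 1)-3) could be appended to the per-memory-type timeline with dma_fence_chain:

#include <linux/dma-fence-chain.h>

struct my_timeline {
	struct dma_fence *last;	/* newest link of the migration timeline */
	u64 seqno;
};

/* Append @fence to the timeline; the resulting tl->last is what would be
 * handed over to ttm's resource manager (which must take its own ref).
 */
static int my_timeline_add_fence(struct my_timeline *tl,
				 struct dma_fence *fence)
{
	struct dma_fence_chain *chain;

	chain = dma_fence_chain_alloc();
	if (!chain)
		return -ENOMEM;

	/* dma_fence_chain_init() consumes the refs to prev and fence. */
	dma_fence_chain_init(chain, tl->last, dma_fence_get(fence),
			     ++tl->seqno);
	tl->last = &chain->base;
	return 0;
}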
Now this means we have a dma-fence-chain disguised as a dma-fence out in the wild, and it could in theory reappear as a 3) fence for another migration unless a very careful audit is done, or as an input to the dma-fence-array used for that single dependency.
That somehow doesn't seem to make sense because each individual step of the migration needs to wait for those dependencies as well even when it runs in parallel.
But that's not really the point, the point was that an (at least to me) seemingly harmless usage pattern, be it real or fictitious, ends up giving you severe internal- or cross-driver headaches.
Yeah, we probably should document that better. But in general I don't see much reason to allow mixing containers. The dma_fence_array and dma_fence_chain objects have some distinct use cases, and using them to build up larger dependency structures sounds really questionable.
Yes, I tend to agree to some extent here. Perhaps add warnings when adding a chain or array as an input to an array, and when accidentally joining chains, and provide helpers for flattening if needed.
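For the flattening idea, a rough sketch (hypothetical helper, simplified error handling) that walks a dma_fence_chain and repacks the contained fences into a flat dma_fence_array, so no container ends up nested inside another:

#include <linux/dma-fence-array.h>
#include <linux/dma-fence-chain.h>
#include <linux/slab.h>

static struct dma_fence *my_flatten_chain(struct dma_fence *head,
					  u64 context, unsigned int seqno)
{
	struct dma_fence **fences, *iter;
	struct dma_fence_array *array;
	unsigned int i = 0, count = 0;

	dma_fence_chain_for_each(iter, head)
		++count;
	if (!count)
		return NULL;

	fences = kmalloc_array(count, sizeof(*fences), GFP_KERNEL);
	if (!fences)
		return NULL;

	dma_fence_chain_for_each(iter, head) {
		struct dma_fence_chain *chain = to_dma_fence_chain(iter);

		/* chain->fence is the fence wrapped by this link. */
		fences[i++] = dma_fence_get(chain ? chain->fence : iter);
	}

	/* On success the array takes ownership of the fences[] refs. */
	array = dma_fence_array_create(count, fences, context, seqno, false);
	if (!array) {
		while (i--)
			dma_fence_put(fences[i]);
		kfree(fences);
		return NULL;
	}
	return &array->base;
}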
/Thomas
Christian.
/Thomas
Regards, Christian.