On Fri, Dec 03, 2021 at 12:51:44PM +0900, Shunsuke Mie wrote:
> Hi maintainers,
>
> Could you please review this patch series?
Why is this an RFC?
I'm confused as to why this is useful.
This can't copy from MMIO memory, so it shouldn't be compatible
with things like Gaudi - does something prevent this?
Jason
On 12/3/21 15:50, Thomas Hellström wrote:
>
> On 12/3/21 15:26, Christian König wrote:
>> [Adding Daniel here as well]
>>
>> On 12/3/21 15:18, Thomas Hellström wrote:
>>> [SNIP]
>>>> Well that's ok as well. My question is why does this single dma_fence
>>>> then show up in the dma_fence_chain representing the whole
>>>> migration?
>>> What we'd like to happen during eviction is that we
>>>
>>> 1) await any exclusive- or moving fences, then schedule the migration
>>> blit. The blit manages its own GPU ptes. Results in a single fence.
>>> 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
>>> fences.
>>> 3) Most but not all of the remaining resv shared fences will have been
>>> finished in 2). We can't easily tell which, so we have a couple of shared
>>> fences left.
>>
>> Stop, wait a second here. We are going a bit in circles.
>>
>> Before you migrate a buffer, you *MUST* wait for all shared fences to
>> complete. This is documented mandatory DMA-buf behavior.
>>
>> Daniel and I have discussed that quite extensively in the last few
>> months.
>>
>> So how come you do the blit before all shared fences are
>> completed?
>
> Well, we don't currently but wanted to... (I haven't consulted Daniel
> on the matter, tbh).
>
> I was under the impression that all writes would add an exclusive
> fence to the dma_resv.
Yes, that's correct. I'm working on allowing more than one write fence,
but that is currently under review.
> If that's not the case or this is otherwise against the mandatory
> DMA-buf behavior, we can certainly keep that part as is and that
> would eliminate 3).
Ah, now that somewhat starts to make sense.
So your blit only waits for the writes to finish before it starts.
Yes, that's legal as long as you don't change the original content
with the blit.
But don't you then need to wait for both reads and writes before you
unmap the VMAs?
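(A blocking-wait sketch of those two rules, assuming the dma_resv interface
as of this thread, where wait_all = false waits only for the exclusive write
fence; the real code would schedule these as fence dependencies rather than
blocking:)

#include <linux/dma-resv.h>
#include <linux/sched.h>

/*
 * Sketch only, not a real driver path: the blit may start once the
 * writers (exclusive fence) are done, but unmapping the GPU VMAs has to
 * wait for the readers (shared fences) as well.
 */
static int example_migrate(struct dma_resv *resv)
{
        long ret;

        /* Writers only: enough before the blit, which leaves the
         * original content untouched. */
        ret = dma_resv_wait_timeout(resv, false, true, MAX_SCHEDULE_TIMEOUT);
        if (ret < 0)
                return ret;

        /* ...schedule the migration blit here... */

        /* Readers and writers: required before unmapping the GPU VMAs. */
        ret = dma_resv_wait_timeout(resv, true, true, MAX_SCHEDULE_TIMEOUT);

        /* ...unbind the GPU VMAs here... */

        return ret < 0 ? ret : 0;
}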
Anyway the good news is your problem totally goes away with the DMA-resv
rework I've already sent out. Basically it is now possible to have more
than one fence in the DMA-resv object for migrations and all existing
fences are kept around until they are finished.
Regards,
Christian.
>
> /Thomas
>
[Adding Daniel here as well]
On 12/3/21 15:18, Thomas Hellström wrote:
> [SNIP]
>> Well that's ok as well. My question is why does this single dma_fence
>> then show up in the dma_fence_chain representing the whole
>> migration?
> What we'd like to happen during eviction is that we
>
> 1) await any exclusive- or moving fences, then schedule the migration
> blit. The blit manages its own GPU ptes. Results in a single fence.
> 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
> fences.
> 3) Most but not all of the remaining resv shared fences will have been
> finished in 2). We can't easily tell which, so we have a couple of shared
> fences left.
Stop, wait a second here. We are going a bit in circles.
Before you migrate a buffer, you *MUST* wait for all shared fences to
complete. This is documented mandatory DMA-buf behavior.
Daniel and I have discussed that quite extensively in the last few months.
So how come you do the blit before all shared fences are
completed?
> 4) Add all fences resulting from 1) 2) and 3) into the per-memory-type
> dma-fence-chain.
> 5) hand the resulting dma-fence-chain representing the end of migration
> over to ttm's resource manager.
>
> Now this means we have a dma-fence-chain disguised as a dma-fence out
> in the wild, and it could in theory reappear as a 3) fence for another
> migration unless a very careful audit is done, or as an input to the
> dma-fence-array used for that single dependency.
>
>> That somehow doesn't seem to make sense because each individual step of
>> the migration needs to wait for those dependencies as well even when it
>> runs in parallel.
>>
>>> But that's not really the point; the point was that an (at least to
>>> me) seemingly harmless usage pattern, be it real or fictitious, ends up
>>> giving you severe internal- or cross-driver headaches.
>> Yeah, we probably should document that better. But in general I don't
>> see much reason to allow mixing containers. The dma_fence_array and
>> dma_fence_chain objects have some distinct use cases, and using them
>> to build up larger dependency structures sounds really questionable.
> Yes, I tend to agree to some extent here. Perhaps add warnings when
> adding a chain or array as an input to an array and when accidentally
> joining chains, and provide helpers for flattening if needed.
Yeah, that's probably a really good idea. Going to put it on my todo list.
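A rough sketch of what such a flattening helper could look like (the helper
itself is hypothetical; dma_fence_chain_for_each(), to_dma_fence_chain() and
to_dma_fence_array() are the existing primitives it builds on):

#include <linux/dma-fence-array.h>
#include <linux/dma-fence-chain.h>

/*
 * Hypothetical helper: walk a (possible) dma_fence_chain and, for every
 * link that wraps a dma_fence_array, hand the individual array members to
 * the caller instead of the container. One level of unwrapping only,
 * matching the worst case mentioned here (chain -> array -> fence).
 */
static int example_for_each_leaf(struct dma_fence *fence,
                                 int (*cb)(struct dma_fence *leaf, void *data),
                                 void *data)
{
        struct dma_fence *iter;
        int ret = 0;

        dma_fence_chain_for_each(iter, fence) {
                struct dma_fence_chain *chain = to_dma_fence_chain(iter);
                struct dma_fence *f = chain ? chain->fence : iter;
                struct dma_fence_array *array = to_dma_fence_array(f);
                unsigned int i;

                if (!array) {
                        ret = cb(f, data);
                } else {
                        for (i = 0; i < array->num_fences; ++i) {
                                ret = cb(array->fences[i], data);
                                if (ret)
                                        break;
                        }
                }
                if (ret) {
                        dma_fence_put(iter);
                        break;
                }
        }
        return ret;
}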
Thanks,
Christian.
>
> /Thomas
>
>
>> Christian.
>>
>>> /Thomas
>>>
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>
>
On 12/1/21 13:16, Thomas Hellström (Intel) wrote:
>
> On 12/1/21 12:25, Christian König wrote:
>> On 12/1/21 12:04, Thomas Hellström (Intel) wrote:
>>>
>>> On 12/1/21 11:32, Christian König wrote:
>>>> On 12/1/21 11:15, Thomas Hellström (Intel) wrote:
>>>>> [SNIP]
>>>>>>
>>>>>> What we could do is to avoid all this by not calling the callback
>>>>>> with the lock held in the first place.
>>>>>
>>>>> If that's possible that might be a good idea, pls also see below.
>>>>
>>>> The problem with that is
>>>> dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If
>>>> we could avoid using that or at least allow it to drop the lock
>>>> then we could call the callback without holding it.
>>>>
>>>> Somebody would need to audit the drivers and see if holding the
>>>> lock is really necessary anywhere.
>>>>
>>>>>>
>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> /Thomas
>>>>>>>>>
>>>>>>>>> Oh, and a follow up question:
>>>>>>>>>
>>>>>>>>> If there was a way to break the recursion on final put()
>>>>>>>>> (using the same basic approach as patch 2 in this series uses
>>>>>>>>> to break recursion in enable_signaling()), so that none of
>>>>>>>>> these containers did require any special treatment, would it
>>>>>>>>> be worth pursuing? I guess it might be possible by having the
>>>>>>>>> callbacks drop the references rather than the loop in the
>>>>>>>>> final put. + a couple of changes in code iterating over the
>>>>>>>>> fence pointers.
>>>>>>>>
>>>>>>>> That won't really help, you just move the recursion from the
>>>>>>>> final put into the callback.
>>>>>>>
>>>>>>> How do we recurse from the callback? The introduced fence_put()
>>>>>>> of individual fence pointers
>>>>>>> doesn't recurse anymore (at most 1 level), and any callback
>>>>>>> recursion is broken by the irq_work?
>>>>>>
>>>>>> Yeah, but then you would need to take another lock to avoid
>>>>>> racing with dma_fence_array_signaled().
>>>>>>
>>>>>>>
>>>>>>> I figure the big amount of work would be to adjust code that
>>>>>>> iterates over the individual fence pointers to recognize that
>>>>>>> they are rcu protected.
>>>>>>
>>>>>> Could be that we could solve this with RCU, but that sounds like
>>>>>> a lot of churn for no gain at all.
>>>>>>
>>>>>> In other words even with the problems solved I think it would be
>>>>>> a really bad idea to allow chaining of dma_fence_array objects.
>>>>>
>>>>> Yes, that was really the question: is it worth pursuing this? I'm
>>>>> not really suggesting we should allow this as an intentional
>>>>> feature. I'm worried, however, that if we allow these containers
>>>>> to start floating around cross-driver (or even internally)
>>>>> disguised as ordinary dma_fences, they would require a lot of
>>>>> driver special casing, or else completely unexpected WARN_ON()s and
>>>>> lockdep splats would start to turn up, scaring people off from
>>>>> using them. And that would be a breeding ground for hairy
>>>>> driver-private constructs.
>>>>
>>>> Well the question is why we would want to do it?
>>>>
>>>> If it's to avoid inter-driver lock dependencies by not calling
>>>> the callback with the spinlock held, then yes please. We had tons
>>>> of problems with that, resulting in irq_work and work_item
>>>> delegation all over the place.
>>>
>>> Yes, that sounds like something desirable, but in these containers,
>>> what's causing the lock dependencies is the enable_signaling()
>>> callback that is typically called locked.
>>>
>>>
>>>>
>>>> If it's to allow nesting of dma_fence_array instances, then it's
>>>> most likely a really bad idea even if we fix all the locking order
>>>> problems.
>>>
>>> Well I think my use-case where I hit a dead end may illustrate what
>>> worries me here:
>>>
>>> 1) We use a dma-fence-array to coalesce all dependencies for ttm
>>> object migration.
>>> 2) We use a dma-fence-chain to order the resulting dma_fence into a
>>> timeline because the TTM resource manager code requires that.
>>>
>>> Initially seemingly harmless to me.
>>>
>>> But after a sequence evict->alloc->clear, the dma-fence-chain feeds
>>> into the dma-fence-array for the clearing operation. Code still
>>> works fine, and no deep recursion, no warnings. But if I were to add
>>> another driver to the system that instead feeds a dma-fence-array
>>> into a dma-fence-chain, this would give me a lockdep splat.
>>>
>>> So then if somebody were to come up with the splendid idea of using
>>> a dma-fence-chain to initially coalesce fences, I'd hit the same
>>> problem or risk illegally joining two dma-fence-chains together.
>>>
>>> To fix this, I would need to look at the incoming fences and iterate
>>> over any dma-fence-array or dma-fence-chain that is fed into the
>>> dma-fence-array to flatten out the input. In fact all
>>> dma-fence-array users would need to do that, and even
>>> dma-fence-chain users would need to watch out for not joining chains
>>> together or accidentally adding an array that perhaps came as a
>>> disguised dma-fence from another driver.
>>>
>>> So the purpose to me would be to allow these containers as input to
>>> each other without a lot of in-driver special-casing, be it by
>>> breaking recursion or by built-in flattening, to avoid
>>>
>>> a) Hitting issues in the future or with existing interoperating
>>> drivers.
>>> b) Driver-private containers that might also break the
>>> interoperability. (For example, i915's currently driver-private
>>> dma_fence_work avoids all these problems, but we're attempting to
>>> address issues in common code rather than re-inventing stuff
>>> internally).
>>
>> I don't think that a dma_fence_array or dma_fence_chain is the right
>> thing to begin with in those use cases.
>>
>> When you want to coalesce the dependencies for a job you could either
>> use an xarray like Daniel did for the scheduler or some hashtable
>> like we use in amdgpu. But I don't see the need for exposing the
>> dma_fence interface for those.
>
> This is because the interface to our migration code takes just a
> single dma-fence as a dependency. Now this is of course something we
> need to look at mitigating, but see below.
Yeah, that's actually fine.
>>
>> And why do you use dma_fence_chain to generate a timeline for TTM?
>> That should come naturally because all the moves must be ordered.
>
> Oh, in this case because we're looking at adding stuff at the end of
> migration (like coalescing object shared fences and / or async unbind
> fences), which may not complete in order.
Well that's ok as well. My question is why does this single dma_fence
then show up in the dma_fence_chain representing the whole migration?
That somehow doesn't seem to make sense because each individual step of
the migration needs to wait for those dependencies as well even when it
runs in parallel.
> But that's not really the point; the point was that an (at least to
> me) seemingly harmless usage pattern, be it real or fictitious, ends up
> giving you severe internal- or cross-driver headaches.
Yeah, we probably should document that better. But in general I don't
see much reason to allow mixing containers. The dma_fence_array and
dma_fence_chain objects have some distinct use cases, and using them
to build up larger dependency structures sounds really questionable.
Christian.
>
> /Thomas
>
>
>>
>> Regards,
>> Christian.
>>
>>
On 12/1/21 12:04, Thomas Hellström (Intel) wrote:
>
> On 12/1/21 11:32, Christian König wrote:
>> On 12/1/21 11:15, Thomas Hellström (Intel) wrote:
>>> [SNIP]
>>>>
>>>> What we could do is to avoid all this by not calling the callback
>>>> with the lock held in the first place.
>>>
>>> If that's possible that might be a good idea, pls also see below.
>>
>> The problem with that is
>> dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If we
>> could avoid using that or at least allow it to drop the lock then we
>> could call the callback without holding it.
>>
>> Somebody would need to audit the drivers and see if holding the lock
>> is really necessary anywhere.
>>
>>>>
>>>>>>
>>>>>>>>
>>>>>>>> /Thomas
>>>>>>>
>>>>>>> Oh, and a follow up question:
>>>>>>>
>>>>>>> If there was a way to break the recursion on final put() (using
>>>>>>> the same basic approach as patch 2 in this series uses to break
>>>>>>> recursion in enable_signaling()), so that none of these
>>>>>>> containers did require any special treatment, would it be worth
>>>>>>> pursuing? I guess it might be possible by having the callbacks
>>>>>>> drop the references rather than the loop in the final put. + a
>>>>>>> couple of changes in code iterating over the fence pointers.
>>>>>>
>>>>>> That won't really help, you just move the recursion from the
>>>>>> final put into the callback.
>>>>>
>>>>> How do we recurse from the callback? The introduced fence_put() of
>>>>> individual fence pointers
>>>>> doesn't recurse anymore (at most 1 level), and any callback
>>>>> recursion is broken by the irq_work?
>>>>
>>>> Yeah, but then you would need to take another lock to avoid racing
>>>> with dma_fence_array_signaled().
>>>>
>>>>>
>>>>> I figure the big amount of work would be to adjust code that
>>>>> iterates over the individual fence pointers to recognize that they
>>>>> are rcu protected.
>>>>
>>>> Could be that we could solve this with RCU, but that sounds like a
>>>> lot of churn for no gain at all.
>>>>
>>>> In other words even with the problems solved I think it would be a
>>>> really bad idea to allow chaining of dma_fence_array objects.
>>>
>>> Yes, that was really the question: is it worth pursuing this? I'm
>>> not really suggesting we should allow this as an intentional
>>> feature. I'm worried, however, that if we allow these containers to
>>> start floating around cross-driver (or even internally) disguised as
>>> ordinary dma_fences, they would require a lot of driver special
>>> casing, or else completely unexpected WARN_ON()s and lockdep splats
>>> would start to turn up, scaring people off from using them. And that
>>> would be a breeding ground for hairy driver-private constructs.
>>
>> Well the question is why we would want to do it?
>>
>> If it's to avoid inter-driver lock dependencies by not calling
>> the callback with the spinlock held, then yes please. We had tons of
>> problems with that, resulting in irq_work and work_item delegation
>> all over the place.
>
> Yes, that sounds like something desirable, but in these containers,
> what's causing the lock dependencies is the enable_signaling()
> callback that is typically called locked.
>
>
>>
>> If it's to allow nesting of dma_fence_array instances, then it's most
>> likely a really bad idea even if we fix all the locking order problems.
>
> Well I think my use-case where I hit a dead end may illustrate what
> worries me here:
>
> 1) We use a dma-fence-array to coalesce all dependencies for ttm
> object migration.
> 2) We use a dma-fence-chain to order the resulting dma_fence into a
> timeline because the TTM resource manager code requires that.
>
> Initially seemingly harmless to me.
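(As a sketch, with reference handling simplified and the helper name made
up, the pattern in 1) and 2) above looks roughly like this:)

#include <linux/dma-fence-array.h>
#include <linux/dma-fence-chain.h>

/*
 * Sketch of the construction described above: coalesce the migration
 * dependencies into a dma_fence_array, then link the result into a
 * dma_fence_chain to form the timeline the resource manager wants.
 * Error handling and fence reference ownership are simplified.
 */
static struct dma_fence *example_migration_fence(struct dma_fence **fences,
                                                 int num_fences,
                                                 struct dma_fence *timeline,
                                                 u64 seqno)
{
        struct dma_fence_array *array;
        struct dma_fence_chain *link;

        /* 1) one fence for all dependencies */
        array = dma_fence_array_create(num_fences, fences,
                                       dma_fence_context_alloc(1), 1, false);
        if (!array)
                return NULL;

        /* 2) order it into a timeline */
        link = dma_fence_chain_alloc();
        if (!link)
                return &array->base;

        dma_fence_chain_init(link, timeline, &array->base, seqno);
        return &link->base;
}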
>
> But after a sequence evict->alloc->clear, the dma-fence-chain feeds
> into the dma-fence-array for the clearing operation. Code still works
> fine, and no deep recursion, no warnings. But if I were to add another
> driver to the system that instead feeds a dma-fence-array into a
> dma-fence-chain, this would give me a lockdep splat.
>
> So then if somebody were to come up with the splendid idea of using a
> dma-fence-chain to initially coalesce fences, I'd hit the same problem
> or risk illegally joining two dma-fence-chains together.
>
> To fix this, I would need to look at the incoming fences and iterate
> over any dma-fence-array or dma-fence-chain that is fed into the
> dma-fence-array to flatten out the input. In fact all dma-fence-array
> users would need to do that, and even dma-fence-chain users would need
> to watch out for not joining chains together or accidentally adding an
> array that perhaps came as a disguised dma-fence from another driver.
>
> So the purpose to me would be to allow these containers as input to
> each other without a lot of in-driver special-casing, be it by breaking
> recursion or by built-in flattening, to avoid
>
> a) Hitting issues in the future or with existing interoperating drivers.
> b) Driver-private containers that might also break the
> interoperability. (For example, i915's currently driver-private
> dma_fence_work avoids all these problems, but we're attempting to
> address issues in common code rather than re-inventing stuff internally).
I don't think that a dma_fence_array or dma_fence_chain is the right
thing to begin with in those use cases.
When you want to coalesce the dependencies for a job you could either
use an xarray like Daniel did for the scheduler or some hashtable like
we use in amdgpu. But I don't see the need for exposing the dma_fence
interface for those.
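As a sketch of that xarray-style coalescing (the helper is hypothetical; the
idea is deduplication per fence context, keeping only the latest fence, which
is roughly what the scheduler and amdgpu do):

#include <linux/dma-fence.h>
#include <linux/xarray.h>

/*
 * Hypothetical helper: collect dependencies keyed by fence context and
 * keep only the latest fence per context, so the set stays small without
 * wrapping everything into a dma_fence_array. Indexing by the 64-bit
 * context is simplified here.
 */
static int example_add_dependency(struct xarray *deps, struct dma_fence *fence)
{
        struct dma_fence *old;

        old = xa_load(deps, fence->context);
        if (old && !dma_fence_is_later(fence, old))
                return 0;       /* already covered by a later fence */

        dma_fence_get(fence);
        old = xa_store(deps, fence->context, fence, GFP_KERNEL);
        if (xa_is_err(old)) {
                dma_fence_put(fence);
                return xa_err(old);
        }
        dma_fence_put(old);     /* NULL-safe */
        return 0;
}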
And why do you use dma_fence_chain to generate a timeline for TTM? That
should come naturally because all the moves must be ordered.
Regards,
Christian.
On 12/1/21 11:15, Thomas Hellström (Intel) wrote:
> [SNIP]
>>
>> What we could do is to avoid all this by not calling the callback
>> with the lock held in the first place.
>
> If that's possible that might be a good idea, pls also see below.
The problem with that is
dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If we
could avoid using that or at least allow it to drop the lock then we
could call the callback without holding it.
Somebody would need to audit the drivers and see if holding the lock is
really necessary anywhere.
>>
>>>>
>>>>>>
>>>>>> /Thomas
>>>>>
>>>>> Oh, and a follow up question:
>>>>>
>>>>> If there was a way to break the recursion on final put() (using
>>>>> the same basic approach as patch 2 in this series uses to break
>>>>> recursion in enable_signaling()), so that none of these containers
>>>>> did require any special treatment, would it be worth pursuing? I
>>>>> guess it might be possible by having the callbacks drop the
>>>>> references rather than the loop in the final put. + a couple of
>>>>> changes in code iterating over the fence pointers.
>>>>
>>>> That won't really help, you just move the recursion from the final
>>>> put into the callback.
>>>
>>> How do we recurse from the callback? The introduced fence_put() of
>>> individual fence pointers
>>> doesn't recurse anymore (at most 1 level), and any callback
>>> recursion is broken by the irq_work?
>>
>> Yeah, but then you would need to take another lock to avoid racing
>> with dma_fence_array_signaled().
>>
>>>
>>> I figure the big amount of work would be to adjust code that
>>> iterates over the individual fence pointers to recognize that they
>>> are rcu protected.
>>
>> Could be that we could solve this with RCU, but that sounds like a
>> lot of churn for no gain at all.
>>
>> In other words even with the problems solved I think it would be a
>> really bad idea to allow chaining of dma_fence_array objects.
>
> Yes, that was really the question: is it worth pursuing this? I'm not
> really suggesting we should allow this as an intentional feature. I'm
> worried, however, that if we allow these containers to start floating
> around cross-driver (or even internally) disguised as ordinary
> dma_fences, they would require a lot of driver special casing, or else
> completely unexpected WARN_ON()s and lockdep splats would start to turn
> up, scaring people off from using them. And that would be a breeding
> ground for hairy driver-private constructs.
Well the question is why we would want to do it?
If it's to avoid inter-driver lock dependencies by not calling the
callback with the spinlock held, then yes please. We had tons of
problems with that, resulting in irq_work and work_item delegation all
over the place.
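A minimal sketch of that delegation pattern (names invented; the point is
that the callback, which runs with the signalling fence's lock held, only
queues irq_work, and the container is signalled later without that lock):

#include <linux/dma-fence.h>
#include <linux/irq_work.h>

struct example_container {
        struct dma_fence base;  /* our own fence, with its own lock */
        struct irq_work work;   /* init_irq_work(&work, example_work_func) */
        struct dma_fence_cb cb; /* registered via dma_fence_add_callback() */
};

static void example_work_func(struct irq_work *work)
{
        struct example_container *c = container_of(work, typeof(*c), work);

        dma_fence_signal(&c->base);     /* only takes c->base.lock */
        dma_fence_put(&c->base);
}

static void example_dep_cb(struct dma_fence *f, struct dma_fence_cb *cb)
{
        struct example_container *c = container_of(cb, typeof(*c), cb);

        /* Called with f->lock held: defer instead of signalling here. */
        irq_work_queue(&c->work);
}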
If it's to allow nesting of dma_fence_array instances, then it's most
likely a really bad idea even if we fix all the locking order problems.
Christian.
>
> /Thomas
>
>
>>
>> Christian.
>>
>>>
>>>
>>> Thanks,
>>>
>>> /Thomas
>>>
>>>
On Thu, Nov 25, 2021 at 11:48 PM <guangming.cao(a)mediatek.com> wrote:
>
> From: Guangming <Guangming.Cao(a)mediatek.com>
>
> The previous version used 'sg_table.nents' to traverse the sg_table in the
> page-freeing path.
> However, 'sg_table.nents' is reassigned by 'dma_map_sg' to the number of
> entries created in the DMA address space.
> So using 'sg_table.nents' in the page-freeing path can leave some pages
> never freed.
>
> Here we should use sg_table.orig_nents to free the pages, and using the
> sgtable helper 'for_each_sgtable_sg' (instead of the more common helper
> 'for_each_sg', which may cause a memory leak) is the better fix.
>
> Fixes: d963ab0f15fb0 ("dma-buf: system_heap: Allocate higher order pages if available")
> Signed-off-by: Guangming <Guangming.Cao(a)mediatek.com>
> Reviewed-by: Robin Murphy <robin.murphy(a)arm.com>
> Cc: <stable(a)vger.kernel.org> # 5.11.*
Thanks so much for catching this and sending in all the revisions!
Reviewed-by: John Stultz <john.stultz(a)linaro.org>
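For reference, a minimal sketch of the free path being fixed, assuming one
(possibly higher-order) page per sg entry as in the system heap; this is a
sketch, not the actual patch:

#include <linux/mm.h>
#include <linux/scatterlist.h>

/*
 * for_each_sgtable_sg() walks orig_nents entries, so every allocated page
 * is visited even after dma_map_sg() has rewritten nents.
 */
static void example_free_pages(struct sg_table *sgt)
{
        struct scatterlist *sg;
        int i;

        for_each_sgtable_sg(sgt, sg, i) {
                struct page *page = sg_page(sg);

                __free_pages(page, compound_order(page));
        }
        sg_free_table(sgt);
}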
On 12/1/21 09:23, Thomas Hellström (Intel) wrote:
> [SNIP]
>>>>> Jason and I came up with a deep dive iterator for his use case, but I
>>>>> think we don't want to use that any more after my dma_resv rework.
>>>>>
>>>>> In other words when you need to create a new dma_fence_array you
>>>>> flatten
>>>>> out the existing construct which is at worst case
>>>>> dma_fence_chain->dma_fence_array->dma_fence.
>>>> Ok, is there any cross-driver contract here, like every driver using a
>>>> dma_fence_array needing to check for dma_fence_chain and flatten like
>>>> above?
>>
>> So far we only discussed that on the mailing list but haven't made
>> any documentation for that.
>
> OK, one other cross-driver pitfall I see is if someone accidentally
> joins two fence chains together by unknowingly creating a fence chain
> with another fence chain as the @fence argument?
That would indeed be illegal and we should probably add a WARN_ON() for
that.
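As a sketch, such a check could sit in front of the chain initializer (the
wrapper is hypothetical; dma_fence_chain_init() is the existing entry point):

#include <linux/dma-fence-chain.h>

/*
 * Hypothetical wrapper: refuse to use another chain as the per-link
 * fence, which would silently join two timelines.
 */
static void example_chain_init(struct dma_fence_chain *chain,
                               struct dma_fence *prev,
                               struct dma_fence *fence,
                               uint64_t seqno)
{
        WARN_ON(to_dma_fence_chain(fence));
        dma_fence_chain_init(chain, prev, fence, seqno);
}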
>
> The third cross-driver pitfall IMHO is the locking dependency these
> containers add. Other drivers (read at least i915) may have defined
> slightly different locking orders and that should also be addressed if
> needed, but that requires a cross-driver agreement on what the locking
> orders really are. Patch 1 actually addresses this, while keeping the
> container lockdep warnings for deep recursions, so at least I think
> that could serve as a discussion starter.
No, drivers should never make any assumptions on that.
E.g. when you need to take a lock from a callback you must guarantee
that you never have that lock taken when you call any of the dma_fence
functions. Your patch breaks the lockdep annotation for that.
What we could do is to avoid all this by not calling the callback with
the lock held in the first place.
>>
>>>>
>>>> /Thomas
>>>
>>> Oh, and a follow up question:
>>>
>>> If there was a way to break the recursion on final put() (using the
>>> same basic approach as patch 2 in this series uses to break
>>> recursion in enable_signaling()), so that none of these containers
>>> did require any special treatment, would it be worth pursuing? I
>>> guess it might be possible by having the callbacks drop the
>>> references rather than the loop in the final put. + a couple of
>>> changes in code iterating over the fence pointers.
>>
>> That won't really help, you just move the recursion from the final
>> put into the callback.
>
> How do we recurse from the callback? The introduced fence_put() of
> individual fence pointers
> doesn't recurse anymore (at most 1 level), and any callback recursion
> is broken by the irq_work?
Yeah, but then you would need to take another lock to avoid racing with
dma_fence_array_signaled().
>
> I figure the big amount of work would be to adjust code that iterates
> over the individual fence pointers to recognize that they are rcu
> protected.
Could be that we could solve this with RCU, but that sounds like a lot
of churn for no gain at all.
In other words even with the problems solved I think it would be a
really bad idea to allow chaining of dma_fence_array objects.
Christian.
>
>
> Thanks,
>
> /Thomas
>
>