On 01.12.21 at 11:15, Thomas Hellström (Intel) wrote:
> [SNIP]
>>
>> What we could do is to avoid all this by not calling the callback
>> with the lock held in the first place.
>
> If that's possible that might be a good idea, pls also see below.
The problem with that is
dma_fence_signal_locked()/dma_fence_signal_timestamp_locked(). If we
could avoid using those, or at least allow them to drop the lock, then
we could call the callback without holding it.
Somebody would need to audit the drivers and see if holding the lock is
really necessary anywhere.
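To make the idea concrete, a rough sketch (hypothetical, explicitly not
the current kernel implementation) of signaling that detaches the
callback list under the lock and runs the callbacks only after dropping
it, so a callback can take other fence locks without nesting:

#include <linux/dma-fence.h>

static int dma_fence_signal_cbs_unlocked(struct dma_fence *fence)
{
        struct dma_fence_cb *cur, *tmp;
        struct list_head cb_list;
        unsigned long flags;

        spin_lock_irqsave(fence->lock, flags);
        if (test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
                spin_unlock_irqrestore(fence->lock, flags);
                return -EINVAL; /* already signaled */
        }
        /* Detach the callbacks while still under the lock. */
        list_replace_init(&fence->cb_list, &cb_list);
        spin_unlock_irqrestore(fence->lock, flags);

        /* The callbacks now run without fence->lock held. */
        list_for_each_entry_safe(cur, tmp, &cb_list, node) {
                INIT_LIST_HEAD(&cur->node);
                cur->func(fence, cur);
        }
        return 0;
}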
>>
>>>>
>>>>>>
>>>>>> /Thomas
>>>>>
>>>>> Oh, and a follow up question:
>>>>>
>>>>> If there was a way to break the recursion on final put() (using
>>>>> the same basic approach as patch 2 in this series uses to break
>>>>> recursion in enable_signaling()), so that none of these containers
>>>>> did require any special treatment, would it be worth pursuing? I
>>>>> guess it might be possible by having the callbacks drop the
>>>>> references rather than the loop in the final put. + a couple of
>>>>> changes in code iterating over the fence pointers.
>>>>
>>>> That won't really help; you just move the recursion from the final
>>>> put into the callback.
>>>
>>> How do we recurse from the callback? The introduced fence_put() of
>>> individual fence pointers doesn't recurse anymore (at most 1 level),
>>> and any callback recursion is broken by the irq_work?
>>
>> Yeah, but then you would need to take another lock to avoid racing
>> with dma_fence_array_signaled().
>>
>>>
>>> I figure the bulk of the work would be to adjust code that iterates
>>> over the individual fence pointers to recognize that they are
>>> RCU-protected.
>>
>> Could be that we could solve this with RCU, but that sounds like a
>> lot of churn for no gain at all.
>>
>> In other words, even with the problems solved, I think it would be a
>> really bad idea to allow chaining of dma_fence_array objects.
>
> Yes, that was really the question: is it worth pursuing this? I'm not
> really suggesting we should allow this as an intentional feature. I'm
> worried, however, that if we allow these containers to start floating
> around cross-driver (or even internally) disguised as ordinary
> dma_fences, they would require a lot of driver special casing, or else
> completely unexpected WARN_ON()s and lockdep splats would start to turn
> up, scaring people off from using them. And that would be a breeding
> ground for hairy driver-private constructs.
Well, the question is why we would want to do it.
If it's to avoid inter-driver lock dependencies by not calling the
callback with the spinlock held, then yes, please. We had tons of
problems with that, resulting in irq_work and work_item delegation all
over the place.
If it's to allow nesting of dma_fence_array instances, then it's most
likely a really bad idea even if we fix all the locking order problems.
Christian.
>
> /Thomas
>
>
>>
>> Christian.
>>
>>>
>>>
>>> Thanks,
>>>
>>> /Thomas
>>>
>>>
On Thu, Nov 25, 2021 at 11:48 PM <guangming.cao(a)mediatek.com> wrote:
>
> From: Guangming <Guangming.Cao(a)mediatek.com>
>
> The previous version used 'sg_table.nents' to traverse the sg_table in
> the page-freeing flow.
> However, 'sg_table.nents' is reassigned by 'dma_map_sg' to the number
> of entries created in the DMA address space.
> So using 'sg_table.nents' in the page-freeing flow can leave some
> pages unfreed.
>
> We should use 'sg_table.orig_nents' to free the pages here, and the
> sgtable helper 'for_each_sgtable_sg' is a better fit than the more
> common 'for_each_sg' helper, which may cause a memory leak in this
> path.
>
> Fixes: d963ab0f15fb0 ("dma-buf: system_heap: Allocate higher order pages if available")
> Signed-off-by: Guangming <Guangming.Cao(a)mediatek.com>
> Reviewed-by: Robin Murphy <robin.murphy(a)arm.com>
> Cc: <stable(a)vger.kernel.org> # 5.11.*
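For reference, the corrected free path would look roughly like this (a
sketch based on the commit description above, not the verbatim driver
code; the function name is made up):

#include <linux/mm.h>
#include <linux/scatterlist.h>

static void heap_free_pages(struct sg_table *sgt)
{
        struct scatterlist *sg;
        int i;

        /*
         * for_each_sgtable_sg() iterates sgt->orig_nents entries, i.e.
         * everything that was allocated, while for_each_sg() with
         * sgt->nents would only cover the entries dma_map_sg() created.
         */
        for_each_sgtable_sg(sgt, sg, i)
                __free_pages(sg_page(sg), compound_order(sg_page(sg)));

        sg_free_table(sgt);
}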
Thanks so much for catching this and sending in all the revisions!
Reviewed-by: John Stultz <john.stultz(a)linaro.org>
On 01.12.21 at 09:23, Thomas Hellström (Intel) wrote:
> [SNIP]
>>>>> Jason and I came up with a deep dive iterator for his use case, but I
>>>>> think we don't want to use that any more after my dma_resv rework.
>>>>>
>>>>> In other words, when you need to create a new dma_fence_array you
>>>>> flatten out the existing construct, which is at worst
>>>>> dma_fence_chain->dma_fence_array->dma_fence.
>>>> OK, is there any cross-driver contract here, like every driver
>>>> using a dma_fence_array needing to check for dma_fence_chain and
>>>> flatten as above?
>>
>> So far we only discussed that on the mailing list but haven't made
>> any documentation for that.
>
> OK, one other cross-driver pitfall I see is if someone accidentally
> joins two fence chains together by creating a fence chain unknowingly
> using another fence chain as the @fence argument?
That would indeed be illegal and we should probably add a WARN_ON() for
that.
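Something like this, perhaps (a hedged sketch; the checked wrapper
below is hypothetical, while dma_fence_is_chain() and
dma_fence_chain_init() are existing APIs):

#include <linux/dma-fence-chain.h>

/* Hypothetical checked variant of dma_fence_chain_init(). */
static void dma_fence_chain_init_checked(struct dma_fence_chain *chain,
                                         struct dma_fence *prev,
                                         struct dma_fence *fence,
                                         uint64_t seqno)
{
        /* Accidentally joining two chains would be illegal. */
        WARN_ON(dma_fence_is_chain(fence));
        dma_fence_chain_init(chain, prev, fence, seqno);
}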
>
> The third cross-driver pitfall IMHO is the locking dependency these
> containers add. Other drivers (read at least i915) may have defined
> slightly different locking orders and that should also be addressed if
> needed, but that requires a cross-driver agreement on what the locking
> orders really are. Patch 1 actually addresses this, while keeping the
> container lockdep warnings for deep recursions, so at least I think
> that could serve as a discussion starter.
No, drivers should never make any assumptions about that.
E.g. when you need to take a lock from a callback you must guarantee
that you never have that lock taken when you call any of the dma_fence
functions. Your patch breaks the lockdep annotation for that.
What we could do is to avoid all this by not calling the callback with
the lock held in the first place.
>>
>>>>
>>>> /Thomas
>>>
>>> Oh, and a follow up question:
>>>
>>> If there was a way to break the recursion on final put() (using the
>>> same basic approach as patch 2 in this series uses to break
>>> recursion in enable_signaling()), so that none of these containers
>>> did require any special treatment, would it be worth pursuing? I
>>> guess it might be possible by having the callbacks drop the
>>> references rather than the loop in the final put. + a couple of
>>> changes in code iterating over the fence pointers.
>>
>> That won't really help; you just move the recursion from the final
>> put into the callback.
>
> How do we recurse from the callback? The introduced fence_put() of
> individual fence pointers doesn't recurse anymore (at most 1 level),
> and any callback recursion is broken by the irq_work?
Yeah, but then you would need to take another lock to avoid racing with
dma_fence_array_signaled().
>
> I figure the bulk of the work would be to adjust code that iterates
> over the individual fence pointers to recognize that they are
> RCU-protected.
Could be that we could solve this with RCU, but that sounds like a lot
of churn for no gain at all.
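For illustration only, this is roughly what that churn would mean (a
hypothetical API change; today array->fences is not __rcu-annotated, so
none of this exists):

#include <linux/dma-fence-array.h>
#include <linux/rcupdate.h>

/* Hypothetical RCU-safe accessor for a member fence slot. */
static struct dma_fence *array_get_fence_rcu(struct dma_fence_array *array,
                                             unsigned int idx)
{
        struct dma_fence *fence;

        rcu_read_lock();
        fence = rcu_dereference(array->fences[idx]);
        /* The fence may be going away; only return it if we got a ref. */
        if (fence && !dma_fence_get_rcu(fence))
                fence = NULL;
        rcu_read_unlock();

        return fence;
}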
In other words, even with the problems solved, I think it would be a
really bad idea to allow chaining of dma_fence_array objects.
Christian.
>
>
> Thanks,
>
> /Thomas
>
>
On 30.11.21 at 20:27, Thomas Hellström wrote:
>
> On 11/30/21 19:12, Thomas Hellström wrote:
>> On Tue, 2021-11-30 at 16:02 +0100, Christian König wrote:
>>> On 30.11.21 at 15:35, Thomas Hellström wrote:
>>>> On Tue, 2021-11-30 at 14:26 +0100, Christian König wrote:
>>>>> On 30.11.21 at 13:56, Thomas Hellström wrote:
>>>>>> On 11/30/21 13:42, Christian König wrote:
>>>>>>> On 30.11.21 at 13:31, Thomas Hellström wrote:
>>>>>>>> [SNIP]
>>>>>>>>> Other than that, I didn't investigate the nesting fails
>>>>>>>>> enough to say I can accurately review this. :)
>>>>>>>> Basically the problem is that within enable_signaling(), which
>>>>>>>> is called with the dma_fence lock held, we take the dma_fence
>>>>>>>> lock of another fence. If that other fence is a
>>>>>>>> dma_fence_array, or a dma_fence_chain which in turn tries to
>>>>>>>> lock a dma_fence_array, we hit a splat.
>>>>>>> Yeah, I already thought that you constructed something like
>>>>>>> that.
>>>>>>>
>>>>>>> You get the splat because what you do here is illegal: you
>>>>>>> can't mix dma_fence_array and dma_fence_chain like this, or you
>>>>>>> can end up in a stack corruption.
>>>>>> Hmm. OK, so what is the stack corruption? Is it that
>>>>>> enable_signaling() will end up in endless recursion? If so,
>>>>>> wouldn't it be more usable if we broke that recursion chain and
>>>>>> allowed more general use?
>>>>> The problem is that this is not easily possible for
>>>>> dma_fence_array containers. Just imagine that you drop the last
>>>>> reference to the contained fences during dma_fence_array
>>>>> destruction: if any of the contained fences is another container,
>>>>> you can easily run into recursion and with that stack corruption.
>>>> Indeed, that would require some deeper surgery.
>>>>
>>>>> That's one of the major reasons I came up with the
>>>>> dma_fence_chain container. With this one you can chain any number
>>>>> of elements together without running into any recursion.
>>>>>
>>>>>> Also what are the mixing rules between these? Never use a
>>>>>> dma-fence-chain as one of the array fences and never use a
>>>>>> dma-fence-array as a dma-fence-chain fence?
>>>>> You can't add any other container to a dma_fence_array, neither
>>>>> other dma_fence_array instances nor dma_fence_chain instances.
>>>>>
>>>>> IIRC at least technically a dma_fence_chain can contain a
>>>>> dma_fence_array if you absolutely need that, but Daniel, Jason and
>>>>> I already had the same discussion a while back and came to the
>>>>> conclusion to avoid that as well if possible.
>>>> Yes, this is actually the use-case. But what I can't easily
>>>> guarantee is that that dma_fence_chain isn't fed into a
>>>> dma_fence_array somewhere else. How do you typically avoid that?
>>>>
>>>> Meanwhile I guess I need to take a different approach in the
>>>> driver to avoid this altogether.
>>> Jason and I came up with a deep dive iterator for his use case, but I
>>> think we don't want to use that any more after my dma_resv rework.
>>>
>>> In other words, when you need to create a new dma_fence_array you
>>> flatten out the existing construct, which is at worst
>>> dma_fence_chain->dma_fence_array->dma_fence.
>> OK, is there any cross-driver contract here, like every driver using
>> a dma_fence_array needing to check for dma_fence_chain and flatten as
>> above?
So far we only discussed that on the mailing list but haven't made any
documentation for that.
>>
>> /Thomas
>
> Oh, and a follow up question:
>
> If there was a way to break the recursion on final put() (using the
> same basic approach as patch 2 in this series uses to break recursion
> in enable_signaling()), so that none of these containers did require
> any special treatment, would it be worth pursuing? I guess it might be
> possible by having the callbacks drop the references rather than the
> loop in the final put. + a couple of changes in code iterating over
> the fence pointers.
That won't really help; you just move the recursion from the final put
into the callback.
What could be possible is to use a work item for any possible
operation, e.g. enabling, signaling and destruction. But in the last
discussion everybody agreed that it is better to just flatten out the
array.
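A hedged sketch of that work-item idea (hypothetical, and explicitly
not the direction agreed on): deferring the member puts of a
dma_fence_array so the final put never recurses on the caller's stack.

#include <linux/dma-fence-array.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct deferred_array_release {
        struct work_struct work;
        struct dma_fence_array *array;
};

static void array_release_work(struct work_struct *work)
{
        struct deferred_array_release *rel =
                container_of(work, typeof(*rel), work);
        unsigned int i;

        /* Dropping the member references here runs on a fresh stack. */
        for (i = 0; i < rel->array->num_fences; ++i)
                dma_fence_put(rel->array->fences[i]);

        kfree(rel);
}

/* Would be called from the array's release path instead of the loop. */
static bool defer_member_puts(struct dma_fence_array *array)
{
        struct deferred_array_release *rel = kzalloc(sizeof(*rel), GFP_ATOMIC);

        if (!rel)
                return false; /* a real version would need a fallback */

        rel->array = array;
        INIT_WORK(&rel->work, array_release_work);
        return schedule_work(&rel->work);
}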
Christian.
>
>
> /Thomas
>
>>
>>> Regards,
>>> Christian.
>>>
>>>> /Thomas
>>>>
>>>>
>>>>> Regards,
>>>>> Christian.
>>>>>
>>>>>> /Thomas
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Regards,
>>>>>>> Christian.
>>>>>>>
>>>>>>>> But I'll update the commit message with a typical splat.
>>>>>>>>
>>>>>>>> /Thomas
On 30.11.21 at 15:35, Thomas Hellström wrote:
> On Tue, 2021-11-30 at 14:26 +0100, Christian König wrote:
>> On 30.11.21 at 13:56, Thomas Hellström wrote:
>>> On 11/30/21 13:42, Christian König wrote:
>>>> On 30.11.21 at 13:31, Thomas Hellström wrote:
>>>>> [SNIP]
>>>>>> Other than that, I didn't investigate the nesting fails enough
>>>>>> to say I can accurately review this. :)
>>>>> Basically the problem is that within enable_signaling(), which is
>>>>> called with the dma_fence lock held, we take the dma_fence lock of
>>>>> another fence. If that other fence is a dma_fence_array, or a
>>>>> dma_fence_chain which in turn tries to lock a dma_fence_array, we
>>>>> hit a splat.
>>>> Yeah, I already thought that you constructed something like that.
>>>>
>>>> You get the splat because what you do here is illegal: you can't
>>>> mix dma_fence_array and dma_fence_chain like this, or you can end
>>>> up in a stack corruption.
>>> Hmm. OK, so what is the stack corruption? Is it that
>>> enable_signaling() will end up in endless recursion? If so, wouldn't
>>> it be more usable if we broke that recursion chain and allowed more
>>> general use?
>> The problem is that this is not easily possible for dma_fence_array
>> containers. Just imagine that you drop the last reference to the
>> contained fences during dma_fence_array destruction: if any of the
>> contained fences is another container, you can easily run into
>> recursion and with that stack corruption.
> Indeed, that would require some deeper surgery.
>
>> That's one of the major reasons I came up with the dma_fence_chain
>> container. With this one you can chain any number of elements
>> together without running into any recursion.
>>
>>> Also what are the mixing rules between these? Never use a
>>> dma-fence-chain as one of the array fences and never use a
>>> dma-fence-array as a dma-fence-chain fence?
>> You can't add any other container to a dma_fence_array, neither other
>> dma_fence_array instances nor dma_fence_chain instances.
>>
>> IIRC at least technically a dma_fence_chain can contain a
>> dma_fence_array if you absolutely need that, but Daniel, Jason and I
>> already had the same discussion a while back and came to the
>> conclusion to avoid that as well if possible.
> Yes, this is actually the use-case. But what I can't easily guarantee
> is that that dma_fence_chain isn't fed into a dma_fence_array somewhere
> else. How do you typically avoid that?
>
> Meanwhile I guess I need to take a different approach in the driver to
> avoid this altogether.
Jason and I came up with a deep dive iterator for his use case, but I
think we don't want to use that any more after my dma_resv rework.
In other words, when you need to create a new dma_fence_array you
flatten out the existing construct, which is at worst
dma_fence_chain->dma_fence_array->dma_fence.
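As a sketch of that flattening (the two-pass helper below is
hypothetical; dma_fence_chain_for_each(), to_dma_fence_chain(),
to_dma_fence_array() and dma_fence_array_create() are the existing
APIs):

#include <linux/dma-fence-array.h>
#include <linux/dma-fence-chain.h>
#include <linux/slab.h>

/* Collect the leaf fences of a chain->array->fence construct into one
 * new, flat dma_fence_array. */
static struct dma_fence_array *flatten_to_array(struct dma_fence *in,
                                                u64 context,
                                                unsigned int seqno)
{
        struct dma_fence **fences;
        struct dma_fence *iter;
        unsigned int count = 0, i = 0;

        /* First pass: count the leaf fences. */
        dma_fence_chain_for_each(iter, in) {
                struct dma_fence_chain *chain = to_dma_fence_chain(iter);
                struct dma_fence *f = chain ? chain->fence : iter;
                struct dma_fence_array *array = to_dma_fence_array(f);

                count += array ? array->num_fences : 1;
        }

        fences = kmalloc_array(count, sizeof(*fences), GFP_KERNEL);
        if (!fences)
                return NULL;

        /* Second pass: take references on the leaves. */
        dma_fence_chain_for_each(iter, in) {
                struct dma_fence_chain *chain = to_dma_fence_chain(iter);
                struct dma_fence *f = chain ? chain->fence : iter;
                struct dma_fence_array *array = to_dma_fence_array(f);
                unsigned int j;

                if (array)
                        for (j = 0; j < array->num_fences; ++j)
                                fences[i++] = dma_fence_get(array->fences[j]);
                else
                        fences[i++] = dma_fence_get(f);
        }

        /* dma_fence_array_create() takes ownership of the fences array. */
        return dma_fence_array_create(count, fences, context, seqno, false);
}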
Regards,
Christian.
>
> /Thomas
>
>
>> Regards,
>> Christian.
>>
>>> /Thomas
>>>
>>>
>>>
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>> But I'll update the commit message with a typical splat.
>>>>>
>>>>> /Thomas
>
On 30.11.21 at 13:56, Thomas Hellström wrote:
>
> On 11/30/21 13:42, Christian König wrote:
>>> On 30.11.21 at 13:31, Thomas Hellström wrote:
>>> [SNIP]
>>>> Other than that, I didn't investigate the nesting fails enough to
>>>> say I can accurately review this. :)
>>>
>>> Basically the problem is that within enable_signaling(), which is
>>> called with the dma_fence lock held, we take the dma_fence lock of
>>> another fence. If that other fence is a dma_fence_array, or a
>>> dma_fence_chain which in turn tries to lock a dma_fence_array, we
>>> hit a splat.
>>
>> Yeah, I already thought that you constructed something like that.
>>
>> You get the splat because what you do here is illegal: you can't mix
>> dma_fence_array and dma_fence_chain like this, or you can end up in a
>> stack corruption.
>
> Hmm. OK, so what is the stack corruption? Is it that
> enable_signaling() will end up in endless recursion? If so, wouldn't
> it be more usable if we broke that recursion chain and allowed more
> general use?
The problem is that this is not easily possible for dma_fence_array
containers. Just imagine that you drop the last reference to the
contained fences during dma_fence_array destruction: if any of the
contained fences is another container, you can easily run into
recursion and with that stack corruption.
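To make that concrete, this is essentially what the dma_fence_array
release path does (simplified from drivers/dma-buf/dma-fence-array.c):

#include <linux/dma-fence-array.h>
#include <linux/slab.h>

static void dma_fence_array_release(struct dma_fence *fence)
{
        struct dma_fence_array *array = to_dma_fence_array(fence);
        unsigned int i;

        for (i = 0; i < array->num_fences; ++i) {
                /*
                 * If fences[i] is itself a container and this is its
                 * last reference, its release runs right here, nested
                 * on the same stack: N levels of nesting mean N levels
                 * of recursion.
                 */
                dma_fence_put(array->fences[i]);
        }

        kfree(array->fences);
        dma_fence_free(fence);
}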
That's one of the major reasons I came up with the dma_fence_chain
container. With this one you can chain any number of elements together
without running into any recursion.
> Also what are the mixing rules between these? Never use a
> dma-fence-chain as one of the array fences and never use a
> dma-fence-array as a dma-fence-chain fence?
You can't add any other container to a dma_fence_array, neither other
dma_fence_array instances nor dma_fence_chain instances.
IIRC at least technically a dma_fence_chain can contain a
dma_fence_array if you absolutely need that, but Daniel, Jason and I
already had the same discussion a while back and came to the conclusion
to avoid that as well if possible.
Regards,
Christian.
>
> /Thomas
>
>
>
>
>>
>> Regards,
>> Christian.
>>
>>>
>>> But I'll update the commit message with a typical splat.
>>>
>>> /Thomas
>>
On 30.11.21 at 13:31, Thomas Hellström wrote:
> [SNIP]
>> Other than that, I didn't investigate the nesting fails enough to say
>> I can accurately review this. :)
>
> Basically the problem is that within enable_signaling(), which is
> called with the dma_fence lock held, we take the dma_fence lock of
> another fence. If that other fence is a dma_fence_array, or a
> dma_fence_chain which in turn tries to lock a dma_fence_array, we hit
> a splat.
Yeah, I already thought that you constructed something like that.
You get the splat because what you do here is illegal: you can't mix
dma_fence_array and dma_fence_chain like this, or you can end up in a
stack corruption.
Regards,
Christian.
>
> But I'll update the commit message with a typical splat.
>
> /Thomas
On 30.11.21 at 13:19, Thomas Hellström wrote:
> As we introduce more usage of dma-fence-chain and dma-fence-array in
> the i915 driver, we start to hit lockdep splats due to the recursive
> fence locking in the dma-fence-chain and dma-fence-array containers.
> This is a humble suggestion to try to establish a dma-fence locking
> order (patch 1) and to avoid excessive recursive locking in these
> containers (patch 2).
Well, a complete NAK to this.
These splats are intentional notes that something in the driver code is
wrong (or we messed up the chain and array containers somehow).
Those two containers are intentionally crafted in a way that avoids any
dependency between their spinlocks. So as long as you use them
correctly you should never see a splat.
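For reference, this is roughly how dma_fence_array avoids the
dependency today (simplified from drivers/dma-buf/dma-fence-array.c):
the per-fence callback never takes the array's lock; it only decrements
a counter and defers the actual signaling to irq_work.

#include <linux/dma-fence-array.h>
#include <linux/irq_work.h>

/* Runs in a member fence's signaling path, possibly under its lock. */
static void dma_fence_array_cb_func(struct dma_fence *f,
                                    struct dma_fence_cb *cb)
{
        struct dma_fence_array_cb *array_cb =
                container_of(cb, typeof(*array_cb), cb);
        struct dma_fence_array *array = array_cb->array;

        /* No array lock taken here, only an atomic op and a deferral. */
        if (atomic_dec_and_test(&array->num_pending))
                irq_work_queue(&array->work);
        else
                dma_fence_put(&array->base);
}

/* The array is signaled later, outside any member fence's lock. */
static void irq_dma_fence_array_work(struct irq_work *wrk)
{
        struct dma_fence_array *array = container_of(wrk, typeof(*array), work);

        dma_fence_signal(&array->base);
        dma_fence_put(&array->base);
}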
Please provide the lockdep splat so that we can analyze the problem.
Thanks,
Christian.
>
> Thomas Hellström (2):
> dma-fence: Avoid establishing a locking order between fence classes
> dma-fence: Avoid excessive recursive fence locking from
> enable_signaling() callbacks
>
> drivers/dma-buf/dma-fence-array.c | 23 +++++++--
> drivers/dma-buf/dma-fence-chain.c | 12 ++++-
> drivers/dma-buf/dma-fence.c | 79 +++++++++++++++++++++----------
> include/linux/dma-fence.h | 4 ++
> 4 files changed, 89 insertions(+), 29 deletions(-)
>
> Cc: linaro-mm-sig(a)lists.linaro.org
> Cc: dri-devel(a)lists.freedesktop.org
> Cc: Christian König <christian.koenig(a)amd.com>
>
Hi everyone,
compared to the last version I've dropped the pruning as suggested by
Maarten, split the new DMA_RESV_USAGE_* patches from the general
introduction as suggested by Daniel, and renamed OTHERS to BOOKKEEP as
suggested by Pekka.
Please take a look and review,
Christian.