On 17.09.21 16:43, Daniel Vetter wrote:
On Fri, Sep 17, 2021 at 02:34:52PM +0200, Christian König wrote:
This makes the function much simpler since the complex retry logic is now handled elsewhere.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-resv.c | 68 ++++++--------------------------------
 1 file changed, 10 insertions(+), 58 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 9b90bd9ac018..c7db553ab115 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -569,74 +569,26 @@ long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr,
 			   unsigned long timeout)
 {
 	long ret = timeout ? timeout : 1;
-	unsigned int seq, shared_count;
+	struct dma_resv_iter cursor;
 	struct dma_fence *fence;
-	int i;
-
-retry:
-	shared_count = 0;
-	seq = read_seqcount_begin(&obj->seq);
-	rcu_read_lock();
I missed this in my previous conversion reviews, but pls move the rcu_read_lock into the iterator. That should simplify the flow in all of these quite a bit more, and since the iter_next_unlocked grabs a full reference for the iteration body we really don't need that protected by rcu.
I intentionally didn't do that because it makes it much clearer that we are using RCU here, and that there is absolutely no guarantee the collection won't change.
But I'm fine with going down that route instead if you think that's the way to go.
Thanks, Christian.
We can't toss rcu protection for dma_resv anytime soon (if ever), but we can at least make it an implementation detail.
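As a rough sketch, the first/next helpers could take the RCU read lock internally and only ever hand a fully referenced fence to the loop body, along these lines (dma_resv_iter_restart_unlocked() and dma_resv_iter_walk_unlocked() are assumed internal helpers, named here purely for illustration, not something this patch adds):

/*
 * Sketch only, not the actual implementation: retry the walk under
 * RCU until the reservation object was not concurrently modified,
 * then return a fence with a full reference held, so the caller's
 * loop body needs no RCU protection at all.
 */
struct dma_fence *dma_resv_iter_first_unlocked(struct dma_resv_iter *cursor)
{
	struct dma_fence *fence;

	rcu_read_lock();
	do {
		/* snapshots cursor->seq and resets the walk */
		dma_resv_iter_restart_unlocked(cursor);
		fence = dma_resv_iter_walk_unlocked(cursor);
	} while (read_seqcount_retry(&cursor->obj->seq, cursor->seq));
	rcu_read_unlock();

	return fence;
}

#define dma_resv_for_each_fence_unlocked(cursor, fence)		\
	for (fence = dma_resv_iter_first_unlocked(cursor);	\
	     fence; fence = dma_resv_iter_next_unlocked(cursor))

dma_resv_iter_next_unlocked() would do the same dance, dropping the reference on the previous fence first, which is why the body itself never has to touch rcu_read_lock().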
-	i = -1;
-
-	fence = dma_resv_excl_fence(obj);
-	if (fence && !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
-		if (!dma_fence_get_rcu(fence))
-			goto unlock_retry;
+	dma_resv_iter_begin(&cursor, obj, wait_all);
+	dma_resv_for_each_fence_unlocked(&cursor, fence) {

rcu_read_unlock();
-		if (dma_fence_is_signaled(fence)) {
-			dma_fence_put(fence);
-			fence = NULL;
+		ret = dma_fence_wait_timeout(fence, intr, ret);
+		if (ret <= 0) {
+			dma_resv_iter_end(&cursor);
+			return ret;
 		}
-
-	} else {
-		fence = NULL;
-	}
-
-	if (wait_all) {
-		struct dma_resv_list *fobj = dma_resv_shared_list(obj);
-
-		if (fobj)
-			shared_count = fobj->shared_count;
-
-		for (i = 0; !fence && i < shared_count; ++i) {
-			struct dma_fence *lfence;
-
-			lfence = rcu_dereference(fobj->shared[i]);
-			if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
-				     &lfence->flags))
-				continue;
-
-			if (!dma_fence_get_rcu(lfence))
-				goto unlock_retry;
-
-			if (dma_fence_is_signaled(lfence)) {
-				dma_fence_put(lfence);
-				continue;
-			}
-
-			fence = lfence;
-			break;
-		}
 	}

rcu_read_lock();

+	dma_resv_iter_end(&cursor);
 
-	rcu_read_unlock();
-	if (fence) {
-		if (read_seqcount_retry(&obj->seq, seq)) {
-			dma_fence_put(fence);
-			goto retry;
-		}
-
-		ret = dma_fence_wait_timeout(fence, intr, ret);
-		dma_fence_put(fence);
-		if (ret > 0 && wait_all && (i + 1 < shared_count))
-			goto retry;
-	}
 	return ret;
-
-unlock_retry:
-	rcu_read_unlock();
-	goto retry;
I think we still have the same semantics, and it's so much tidier.
With the rcu_read_unlock stuff moved into the iterators (also applies to the previous two patches):

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
 }
 EXPORT_SYMBOL_GPL(dma_resv_wait_timeout);
--
2.25.1
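For reference, stitching the added lines of the hunk back together, dma_resv_wait_timeout() after this patch reads as follows (the comment is added here for illustration):

long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr,
			   unsigned long timeout)
{
	long ret = timeout ? timeout : 1;
	struct dma_resv_iter cursor;
	struct dma_fence *fence;

	dma_resv_iter_begin(&cursor, obj, wait_all);
	dma_resv_for_each_fence_unlocked(&cursor, fence) {

		/*
		 * The iterator hands out each fence with a full
		 * reference held, so waiting needs no extra locking.
		 */
		ret = dma_fence_wait_timeout(fence, intr, ret);
		if (ret <= 0) {
			dma_resv_iter_end(&cursor);
			return ret;
		}
	}
	dma_resv_iter_end(&cursor);

	return ret;
}
EXPORT_SYMBOL_GPL(dma_resv_wait_timeout);

The seqcount retry handling that used to live in this function is now buried inside the iterator, which is what makes the simple early-return path safe.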