On 20.09.21 at 12:33, Tvrtko Ursulin wrote:
On 20/09/2021 11:13, Christian König wrote:
On 20.09.21 at 10:45, Tvrtko Ursulin wrote:
On 17/09/2021 13:35, Christian König wrote:
This makes the function much simpler since the complex retry logic is now handled elsewhere.
Signed-off-by: Christian König <christian.koenig@amd.com>
 drivers/gpu/drm/i915/gem/i915_gem_busy.c | 32 ++++++++----------------
 1 file changed, 11 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
index 6234e17259c1..b1cb7ba688da 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
@@ -82,8 +82,8 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 {
        struct drm_i915_gem_busy *args = data;
        struct drm_i915_gem_object *obj;
-       struct dma_resv_list *list;
-       unsigned int seq;
+       struct dma_resv_iter cursor;
+       struct dma_fence *fence;
        int err;

        err = -ENOENT;
@@ -109,27 +109,17 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
         * to report the overall busyness. This is what the wait-ioctl does.
         *
         */
-retry:
-       seq = raw_read_seqcount(&obj->base.resv->seq);
-       /* Translate the exclusive fence to the READ *and* WRITE engine */
-       args->busy = busy_check_writer(dma_resv_excl_fence(obj->base.resv));
-       /* Translate shared fences to READ set of engines */
-       list = dma_resv_shared_list(obj->base.resv);
-       if (list) {
-               unsigned int shared_count = list->shared_count, i;
-               for (i = 0; i < shared_count; ++i) {
-                       struct dma_fence *fence =
-                               rcu_dereference(list->shared[i]);
+       args->busy = false;
+       dma_resv_iter_begin(&cursor, obj->base.resv, true);
+       dma_resv_for_each_fence_unlocked(&cursor, fence) {
You did not agree with my suggestion to reset args->busy on restart and so preserve current behaviour?
No, I want to keep the restart behavior internal to the dma_resv object, and as far as I can see it should not make a difference here.
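For reference, the restart behavior in question is the seqcount retry that the old ioctl code performed by hand and that the unlocked iterator now performs on behalf of its users. A heavily simplified sketch of that pattern, based on the removed lines in the hunk above rather than on the actual dma-resv internals:

        unsigned int seq;

retry:
        /* snapshot the reservation object's change counter */
        seq = raw_read_seqcount(&obj->base.resv->seq);

        /* ... walk the exclusive and shared fences under RCU ... */

        /* the fences changed while we were looking at them, start over */
        if (read_seqcount_retry(&obj->base.resv->seq, seq))
                goto retry;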
To be clear, on paper the difference between the old and the new implementation shows up if the restart happens while processing the shared fences.
The old implementation unconditionally goes back to "args->busy = busy_check_writer(dma_resv_excl_fence(obj->base.resv));" and so overwrites the set of flags returned to userspace.
The new implementation can merge new read flags into the old set of flags and so return a composition of past and current fences.
Maybe it does not matter hugely in this case; it depends on whether userspace typically just restarts until the flags are clear. But I am not sure.
On a higher level, what do you mean by wanting to keep the restart behaviour internal? Not providing iterator users a means of detecting it? I think it has to be provided.
Ok, I will adjust that for now to get the patch set upstream. But in general, when somebody outside of the dma_resv code base depends on the restart behavior, that's a bug in the design of that code.
Callers should only care about which unsignaled fences are inside the dma_resv container, and it shouldn't matter whether those fences are presented once or multiple times because of a restart.
When this makes a difference, we have a bug in the handling and should probably consider taking the dma_resv lock instead.
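As a rough sketch of that adjustment, the loop could drop the collected flags whenever the iterator signals a restart; this assumes a helper along the lines of dma_resv_iter_is_restarted(), which is not part of the hunk above:

        args->busy = 0;
        dma_resv_iter_begin(&cursor, obj->base.resv, true);
        dma_resv_for_each_fence_unlocked(&cursor, fence) {
                /* forget flags collected before the iterator restarted */
                if (dma_resv_iter_is_restarted(&cursor))
                        args->busy = 0;

                if (dma_resv_iter_is_exclusive(&cursor))
                        /* Translate the exclusive fence to the READ *and* WRITE engine */
                        args->busy = busy_check_writer(fence);
                else
                        /* Translate shared fences to READ set of engines */
                        args->busy |= busy_check_reader(fence);
        }
        dma_resv_iter_end(&cursor);

With that, a restart behaves like the old goto retry path: userspace only ever sees flags derived from a single consistent pass over the fences.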
Regards, Christian.
+               if (dma_resv_iter_is_exclusive(&cursor))
+                       /* Translate the exclusive fence to the READ *and* WRITE engine */
+                       args->busy = busy_check_writer(fence);
+               else
+                       /* Translate shared fences to READ set of engines */
                        args->busy |= busy_check_reader(fence);
-               }
        }
-       if (args->busy && read_seqcount_retry(&obj->base.resv->seq, seq))
-               goto retry;
+       dma_resv_iter_end(&cursor);

        err = 0;
 out: