On 30.11.21 at 10:02, Daniel Vetter wrote:
On Mon, Nov 29, 2021 at 01:06:33PM +0100, Christian König wrote:
This is just abusing internals of the dma_resv object.
Signed-off-by: Christian König <christian.koenig@amd.com>
Yeah I think if we want this back we could do a _locked version of the wait, which prunes internally.
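Very roughly, and purely as a sketch of that idea (the helper name is hypothetical and the actual pruning step is left out, nothing like this exists in dma-resv today), such a variant could look like:

#include <linux/dma-resv.h>

/*
 * Hypothetical: wait for the fences in a dma_resv object with the
 * reservation lock held and, on success, let dma-resv itself drop the
 * references to the now signaled fences instead of callers poking at
 * the internals.
 */
long dma_resv_wait_timeout_locked(struct dma_resv *obj, bool wait_all,
				  bool intr, unsigned long timeout)
{
	long ret;

	dma_resv_assert_held(obj);

	/* The existing wait works fine with the lock held. */
	ret = dma_resv_wait_timeout(obj, wait_all, intr, timeout);
	if (ret <= 0)
		return ret;

	/*
	 * The pruning itself would have to live in dma-resv.c, since only
	 * dma-resv may touch its fence slots directly; left out here.
	 */
	return ret;
}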
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Btw I wonder, should we put the ttm_bo_wait wrapper on the chopping block in gpu/todo.rst? It's really just added complication in most cases, I think. And it would be nice if ttm had the same errno semantics for these as everyone else; I always get very confused about this stuff ...
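For reference, the two return conventions in play here, shown as a made-up wrapper (example_wait() and the 30 * HZ timeout are illustration only): dma_resv_wait_timeout() returns the remaining timeout on success, 0 on timeout and a negative errno on error, while ttm_bo_wait() folds that into 0 / -EBUSY / -errno, as visible in the hunk below.

#include <linux/dma-resv.h>

/* Made-up wrapper, just to show how TTM translates the return value. */
static int example_wait(struct dma_resv *resv, bool intr)
{
	long ret;

	ret = dma_resv_wait_timeout(resv, true /* wait_all */, intr, 30 * HZ);
	if (ret < 0)
		return ret;	/* real error, e.g. -ERESTARTSYS */
	if (ret == 0)
		return -EBUSY;	/* timed out */
	return 0;		/* everything signaled */
}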
I've already done that quite a bit, e.g. removed most of the users.
What's left are the cases where we wait or test signaling inside of TTM, and I think I can get rid of those with the LRU rework.
So yeah, already in the pipeline.
Regards, Christian.
Cheers, Daniel
 drivers/gpu/drm/ttm/ttm_bo.c | 1 -
 1 file changed, 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index e4a20a3a5d16..fc124457ba2f 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1086,7 +1086,6 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
 	if (timeout == 0)
 		return -EBUSY;
 
-	dma_resv_add_excl_fence(bo->base.resv, NULL);
 	return 0;
 }
 EXPORT_SYMBOL(ttm_bo_wait);
-- 
2.25.1
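For context on the removed line: dma_resv_add_excl_fence() replaces the exclusive fence and, as a side effect, also drops the shared fences, so calling it with a NULL fence after everything has signaled effectively pruned the whole object. That leans on an implementation detail rather than a documented contract, which is the abuse the patch removes. Annotated, the old trick looked roughly like this:

	/*
	 * Old trick, now removed: after dma_resv_wait_timeout() reported
	 * that all fences signaled, install a NULL exclusive fence.  As a
	 * side effect dma_resv_add_excl_fence() also throws away the
	 * shared fence list, so all fence references get dropped early.
	 */
	dma_resv_add_excl_fence(bo->base.resv, NULL);
	return 0;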