On 9/2/22 19:26, Ruhl, Michael J wrote:
02.09.2022 13:31, Dmitry Osipenko wrote:
01.09.2022 17:02, Ruhl, Michael J wrote: ...
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -331,7 +331,19 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
 			continue;
 		}
+		/*
+		 * dma_buf_unmap_attachment() requires reservation to be
+		 * locked. The imported GEM shouldn't share reservation lock,
+		 * so it's safe to take the lock.
+		 */
+		if (obj->base.import_attach)
+			i915_gem_object_lock(obj, NULL);
There is a lot of stuff going on here. Taking the lock may be premature...
__i915_gem_object_pages_fini(obj);
i915_gem_dmabuf.c:i915_gem_object_put_pages_dmabuf() is where the unmap_attachment is actually called; would it make more sense to do the locking there?
The __i915_gem_object_put_pages() is invoked with the reservation lock held, while freeing an object is a special time when we know that the GEM is unused.
The __i915_gem_free_objects() was taking the lock until two weeks ago, when the change made by Chris Wilson [1] reached linux-next.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=2826d447fbd60e6a05e53d5f918bceb8c04e315c
I don't think we can take the lock within i915_gem_object_put_pages_dmabuf(); it may/should deadlock other code paths.
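To illustrate the concern, here is a simplified (and partly assumed) sketch of a non-free path; the point is only that the reservation lock is already held by the time the put_pages hook runs:

	caller:
		i915_gem_object_lock(obj, NULL);              <- reservation lock taken here
		__i915_gem_object_put_pages(obj);
			obj->ops->put_pages();                <- i915_gem_object_put_pages_dmabuf()
				i915_gem_object_lock(obj, NULL);  <- would deadlock, lock already held
		i915_gem_object_unlock(obj);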
On the other hand, we can check whether the GEM's refcount is zero in i915_gem_object_put_pages_dmabuf() and take the lock only in that case.
Also, it seems it should be possible to just bail out of i915_gem_object_put_pages_dmabuf() if refcount == 0; the subsequent drm_prime_gem_destroy() will take care of the unmapping. Perhaps this could be the best option, I'll give it a test.
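A rough sketch of that bail-out idea (illustrative only; it assumes the final free can be detected with kref_read() on the GEM refcount and that the later teardown really handles the unmap):

static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj,
					     struct sg_table *sgt)
{
	/*
	 * The object is being destroyed, so nothing else can race with us
	 * here; skip the unmap and let the subsequent drm_prime_gem_destroy()
	 * take care of the unmapping.
	 */
	if (!kref_read(&obj->base.refcount))
		return;

	dma_buf_unmap_attachment(obj->base.import_attach, sgt,
				 DMA_BIDIRECTIONAL);
}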
i915_gem_object_put_pages() uses the SG table, and the drm_prime_gem_destroy() call from __i915_gem_free_objects() doesn't use the SG table because it has been "freed" already, so I am not sure if that would work...
Hmm.. with that in mind, maybe move the base.import_attach check to __i915_gem_object_put_pages(), along with your attach check?
I see you meant __i915_gem_object_pages_fini() here.
	atomic_set(&obj->mm.pages_pin_count, 0);
	if (obj->base.import_attach)
		i915_gem_object_lock(obj, NULL);
	__i915_gem_object_put_pages(obj);
	if (obj->base.import_attach)
		i915_gem_object_unlock(obj);
	GEM_BUG_ON(i915_gem_object_has_pages(obj));
Pretty much one step up from the dmabuf interface, but we are guaranteed to not have any pinned pages?
The importer shouldn't hold pages outside of the dma-buf API; otherwise it would be a bug.
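In outline, the importer side of that contract (a simplified sketch, not i915 code; "attach" is a hypothetical attachment obtained via dma_buf_attach()) only holds pages between mapping and unmapping the attachment:

	struct sg_table *sgt;

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);   /* pages held */
	/* ... importer uses sgt ... */
	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);  /* pages released */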
The other caller of __i915_gem_object_pages_fini() is the i915_ttm move_notify, which should not conflict (export side, not import side).
Since it appears that not locking during the cleanup is desirable, trying to make sure the lock is taken at the last possible moment might be the right path?
Reducing the scope of locking is usually preferred. Your suggestion works okay; I couldn't spot any problems, at least for the non-TTM code paths.
It's indeed a bit unfortunate that __i915_gem_object_pages_fini() is also used by TTM, but it should be safe for imported objects. It would be great if one of the i915 maintainers could ack this variant.