On eviction, we acquire the vm->mutex and then wait on the vma->active. Therefore, when binding and pinning the vma, we must follow the same sequence: lock/pin the vma, then mark it active. Otherwise, we mark the vma as active, then wait for the vm->mutex, and meanwhile the evictor holding the mutex waits upon us to complete our activity.
Fixes: 8ccfc20a7d56 ("drm/i915/gt: Mark ring->vma as active while pinned")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: stable@vger.kernel.org # v5.6+
---
 drivers/gpu/drm/i915/gt/intel_context.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index e4aece20bc80..52db2bde44a3 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -204,25 +204,25 @@ static int __ring_active(struct intel_ring *ring)
 {
 	int err;
 
-	err = i915_active_acquire(&ring->vma->active);
+	err = intel_ring_pin(ring);
 	if (err)
 		return err;
 
-	err = intel_ring_pin(ring);
+	err = i915_active_acquire(&ring->vma->active);
 	if (err)
-		goto err_active;
+		goto err_pin;
 
 	return 0;
 
-err_active:
-	i915_active_release(&ring->vma->active);
+err_pin:
+	intel_ring_unpin(ring);
 	return err;
 }
 
 static void __ring_retire(struct intel_ring *ring)
 {
-	intel_ring_unpin(ring);
 	i915_active_release(&ring->vma->active);
+	intel_ring_unpin(ring);
 }
 
 __i915_active_call
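For readability, here is how the two helpers read once the hunk above is applied (reconstructed from the diff; the comments are added here for illustration and are not part of the patch):

static int __ring_active(struct intel_ring *ring)
{
	int err;

	/* Pin first, matching the eviction path's vm->mutex ordering. */
	err = intel_ring_pin(ring);
	if (err)
		return err;

	/* Only mark the vma active once it is pinned. */
	err = i915_active_acquire(&ring->vma->active);
	if (err)
		goto err_pin;

	return 0;

err_pin:
	intel_ring_unpin(ring);
	return err;
}

static void __ring_retire(struct intel_ring *ring)
{
	/* Release in the reverse order: drop the active tracking, then unpin. */
	i915_active_release(&ring->vma->active);
	intel_ring_unpin(ring);
}

Per the commit message, it is the pin step that contends on vm->mutex, so ordering pin before i915_active_acquire(), and releasing in the reverse order in __ring_retire(), keeps the lock/active ordering consistent with the eviction path.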
On Mon, 6 Jul 2020 at 18:01, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> On eviction, we acquire the vm->mutex and then wait on the vma->active.
> Therefore, when binding and pinning the vma, we must follow the same
> sequence: lock/pin the vma, then mark it active. Otherwise, we mark the
> vma as active, then wait for the vm->mutex, and meanwhile the evictor
> holding the mutex waits upon us to complete our activity.
>
> Fixes: 8ccfc20a7d56 ("drm/i915/gt: Mark ring->vma as active while pinned")
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Cc: stable@vger.kernel.org # v5.6+
Reviewed-by: Matthew Auld <matthew.auld@intel.com>