Currently this has no practical relevance, I think, because there aren't many people who can pull off a setup with panfrost and another gpu in the same system. But the rules are that if you set an exclusive fence, indicating a gpu write access in the implicit fencing system, then you need to wait for all fences, not just the previous exclusive fence.
panfrost against itself has no problem, because it always sets the exclusive fence (but that's probably something that will need to be fixed for vulkan and/or multi-engine gpus, or you'll suffer badly). There's also no problem with panfrost against display.
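For reference, here's roughly what that rule looks like when spelled out against the dma_resv API this kernel has, and roughly what the drm_gem_fence_array_add_implicit() helper used below boils down to. This is a sketch for illustration only, not part of the patch, and the function name is made up:

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>
#include <linux/slab.h>
#include <linux/xarray.h>
#include <drm/drm_gem.h>

/* Illustration only: the dependencies a driver must collect per object. */
static int sketch_collect_implicit_deps(struct drm_gem_object *obj,
					bool write, struct xarray *deps)
{
	struct dma_fence *excl, **shared;
	unsigned int i, count;
	int ret;

	if (!write)
		/* A read only needs to wait for the last writer. */
		return drm_gem_fence_array_add(deps,
					       dma_resv_get_excl_rcu(obj->resv));

	/* A write must wait for all fences: the exclusive one plus every
	 * shared (reader) fence.
	 */
	ret = dma_resv_get_fences_rcu(obj->resv, &excl, &count, &shared);
	if (ret)
		return ret;

	/* drm_gem_fence_array_add() consumes the fence reference, even on
	 * error, so only the not-yet-added shared fences need a put here.
	 */
	ret = drm_gem_fence_array_add(deps, excl);
	for (i = 0; i < count; i++) {
		if (!ret)
			ret = drm_gem_fence_array_add(deps, shared[i]);
		else
			dma_fence_put(shared[i]);
	}
	kfree(shared);
	return ret;
}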
With the prep work done to switch over to the dependency helpers, this is now a one-liner.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
---
 drivers/gpu/drm/panfrost/panfrost_job.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 707d912ff64a..619d6104040c 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -203,9 +203,8 @@ static int panfrost_acquire_object_fences(struct drm_gem_object **bos,
 	int i, ret;
 
 	for (i = 0; i < bo_count; i++) {
-		struct dma_fence *fence = dma_resv_get_excl_rcu(bos[i]->resv);
-
-		ret = drm_gem_fence_array_add(deps, fence);
+		/* panfrost always uses write mode in its current uapi */
+		ret = drm_gem_fence_array_add_implicit(deps, bos[i], true);
 		if (ret)
 			return ret;
 	}
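As a side note on the vulkan/multi-engine remark above: once panfrost grows per-BO access flags in its uapi, the same helper covers the read-only case too. This is a hypothetical sketch only; the PANFROST_BO_REF_WRITE flag and the bo_flags array don't exist in the current uapi:

#include <linux/types.h>
#include <linux/xarray.h>
#include <drm/drm_gem.h>

/* Hypothetical flag, not part of the current panfrost uapi. */
#define PANFROST_BO_REF_WRITE	(1 << 0)

static int sketch_acquire_object_fences(struct drm_gem_object **bos,
					const u32 *bo_flags, int bo_count,
					struct xarray *deps)
{
	int i, ret;

	for (i = 0; i < bo_count; i++) {
		/* Only request an exclusive (write) dependency when the job
		 * actually writes the buffer, so pure readers don't
		 * serialize against each other.
		 */
		bool write = bo_flags[i] & PANFROST_BO_REF_WRITE;

		ret = drm_gem_fence_array_add_implicit(deps, bos[i], write);
		if (ret)
			return ret;
	}

	return 0;
}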