On Fri, May 21, 2021 at 8:08 PM Christian König ckoenig.leichtzumerken@gmail.com wrote:
Am 21.05.21 um 17:16 schrieb Daniel Vetter:
On Fri, May 21, 2021 at 05:00:46PM +0200, Bas Nieuwenhuizen wrote:
On Fri, May 21, 2021 at 4:37 PM Daniel Vetter daniel@ffwll.ch wrote:
On Fri, May 21, 2021 at 11:46:23AM +0200, Bas Nieuwenhuizen wrote:
On Fri, May 21, 2021 at 11:10 AM Daniel Vetter daniel.vetter@ffwll.ch wrote:
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 88a24a0b5691..cc8426e1e8a8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -617,8 +617,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 	amdgpu_bo_list_for_each_entry(e, p->bo_list) {
 		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);

-		/* Make sure we use the exclusive slot for shared BOs */
-		if (bo->prime_shared_count)
+		/* Make sure we use the exclusive slot for all potentially shared BOs */
+		if (!(bo->flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID))
 			e->tv.num_shared = 0;
 		e->bo_va = amdgpu_vm_bo_find(vm, bo);
 	}
I think it also makes sense to skip this with AMDGPU_GEM_CREATE_EXPLICIT_SYNC? It can be shared but I don't think anyone expects implicit sync to happen with those.
Ah yes, I missed this entirely. So the "no implicit sync" flag is already there, and the _only_ thing that's really missing is a way to fish out the implicit fences, and set them.
https://lore.kernel.org/dri-devel/20210520190007.534046-1-jason@jlekstrand.n...
So I think all that's really needed in radv is to not set RADEON_FLAG_IMPLICIT_SYNC for winsys buffers when Jason's dma-buf ioctls are present (this means you need to do some import/export and keep the fd around for winsys buffers, but that shouldn't be too bad), and then control the implicit fences entirely explicitly, like vk expects.
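Rough sketch of what the userspace side could look like, entirely untested; ioctl names, structs and flags here follow Jason's proposed dma-buf sync_file uAPI, so treat the details as illustrative:

#include <linux/dma-buf.h>
#include <sys/ioctl.h>

/* Fish the implicit fences out of a winsys BO's dma-buf as a sync_file,
 * e.g. to turn them into an explicit wait before rendering.
 * DMA_BUF_SYNC_WRITE asks for everything a writer would have to wait on. */
static int wsi_export_implicit_fences(int dmabuf_fd)
{
	struct dma_buf_export_sync_file args = {
		.flags = DMA_BUF_SYNC_WRITE,
		.fd = -1,
	};

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args))
		return -1;
	return args.fd; /* sync_file fd */
}

/* Set our render-complete fence as the buffer's write fence so the
 * compositor's implicit sync picks it up. */
static int wsi_import_explicit_fence(int dmabuf_fd, int sync_file_fd)
{
	struct dma_buf_import_sync_file args = {
		.flags = DMA_BUF_SYNC_WRITE,
		.fd = sync_file_fd,
	};

	return ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &args);
}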
That is the part I'm less sure about. This is a BO-wide flag, so we would also be disabling implicit sync in the compositor. If the compositor only does read access, that is OK, as the inserted exclusive fence will work for that. But as I learned recently, the app-provided buffer may end up being written to by the X server, which opens a whole can of potential problems if implicit sync gets disabled between Xserver operations on the app-provided buffer. Setting that flag on the WSI buffer therefore opens a whole new can of potential problems, which is why I've said a submission-based flag would be preferred.
I can certainly try it out though.
Hm yeah, that's the wrong flag. We need a flag on the drm_file that the explicit-sync userspace sets, and which is valid only for itself.
There's a nice flags field when creating a ctx, but it's not validated, and there's already a comment that we have to filter out garbage priority, so that's no use. I'll whip up something entirely untested just as a draft.
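Roughly the shape I have in mind (completely untested sketch, every name below is made up for illustration):

/* Hypothetical per-file opt-out, set once by explicit-sync userspace
 * via some new, properly validated setparam-style ioctl: */
struct amdgpu_fpriv {
	/* ... existing members ... */
	bool no_implicit_sync;	/* invented field, illustration only */
};

/* amdgpu_cs_parser_bos() would then keep the shared slots for clients
 * that opted out, instead of forcing the exclusive slot: */
	if (!fpriv->no_implicit_sync &&
	    !(bo->flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID))
		e->tv.num_shared = 0;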
We could provide an IOCTL for the BO to change the flag.
That's not the semantics we need.
But could we first figure out the semantics we want to use here?
Because I'm pretty sure we don't actually need those changes at all, and as said before, I'm certainly NAKing things which break existing use cases.
Please read how other drivers do this and at least _try_ to understand it. I'm really losing my patience here with you NAKing patches you're not even understanding (or did you actually read and fully understand the entire story I typed up here, and your NAK is on the entire thing?). There's not much useful conversation to be had with that approach. And with drivers I mean kernel + userspace here.
That's the other frustrating part: You're trying to fix this purely in the kernel. This is exactly one of those issues why we require open source userspace, so that we can fix the issues correctly across the entire stack. And meanwhile you're steadfastly refusing to even look at the userspace side of the picture.
Also, I thought through your tlb issue: why are you even putting these tlb flush fences into the shared dma_resv slots? If you store them somewhere else in the amdgpu private part, the oversync issue goes away:
- in your ttm bo move callback, you can just make your bo copy job depend on them too (you have to anyway)
- even for p2p there's not an issue here, because you have the ->move_notify callback, and can then lift the tlb flush fences from your private place to the shared slots so the exporter can see them
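Something like this hypothetical helper is what I mean (untested; the tlb_flush_fence field is invented for illustration, the dma_resv calls are the existing kernel API):

/* Publish the privately stored TLB flush fence into the BO's shared
 * dma_resv slots only when sharing actually matters, e.g. from the
 * ->move_notify path. Caller must hold the reservation lock. */
static int amdgpu_bo_publish_tlb_fence(struct amdgpu_bo *bo)
{
	struct dma_resv *resv = bo->tbo.base.resv;
	int r;

	if (!bo->tlb_flush_fence)	/* hypothetical private slot */
		return 0;

	r = dma_resv_reserve_shared(resv, 1);
	if (r)
		return r;

	dma_resv_add_shared_fence(resv, bo->tlb_flush_fence);
	dma_fence_put(bo->tlb_flush_fence);
	bo->tlb_flush_fence = NULL;
	return 0;
}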
The kernel move fences, on the other hand, are a bit nastier to wring through the p2p dma-buf interface. That one probably needs something new.
-Daniel
Regards, Christian.
-Daniel
Are you bored enough to type this up for radv? I'll give Jason's kernel stuff another review meanwhile.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch