On 09.12.20 at 11:16, Daniel Vetter wrote:
[SNIP]
At least I would be fine with that. For now, amdgpu is the only exporter that implements the explicit pin/unpin semantics anyway.
Yup, I think that makes sense (if it works). Maybe we could use something like:
a) dma_buf ->pin exists, the driver is dynamic. This means dma_buf_vmap needs to pin first and then call ->vmap. dma_buf_vmap_local, on the other hand, can call ->vmap directly, since the exporter relies on either a pin or dma_resv_lock.
b) dma_buf ->pin is not implemented, the driver is a legacy pile. dma_buf_vmap will pin (somewhere at least, or rely on some implicit pin), while dma_buf_vmap_local doesn't work and should fail. A rough sketch of this split is below.
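A minimal sketch of that a)/b) dispatch, just to make the idea concrete. This is not the actual dma-buf core code; the exporter_pin()/exporter_unpin() helpers are hypothetical stand-ins (the real dma_buf_ops.pin takes a struct dma_buf_attachment), and error handling is simplified:

#include <linux/dma-buf.h>
#include <linux/dma-buf-map.h>
#include <linux/dma-resv.h>

/* Hypothetical helpers standing in for the exporter-side pin/unpin path. */
static int exporter_pin(struct dma_buf *dmabuf);
static void exporter_unpin(struct dma_buf *dmabuf);

int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
{
	int ret;

	if (!dmabuf->ops->vmap)
		return -EINVAL;

	/* a) dynamic exporter: pin first so the buffer can't move ... */
	if (dmabuf->ops->pin) {
		ret = exporter_pin(dmabuf);
		if (ret)
			return ret;
	}

	/* ... then map. Legacy exporters (b) pin implicitly in ->vmap. */
	ret = dmabuf->ops->vmap(dmabuf, map);
	if (ret && dmabuf->ops->pin)
		exporter_unpin(dmabuf);

	return ret;
}

int dma_buf_vmap_local(struct dma_buf *dmabuf, struct dma_buf_map *map)
{
	/* The caller keeps the reservation lock held while the mapping is used. */
	dma_resv_assert_held(dmabuf->resv);

	/* b) legacy exporter without ->pin: vmap_local is not supported. */
	if (!dmabuf->ops->pin || !dmabuf->ops->vmap)
		return -EOPNOTSUPP;

	return dmabuf->ops->vmap(dmabuf, map);
}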
I think I read in the dma-buf documentation that pin is supposed to put the BO in a domain that is suitable for scanout. Now I don't really trust this to work. Amdgpu, radeon and nouveau put it into the GTT region; qxl appears to leave it wherever it already is.
Uh, that sounds wrong, or at least not the full complexity. The main use of ->pin right now is to freeze the dma-buf into system memory when there's a non-dynamic attachment for a dynamic exporter. There's also a dma_buf_pin call in amdgpu, and I think that only works for scanout on integrated GPUs. Pinning to VRAM would break sharing for all existing dma-buf users.
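For context, roughly where that freeze happens (simplified from memory, not a verbatim copy of dma_buf_dynamic_attach()): a dynamic exporter gets pinned at attach time when the importer does not support move_notify:

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>

static int attach_pin_if_importer_is_static(struct dma_buf_attachment *attach)
{
	struct dma_buf *dmabuf = attach->dmabuf;
	int ret;

	/* Importer can handle moves itself, nothing to freeze. */
	if (!dma_buf_is_dynamic(dmabuf) || attach->importer_ops)
		return 0;

	/* Dynamic exporter, static importer: pin the buffer (usually into
	 * system memory) so it cannot move underneath the importer. */
	ret = dma_resv_lock(dmabuf->resv, NULL);
	if (ret)
		return ret;
	ret = dma_buf_pin(attach);
	dma_resv_unlock(dmabuf->resv);

	return ret;
}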
Christian can you perhaps explain when amdgpu uses dma_buf_pin()?
You guys are writing mails faster than I can answer :)
dma_buf_pin() should be used when the buffer can't move any more, and it is preferred to be in a location which is accessible by all devices; that usually means system memory.
My take is that the kerneldoc for the ->pin callback should clarify that it should pin into system memory, for legacy (non-dynamic) dma-buf users.
I thought I had written that in the documentation for the pin callback, but it looks like I forgot to do so.
And dma_buf_pin() should gain some kerneldoc which then states that "This should only be used in limited use cases like scanout and not for temporary pin operations." I think the problem with this kerneldoc is simply that it tries to document the exporter and importer side of the contract in one place and makes a mess of it, plus leaves the actual importer side function undocumented. I think the kerneldoc also predates the final patch version, and we didn't update it fully.
That's a really good point and I noticed this mess before as well.
How about this: we document the exporter side on the dma-buf function table (because the exporter should implement those), and the importer side on the dma-buf functions which then call the exporter functions.
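As a rough illustration of that split (placeholder wording only, assuming the dma_buf_map based vmap from the quoted patch):

/* In struct dma_buf_ops: the exporter-side rules live on the callback. */

/**
 * @vmap:
 *
 * Exporter contract: called with &dma_buf.resv held, must not pin the
 * buffer, and should be cheap enough to be called repeatedly.
 */
int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);

/* In dma-buf.c: the importer-side rules live on the wrapper function. */

/**
 * dma_buf_vmap - map a dma-buf into kernel address space
 *
 * Importer contract: keeps the buffer from moving for the lifetime of
 * the mapping (by pinning or an equivalent guarantee); pair every call
 * with dma_buf_vunmap().
 */
int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);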
Regards, Christian.
I think, to keep the transition work low, fbdev helpers could first try dma_resv_lock + dma_buf_vmap_local; if that fails, drop the dma_resv_lock and fall back to the pinning dma_buf_vmap. That way we don't have to convert the shmem helpers over to dma_resv locking, which should help.
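Something along these lines, as a sketch of that fallback (assuming the dma_buf_vmap_local()/dma_buf_vunmap_local() interface under discussion; not actual fbdev helper code). On success with *is_local set, the caller keeps dmabuf->resv locked for as long as the mapping is in use and calls dma_buf_vunmap_local() before unlocking:

#include <linux/dma-buf.h>
#include <linux/dma-buf-map.h>
#include <linux/dma-resv.h>

static int fbdev_vmap_with_fallback(struct dma_buf *dmabuf,
				    struct dma_buf_map *map, bool *is_local)
{
	int ret;

	/* First try the new locked, non-pinning path. */
	ret = dma_resv_lock(dmabuf->resv, NULL);
	if (ret)
		return ret;

	ret = dma_buf_vmap_local(dmabuf, map);
	if (!ret) {
		*is_local = true; /* mapping stays valid while resv is held */
		return 0;
	}

	/* vmap_local not supported: drop the lock, use the pinning path. */
	dma_resv_unlock(dmabuf->resv);
	*is_local = false;
	return dma_buf_vmap(dmabuf, map);
}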
In the meantime I have made a patchset that updates the cma, shmem and vram helpers with vmap_local and converts the fbdev emulation as well. It needs a bit more testing before being posted.
Nice, even better! -Daniel
Best regards Thomas
And ttm drivers would use the new clean interface, so at least everyone using dma_resv today is all fine. Intel's conversion to dma_resv locking is in flight, but that needs a conversion to the dynamic interface anyway; the current code splats. And dynamic means explicit pin/unpin callbacks, so that should be good too. -Daniel
Regards, Christian.
Best regards Thomas
[1] https://patchwork....
- The new one, which allows vmapping with just dma_resv locked, and should have some caching in exporters.
Breaking code and then adding todos about it is kind of not a cool approach here, imo.
Also I guess ttm_bo_vmap should have a check that either the buffer is pinned, or dma_resv_lock is held.
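Maybe something like this check at the top of ttm_bo_vmap() (a sketch only; pin_count and base.resv follow current TTM naming, but this isn't meant as an actual patch):

#include <linux/dma-resv.h>
#include <drm/ttm/ttm_bo_api.h>

static void ttm_bo_vmap_assert_pinned_or_locked(struct ttm_buffer_object *bo)
{
	/* A pinned buffer cannot move, so no reservation lock is needed. */
	if (bo->pin_count)
		return;

	/* Otherwise the caller must hold the reservation lock. */
	dma_resv_assert_held(bo->base.resv);
}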
Cheers, Daniel
> Best regards
> Thomas
>
>>> ---
>>>   Documentation/gpu/todo.rst | 15 +++++++++++++
>>>   include/drm/drm_gem.h      |  4 ++++
>>>   include/linux/dma-buf.h    | 45 ++++++++++++++++++++++++++++++++++++++
>>>   3 files changed, 64 insertions(+)
>>>
>>> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
>>> index 009d8e6c7e3c..32bb797a84fc 100644
>>> --- a/Documentation/gpu/todo.rst
>>> +++ b/Documentation/gpu/todo.rst
>>> @@ -505,6 +505,21 @@ Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vette
>>>   Level: Intermediate
>>> +Enforce rules for dma-buf vmap and pin ops
>>> +------------------------------------------
>>> +
>>> +Exporter implementations of vmap and pin in struct dma_buf_ops (and consequently
>>> +struct drm_gem_object_funcs) use a variety of locking semantics. Some rely on
>>> +the caller holding the dma-buf's reservation lock, some do their own locking,
>>> +some don't require any locking. VRAM helpers even used to pin as part of vmap.
>>> +
>>> +We need to review each exporter and enforce the documented rules.
>>> +
>>> +Contact: Christian König, Daniel Vetter, Thomas Zimmermann <tzimmermann@suse.de>
>>> +
>>> +Level: Advanced
>>> +
>>> +
>>>   Core refactorings
>>>   =================
>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>>> index 5e6daa1c982f..1864c6a721b1 100644
>>> --- a/include/drm/drm_gem.h
>>> +++ b/include/drm/drm_gem.h
>>> @@ -138,6 +138,8 @@ struct drm_gem_object_funcs {
>>>        * drm_gem_dmabuf_vmap() helper.
>>>        *
>>>        * This callback is optional.
>>> +     *
>>> +     * See also struct dma_buf_ops.vmap
>>>        */
>>>       int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>>> @@ -148,6 +150,8 @@ struct drm_gem_object_funcs {
>>>        * drm_gem_dmabuf_vunmap() helper.
>>>        *
>>>        * This callback is optional.
>>> +     *
>>> +     * See also struct dma_buf_ops.vunmap
>>>        */
>>>       void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>>> index cf72699cb2bc..dc81fdc01dda 100644
>>> --- a/include/linux/dma-buf.h
>>> +++ b/include/linux/dma-buf.h
>>> @@ -267,7 +267,52 @@ struct dma_buf_ops {
>>>        */
>>>       int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
>>> +    /**
>>> +     * @vmap:
>>
>> There's already a @vmap and @vunmap kerneldoc at the top comment, that
>> needs to be removed.
>> -Daniel
>>
>>> +     *
>>> +     * Returns a virtual address for the buffer.
>>> +     *
>>> +     * Notes to callers:
>>> +     *
>>> +     * - Callers must hold the struct dma_buf.resv lock before calling
>>> +     *   this interface.
>>> +     *
>>> +     * - Callers must provide means to prevent the mappings from going
>>> +     *   stale, such as holding the reservation lock or providing a
>>> +     *   move-notify callback to the exporter.
>>> +     *
>>> +     * Notes to implementors:
>>> +     *
>>> +     * - Implementations must expect pairs of @vmap and @vunmap to be
>>> +     *   called frequently and should optimize for this case.
>>> +     *
>>> +     * - Implementations should avoid additional operations, such as
>>> +     *   pinning.
>>> +     *
>>> +     * - Implementations may expect the caller to hold the dma-buf's
>>> +     *   reservation lock to protect against concurrent calls and
>>> +     *   relocation.
>>> +     *
>>> +     * - Implementations may provide additional guarantees, such as working
>>> +     *   without holding the reservation lock.
>>> +     *
>>> +     * This callback is optional.
>>> +     *
>>> +     * Returns:
>>> +     *
>>> +     * 0 on success or a negative error code on failure.
>>> +     */
>>>       int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>> +
>>> +    /**
>>> +     * @vunmap:
>>> +     *
>>> +     * Releases the address previously returned by @vmap.
>>> +     *
>>> +     * This callback is optional.
>>> +     *
>>> +     * See also @vmap()
>>> +     */
>>>       void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>> };
>>> --
>>> 2.29.2
>>>
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer