Let's have some competition here for dma_buf mmap support ;-)
Compared to Rob Clark's RFC I've ditched the prepare/finish hooks
and corresponding ioctls on the dma_buf file. The major reason for
that is that many people seem to be under the impression that this is
also for synchronization with outstanding asynchronous processing.
I'm pretty massively opposed to this because:
- It boils down to reinventing a new rather general-purpose userspace
synchronization interface. If we look at things like futexes, this
is hard to get right.
- Furthermore a lot of kernel code has to interact with this
synchronization primitive. This smells a lot like the dri1 hw_lock,
a horror show I prefer not to reinvent.
- Even more fun is that multiple different subsystems would interact
here, so we have plenty of opportunities to create funny deadlock
scenarios.
I think synchronization is a wholesale different problem from data
sharing and should be tackled as an orthogonal problem.
Now we could demand that prepare/finish may only ensure cache
coherency (as Rob intended), but that runs up into the next problem:
We not only need mmap support to facilitate sw-only processing nodes
in a pipeline (without jumping through hoops by importing the dma_buf
into some sw-access only importer), which allows for a nicer
ION->dma-buf upgrade path for existing Android userspace. We also need
mmap support for existing importing subsystems to support existing
userspace libraries. And a lot of these subsystems are expected to
export coherent userspace mappings.
So prepare/finish can only ever be optional and the exporter /needs/
to support coherent mappings. Given that mmap access is always
somewhat fallback-y in nature I've decided to drop this optimization,
instead of just making it optional. If we demonstrate a clear need for
this, supported by benchmark results, we can always add it in again
later as an optional extension.
Another difference compared to Rob's RFC is the above-mentioned support
for mapping a dma-buf through facilities provided by the importer,
which results in mmap support no longer being optional.
Note that this dma-buf mmap patch does _not_ support every possible
insanity an existing subsystem could pull off with mmap: Because it
does not allow intercepting pagefaults and shooting down ptes,
importing subsystems can't add some magic of their own at these points
(e.g. to automatically synchronize with outstanding rendering or set up
some special resources). I've done a cursory read through a few mmap
implementations of various subsystems and I'm hopeful that we can avoid
this (and the complexity it'd bring with it).
Additionally I've extended the documentation a bit to explain the hows
and whys of this mmap extension.
Comments, reviews and flames highly welcome.
Cheers, Daniel
---
Documentation/dma-buf-sharing.txt | 84 +++++++++++++++++++++++++++++++++---
drivers/base/dma-buf.c | 64 +++++++++++++++++++++++++++-
include/linux/dma-buf.h | 16 +++++++
3 files changed, 156 insertions(+), 8 deletions(-)
diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
index a6d4c37..c42a4a5 100644
--- a/Documentation/dma-buf-sharing.txt
+++ b/Documentation/dma-buf-sharing.txt
@@ -29,13 +29,6 @@ The buffer-user
in memory, mapped into its own address space, so it can access the same area
of memory.
-*IMPORTANT*: [see https://lkml.org/lkml/2011/12/20/211 for more details]
-For this first version, A buffer shared using the dma_buf sharing API:
-- *may* be exported to user space using "mmap" *ONLY* by exporter, outside of
- this framework.
-- with this new iteration of the dma-buf api cpu access from the kernel has been
- enable, see below for the details.
-
dma-buf operations for device dma only
--------------------------------------
@@ -313,6 +306,83 @@ Access to a dma_buf from the kernel context involves three steps:
enum dma_data_direction dir);
+Direct Userspace Access/mmap Support
+------------------------------------
+
+Being able to mmap an exported dma-buf buffer object has 2 main use-cases:
+- CPU fallback processing in a pipeline and
+- supporting existing mmap interfaces in importers.
+
+1. CPU fallback processing in a pipeline
+
+ In many processing pipelines it is sometimes required that the cpu can access
+ the data in a dma-buf (e.g. for thumbnail creation, snapshots, ...). To avoid
+ the need to handle this specially in userspace frameworks for buffer sharing
+ it's ideal if the dma_buf fd itself can be used to access the backing storage
+ from userspace using mmap.
+
+ Furthermore Android's ION framework already supports this (and is otherwise
+ rather similar to dma-buf from the userspace consumer side, also using fds as
+ handles). So it's beneficial to support this in a similar fashion on
+ dma-buf to have a good transition path for existing Android userspace.
+
+ No special interfaces, userspace simply calls mmap on the dma-buf fd.
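+
+ As a minimal userspace sketch (assuming fd is a dma-buf file descriptor
+ received through the sharing framework and size is the buffer's invariant
+ size; error handling omitted):
+
+   void *vaddr = mmap(NULL, size, PROT_READ | PROT_WRITE,
+                      MAP_SHARED, fd, 0);
+   /* ... sw processing, e.g. thumbnail creation ... */
+   munmap(vaddr, size);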
+
+2. Supporting existing mmap interfaces in importers
+
+ Similar to the motivation for kernel cpu access it is again important that
+ the userspace code of a given importing subsystem can use the same interfaces
+ with an imported dma-buf buffer object as with a native buffer object. This is
+ especially important for drm where the userspace part of contemporary OpenGL,
+ X, and other drivers is huge, and reworking them to use a different way to
+ mmap a buffer would be rather invasive.
+
+ The assumption in the current dma-buf interfaces is that redirecting the
+ initial mmap is all that's needed. A survey of some of the existing
+ subsystems shows that no driver seems to do any nefarious thing like syncing
+ up with outstanding asynchronous processing on the device or allocating
+ special resources at fault time. So hopefully this is good enough, since
+ adding interfaces to intercept pagefaults and allow pte shootdowns would
+ increase the complexity quite a bit.
+
+ Interface:
+ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
+ unsigned long);
+
+ If the importing subsystem simply provides a special-purpose mmap call to set
+ up a mapping in userspace, calling do_mmap with dma_buf->file will equally
+ achieve that for a dma-buf object.
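+
+ As an illustration, an importer's mmap handler could then boil down to
+ the following (the importer-side names here are made up):
+
+   static int importer_bo_mmap(struct importer_bo *bo,
+                               struct vm_area_struct *vma)
+   {
+           /* hand the whole vma over to the exporter */
+           return dma_buf_mmap(bo->dmabuf, vma, 0);
+   }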
+
+3. Implementation notes for exporters
+
+ Because dma-buf buffers have invariant size over their lifetime, the dma-buf
+ core checks whether a vma is too large and rejects such mappings. The
+ exporter hence does not need to duplicate this check.
+
+ Because existing importing subsystems might presume coherent mappings for
+ userspace, the exporter needs to set up a coherent mapping. If that's not
+ possible, it needs to fake coherency by manually shooting down ptes when
+ leaving the cpu domain and flushing caches at fault time. Note that all the
+ dma_buf files share the same anon inode, hence the exporter needs to replace
+ the dma_buf file stored in vma->vm_file with its own if pte shootdown is
+ required. This is because the kernel uses the underlying inode's address_space
+ for vma tracking (and hence pte tracking at shootdown time with
+ unmap_mapping_range).
+
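+ A rough sketch of that shootdown (exporter-specific names assumed here;
+ exp_file being the exporter's own file installed in vma->vm_file):
+
+   /* on leaving the cpu domain: zap all userspace ptes so the next
+    * access faults and caches can be flushed in the fault handler */
+   unmap_mapping_range(exp_file->f_mapping, 0, dmabuf->size, 1);
+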
+ If the above shootdown dance turns out to be too expensive in certain
+ scenarios, we can extend dma-buf with a more explicit cache tracking scheme
+ for userspace mappings. But the current assumption is that using mmap is
+ always a slower path, so some inefficiencies should be acceptable.
+
+ Exporters that shoot down mappings (for any reason) shall not do any
+ synchronization at fault time with outstanding device operations.
+ Synchronization is an orthogonal issue to sharing the backing storage of a
+ buffer and hence should not be handled by dma-buf itself. This is explicitly
+ mentioned here because many people seem to want something like this, but if
+ different exporters handle this differently, buffer sharing can fail in
+ interesting ways depending upon the exporter (if userspace starts depending
+ upon this implicit synchronization).
+
Miscellaneous notes
-------------------
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 07cbbc6..f3b923c 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -44,8 +44,26 @@ static int dma_buf_release(struct inode *inode, struct file *file)
return 0;
}
+static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
+{
+	struct dma_buf *dmabuf;
+
+	if (!is_dma_buf_file(file))
+		return -EINVAL;
+
+	dmabuf = file->private_data;
+
+	/* check for overflowing the buffer's size */
+	if (vma->vm_pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) >
+	    dmabuf->size >> PAGE_SHIFT)
+		return -EINVAL;
+
+	return dmabuf->ops->mmap(dmabuf, vma);
+}
+
static const struct file_operations dma_buf_fops = {
.release = dma_buf_release,
+ .mmap = dma_buf_mmap_internal,
};
/*
@@ -82,7 +100,8 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
|| !ops->unmap_dma_buf
|| !ops->release
|| !ops->kmap_atomic
- || !ops->kmap)) {
+ || !ops->kmap
+ || !ops->mmap)) {
return ERR_PTR(-EINVAL);
}
@@ -406,3 +425,46 @@ void dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long page_num,
dmabuf->ops->kunmap(dmabuf, page_num, vaddr);
}
EXPORT_SYMBOL_GPL(dma_buf_kunmap);
+
+
+/**
+ * dma_buf_mmap - Set up a userspace mmap with the given vma
+ * @dma_buf: [in] buffer that should back the vma
+ * @vma: [in] vma for the mmap
+ * @pgoff: [in] offset in pages where this mmap should start within the
+ * dma-buf buffer.
+ *
+ * This function adjusts the passed in vma so that it points at the file of the
+ * dma_buf operation. It also adjusts the starting pgoff and does bounds
+ * checking on the size of the vma. Then it calls the exporter's mmap function
+ * to set up the mapping.
+ *
+ * Can return negative error values, returns 0 on success.
+ */
+int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
+ unsigned long pgoff)
+{
+ if (WARN_ON(!dmabuf || !vma))
+ return -EINVAL;
+
+ /* check for offset overflow */
+ if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) < pgoff)
+ return -EOVERFLOW;
+
+ /* check for overflowing the buffer's size */
+ if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) >
+ dmabuf->size >> PAGE_SHIFT)
+ return -EINVAL;
+
+ /* readjust the vma */
+ if (vma->vm_file)
+ fput(vma->vm_file);
+
+ vma->vm_file = dmabuf->file;
+ get_file(vma->vm_file);
+
+ vma->vm_pgoff = pgoff;
+
+ return dmabuf->ops->mmap(dmabuf, vma);
+}
+EXPORT_SYMBOL_GPL(dma_buf_mmap);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index ee7ef99..d254630 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -61,6 +61,10 @@ struct dma_buf_attachment;
* This Callback must not sleep.
* @kmap: maps a page from the buffer into kernel address space.
* @kunmap: [optional] unmaps a page from the buffer.
+ * @mmap: used to expose the backing storage to userspace. Note that the
+ * mapping needs to be coherent - if the exporter doesn't directly
+ * support this, it needs to fake coherency by shooting down any ptes
+ * when transitioning away from the cpu domain.
*/
struct dma_buf_ops {
int (*attach)(struct dma_buf *, struct device *,
@@ -92,6 +96,8 @@ struct dma_buf_ops {
void (*kunmap_atomic)(struct dma_buf *, unsigned long, void *);
void *(*kmap)(struct dma_buf *, unsigned long);
void (*kunmap)(struct dma_buf *, unsigned long, void *);
+
+ int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
};
/**
@@ -167,6 +173,9 @@ void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);
void *dma_buf_kmap(struct dma_buf *, unsigned long);
void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);
+
+int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
+ unsigned long);
#else
static inline struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
@@ -247,6 +256,13 @@ static inline void dma_buf_kunmap(struct dma_buf *, unsigned long,
void *)
{
}
+
+static inline int dma_buf_mmap(struct dma_buf *dmabuf,
+ struct vm_area_struct *vma,
+ unsigned long pgoff)
+{
+ return -ENODEV;
+}
#endif /* CONFIG_DMA_SHARED_BUFFER */
#endif /* __DMA_BUF_H__ */
--
1.7.7.5
Hi Sumit,
Here are 4 dma-buf patches that fix small issues.
Laurent Pinchart (4):
dma-buf: Constify ops argument to dma_buf_export()
dma-buf: Remove unneeded sanity checks
dma-buf: Return error instead of using a goto statement when possible
dma-buf: Move code out of mutex-protected section in dma_buf_attach()
drivers/base/dma-buf.c | 26 +++++++++++---------------
include/linux/dma-buf.h | 8 ++++----
2 files changed, 15 insertions(+), 19 deletions(-)
--
Regards,
Laurent Pinchart
Hello everyone,
This patchset is an incremental patch to patchset created by Sumit
Semwal [1]. The patches are dedicated to help find a better solution for
support of buffer sharing by V4L2 API. It is expected to start discussion on
final installment for dma-buf in vb2-dma-contig allocator. Current version of
the patches contain little documentation. It is going to be fixed after
achieving consensus about design for buffer exporting. Moreover the API
between vb2-core and the allocator should be revised.
The amount of changes to vb2-dma-contig.c was significant, making the difference
patch very difficult to read. Therefore the patch was split into two parts:
one removes the old file, the next creates the new version of the file.
The patchset contains an extension for the DMA API and its implementation for the
ARM architecture. Therefore the patchset should be applied on top of:
http://git.infradead.org/users/kmpark/linux-2.6-samsung/shortlog/refs/heads…
after applying the patches from [2] and [1].
v1: List of changes since [1].
- support for the DMA API extension dma_get_pages; the function is used to retrieve the pages
used to create a DMA mapping.
- small fixes/code cleanup to videobuf2
- added prepare and finish callbacks to vb2 allocators; they are used to keep consistency between DMA and CPU access to the memory (by Marek Szyprowski)
- support for exporting a DMABUF buffer in V4L2 and Videobuf2, originated from [3].
- support for dma-buf exporting in the vb2-dma-contig allocator
- support for DMABUF in the s5p-tv and s5p-fimc (capture interface) drivers, originated from [3]
- changed handling of userptr buffers (by Marek Szyprowski, Andrzej Pietrasiewicz)
- let the mmap method use the dma_mmap_writecombine call (by Marek Szyprowski)
[1] http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/4296…
[2] https://lkml.org/lkml/2011/12/26/29
[3] http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/3635…
Marek Szyprowski (2):
[media] media: vb2: remove plane argument from call_memop and cleanup
mempriv usage
media: vb2: add prepare/finish callbacks to allocators
Tomasz Stanislawski (8):
arm: dma: support for dma_get_pages
v4l: vb2: fixes for DMABUF support
v4l: add buffer exporting via dmabuf
v4l: vb2: add buffer exporting via dmabuf
v4l: vb2: remove dma-contig allocator
v4l: vb2-dma-contig: code refactoring, support for DMABUF exporting
v4l: fimc: integrate capture i-face with dmabuf
v4l: s5p-tv: mixer: integrate with dmabuf
arch/arm/include/asm/dma-mapping.h | 8 +
arch/arm/mm/dma-mapping.c | 44 ++
drivers/media/video/s5p-fimc/fimc-capture.c | 11 +-
drivers/media/video/s5p-tv/mixer_video.c | 11 +-
drivers/media/video/v4l2-compat-ioctl32.c | 1 +
drivers/media/video/v4l2-ioctl.c | 11 +
drivers/media/video/videobuf2-core.c | 114 ++++-
drivers/media/video/videobuf2-dma-contig.c | 754 +++++++++++++++++++++------
include/linux/dma-mapping.h | 2 +
include/linux/videodev2.h | 1 +
include/media/v4l2-ioctl.h | 1 +
include/media/videobuf2-core.h | 10 +-
12 files changed, 789 insertions(+), 179 deletions(-)
--
1.7.5.4
From: Benjamin Gaignard <benjamin.gaignard(a)linaro.org>
The goal of these patches is to allow ION clients (drivers or userland applications)
to use the Contiguous Memory Allocator (CMA).
To get more info about CMA:
http://lists.linaro.org/pipermail/linaro-mm-sig/2012-February/001328.html
patches version 4:
- add ION_HEAP_TYPE_DMA heap type in ion_heap_type enum.
- CMA heap is now a "native" ION heap.
- add ion_heap_create_full function to keep backward compatibility.
- clean up included files in CMA heap
- ux500-ion is using ion_heap_create_full instead of ion_heap_create
patches version 3:
- add a private field in the ion_heap structure instead of exposing the
ion_device structure to all heaps
- ion_cma_heap is no longer a platform driver
- ion_cma_heap uses the ion_heap private field to store the device pointer and
make the link with reserved CMA regions
- provide the ux500-ion driver and a configuration file for the snowball board to
give an example of how to use CMA heaps
patches version 2:
- address review comments from Andy Green
Benjamin Gaignard (4):
fix ion_platform_data definition
add private field in ion_heap structure
add CMA heap
add test/example driver for ux500 platform
arch/arm/mach-ux500/board-mop500.c | 77 +++++++++++++++++++
drivers/gpu/ion/Kconfig | 6 ++
drivers/gpu/ion/Makefile | 4 +-
drivers/gpu/ion/ion_cma_heap.c | 129 ++++++++++++++++++++++++++++++++
drivers/gpu/ion/ion_heap.c | 14 +++-
drivers/gpu/ion/ion_priv.h | 11 +++
drivers/gpu/ion/ux500/Makefile | 1 +
drivers/gpu/ion/ux500/ux500_ion.c | 142 ++++++++++++++++++++++++++++++++++++
include/linux/ion.h | 5 +-
9 files changed, 386 insertions(+), 3 deletions(-)
create mode 100644 drivers/gpu/ion/ion_cma_heap.c
create mode 100644 drivers/gpu/ion/ux500/Makefile
create mode 100644 drivers/gpu/ion/ux500/ux500_ion.c
> If the API was to also be used for synchronization it would have to
> include an atomic "prepare multiple" ioctl which blocked until all
> the buffers listed by the application were available. In the same
Too slow already. You are now serializing stuff while what we want to do
really is
nobody_else_gets_buffers_next([list])
on available(buffer)
dispatch_work(buffer)
so that you can maximise parallelism without allowing deadlocks. If
you've got high memory bandwidth and 8+ cores the 'stop everything'
model isn't great.
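As a sketch of the shape I mean (all names invented for illustration):

	/* per-buffer callback, fired the moment that one buffer is idle */
	struct pending_buf {
		struct list_head link;
		void (*on_available)(struct pending_buf *buf);
	};

Each buffer then dispatches its work as soon as it is ready instead of
everything waiting on the slowest one.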
> This might be a good argument for keeping synchronization and cache
> maintenance separate, though even ignoring synchronization I would
> think being able to issue cache maintenance operations for multiple
> buffers in a single ioctl might present some small efficiency gains.
> However as Rob points out, CPU access is already in slow/legacy
> territory.
Dangerous assumption. I do think they should be separate. For one it
makes the case where synchronization is needed but cache management is
handled by hardware much easier to split cleanly. Assuming CPU access is
slow/legacy reflects
a certain model of relatively slow CPU and accelerators where falling off
the acceleration path is bad. On a higher end processor falling off the
acceleration path isn't a performance matter so much as a power concern.
> KDS we differentiated jobs which needed "exclusive access" to a
> buffer and jobs which needed "shared access" to a buffer. Multiple
> jobs could access a buffer at the same time if those jobs all
Makes sense as it's a reader/writer lock and it reflects MESI/MOESI
caching and cache policy in some hardware/software assists.
> display controller will be reading the front buffer, but the GPU
> might also need to read that front buffer. So perhaps adding
> "read-only" & "read-write" access flags to prepare could also be
> interpreted as shared & exclusive accesses, if we went down this
> route for synchronization that is. :-)
mmap includes read/write info so probably using that works out. It also
means that you have the stuff mapped in a way that will bus error or
segfault anyone who goofs rather than give them the usual 'deep
weirdness' behaviour you get with mishandling of caching bits.
Alan
> > dma-buf file descriptor. Userspace access to the buffer should be
> > bracketed with DMA_BUF_IOCTL_{PREPARE,FINISH}_ACCESS ioctl calls to
> > give the exporting driver a chance to deal with cache synchronization
> > and such for cached userspace mappings without resorting to page
There should be flags indicating if this is necessary. We don't want extra
syscalls on hardware that doesn't need it. The other question is what
info is needed as you may only want to poke a few pages out of cache and
the prepare/finish on its own gives no info.
> E.g. If another device was writing to the buffer, the prepare ioctl
> could block until that device had finished accessing that buffer.
How do you avoid deadlocks on this ? We need very clear ways to ensure
things always complete in some form given multiple buffer
owner/requestors and the fact this API has no "prepare-multiple-buffers"
support.
Alan
From: Rob Clark <rob(a)ti.com>
Works in a similar way to get_file(), and is needed in cases such as
when the exporter needs to also keep a reference to the dmabuf (that
is later released with a dma_buf_put()), and possibly other similar
cases.
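A hypothetical usage sketch (the exporter-side names are invented):

	/* reuse a cached dmabuf on re-export; the extra reference taken
	 * here is later dropped with dma_buf_put() */
	if (obj->dmabuf) {
		get_dma_buf(obj->dmabuf);
		return obj->dmabuf;
	}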
Signed-off-by: Rob Clark <rob(a)ti.com>
---
Minor update on original to add a missing #include
include/linux/dma-buf.h | 15 +++++++++++++++
1 files changed, 15 insertions(+), 0 deletions(-)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 891457a..bc4203dc 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -30,6 +30,7 @@
#include <linux/scatterlist.h>
#include <linux/list.h>
#include <linux/dma-mapping.h>
+#include <linux/fs.h>
struct dma_buf;
struct dma_buf_attachment;
@@ -110,6 +111,20 @@ struct dma_buf_attachment {
void *priv;
};
+/**
+ * get_dma_buf - convenience wrapper for get_file.
+ * @dmabuf: [in] pointer to dma_buf
+ *
+ * Increments the reference count on the dma-buf, needed by drivers
+ * that need to create additional references to the dmabuf on the
+ * kernel side. For example, an exporter that needs to keep a dmabuf ptr
+ * so that subsequent exports don't create a new dmabuf.
+ */
+static inline void get_dma_buf(struct dma_buf *dmabuf)
+{
+ get_file(dmabuf->file);
+}
+
#ifdef CONFIG_DMA_SHARED_BUFFER
struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
struct device *dev);
--
1.7.5.4
On Fri, Mar 16, 2012 at 12:24 PM, Tom Cooksey <tom.cooksey(a)arm.com> wrote:
>
>> From: Rob Clark <rob(a)ti.com>
>>
>> Enable optional userspace access to dma-buf buffers via mmap() on the
>> dma-buf file descriptor. Userspace access to the buffer should be
>> bracketed with DMA_BUF_IOCTL_{PREPARE,FINISH}_ACCESS ioctl calls to
>> give the exporting driver a chance to deal with cache synchronization
>> and such for cached userspace mappings without resorting to page
>> faulting tricks. The reasoning behind this is that, while drm
>> drivers tend to have all the mechanisms in place for dealing with
>> page faulting tricks, other driver subsystems may not. And in
>> addition, while page faulting tricks make userspace simpler, there
>> are some associated overheads.
>
> Speaking for the ARM Mali T6xx driver point of view, this API looks
> good for us. Our use-case for mmap is glReadPixels and
> glTex[Sub]Image2D on buffers the driver has imported via dma_buf. In
> the case of glReadPixels, the finish ioctl isn't strictly necessary
> as the CPU won't have written to the buffer and so doesn't need
> flushing. As such, we'd get an additional cache flush which isn't
> really necessary. But hey, it's glReadPixels - it's supposed to be
> slow. :-)
>
> I think requiring the finish ioctl in the API contract is a good
> idea, even if the CPU has only done a ro access as it allows future
> enhancements*. To "fix" the unnecessary flush in glReadPixels, I
> think we'd like to keep the finish but see an "access type"
> parameter added to prepare ioctl indicating if the access is ro or
> rw to allow the cache flush in finish to be skipped if the access
> was ro. As Rebecca says, a debug feature could even be added to
> re-map the pages as ro in prepare(ro) to catch naughty accesses. I'd
> also go as far as to say the debug feature should completely unmap
> the pages after finish too. Though for us, both the access-type
> parameter and debug features are "nice to haves" - we can make
> progress with the code as it currently stands (assuming exporters
> start using the API that is).
Perhaps it isn't a bad idea to include an access-type bitmask in the
first version. It would help optimize the cache operations a bit.
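Something along these lines, purely as a sketch (none of this is an
existing ABI):

	#define DMA_BUF_ACCESS_READ	(1 << 0)
	#define DMA_BUF_ACCESS_WRITE	(1 << 1)

	struct dma_buf_prepare_arg {
		__u32 flags;	/* DMA_BUF_ACCESS_* */
	};

A read-only prepare would then let the exporter skip the writeback in
the corresponding finish.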
> Something which also came up when discussing internally is the topic
> of mmap APIs of the importing device driver. For example, I believe
> DRM has an mmap API on GEM buffer objects. If a new dma_buf import
> ioctl was added to GEM (maybe the PRIME patches already add this),
> how would GEM's bo mmap API work?
My first thought is maybe we should just disallow this for now until
we have a chance to see if there are any possible issues with an
importer mmap'ing the buffer to userspace. We could possibly have a
helper dma_buf_mmap() fxn which in turn calls dmabuf ops->mmap() so
the mmap'ing is actually performed by the exporter on behalf of the
importer.
>
> * Future enhancements: The prepare/finish bracketing could be used
> as part of a wider synchronization scheme with other devices.
> E.g. If another device was writing to the buffer, the prepare ioctl
> could block until that device had finished accessing that buffer.
> In the same way, another device could be blocked from accessing that
> buffer until the client process called finish. We have already
> started playing with such a scheme in the T6xx driver stack we're
> terming "kernel dependency system". In this scheme each buffer has a
> FIFO of "buffer consumers" waiting to access a buffer. The idea
> being that a "buffer consumer" is fairly abstract and could be any
> device or userspace process participating in the synchronization
> scheme. Examples would be GPU jobs, display controller "scan-out"
> jobs, etc.
>
> So for example, a userspace application could dispatch a GPU
> fragment shading job into the GPU's kernel driver which will write
> to a KMS scanout buffer. The application then immediately issues a
> drm_mode_crtc_page_flip ioctl on the display controller's DRM driver
> to display the soon-to-be-rendered buffer. Inside the kernel, the
> GPU driver adds the fragment job to the dma_buf's FIFO. As the FIFO
> was empty, dma_buf calls into the GPU kernel driver to tell it it
> "owns" access to the buffer and the GPU driver schedules the job to
> run on the GPU. Upon receiving the drm_mode_crtc_page_flip ioctl,
> the DRM/KMS driver adds a scan-out job to the buffer's FIFO.
> However, the FIFO already has the GPU's fragment shading job in it
> so nothing happens until the GPU job completes. When the GPU job
> completes, the GPU driver calls into dma_buf to mark its job
> complete. dma_buf then takes the next job in its FIFO which the KMS
> driver's scanout job, calls into the KMS driver to schedule the
> pageflip. The result? A buffer gets scanned out as soon as it has
> finished being rendered without needing a round-trip to userspace.
> Sure, there are easier ways to achieve that goal, but the idea is
> that the mechanism can be used to synchronize access across multiple
> devices, which makes it useful for lots of other use-cases too.
>
>
> As I say, we have already implemented something which works as I
> describe but where the buffers are abstract resources not linked to
> dma_buf. I'd like to discuss the finer points of the mechanisms
> further, but if it's looking like there's interest in this approach
> we'll start re-writing the code we have to sit on-top of dma_buf
> and posting it as RFCs to the various lists. The intention is to get
> this to mainline, if mainline wants it. :-)
I think we do need some sort of 'sync object' (which might really just
be a 'synchronization queue' object) in the kernel. It probably
shouldn't be built-in to dma-buf, but I expect we'd want the dma_buf
struct to have a 'struct sync_queue *' (or whatever it ends up being
called).
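Purely illustrative, the rough shape that could take (none of these
names exist anywhere yet):

	struct sync_queue {
		spinlock_t lock;
		struct list_head jobs;	/* FIFO of pending consumers */
	};

with a 'struct sync_queue *' member hanging off struct dma_buf.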
The sync-queue seems like a reasonable approach for pure cpu-sw based
synchronization. The only thing I'm not sure about is how to also deal with
hw that supports any sort of auto synchronization without cpu sw
involvement.
BR,
-R
> Personally, what I particularly like about this approach to
> synchronization is that it doesn't require any interfaces to be
> modified. I think/hope that makes it easier to port existing drivers
> and sub-systems to take advantage of it. The buffer itself is the
> synchronization object and interfaces already pass buffers around so
> don't need modification. There are of course some limitations with
> this approach, the main one we can think of being that it can't
> really be used for A/V sync. It kinda assumes "jobs" in the FIFO
> should be run as soon as the preceding job completes, which isn't
> the case when streaming real-time video. Though nothing precludes
> more explicit sync objects being used in conjunction with this
> approach.
>
>
> Cheers,
>
> Tom
>
>
>
>
>
> _______________________________________________
> dri-devel mailing list
> dri-devel(a)lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel