On 17.04.21 16:21, Muchun Song wrote:
> On Sat, Apr 17, 2021 at 9:44 PM <Peter.Enderborg(a)sony.com> wrote:
>> On 4/17/21 3:07 PM, Muchun Song wrote:
>>> On Sat, Apr 17, 2021 at 6:41 PM Peter Enderborg
>>> <peter.enderborg(a)sony.com> wrote:
>>>> This adds a counter for the total memory used by dma-bufs. Details
>>>> can be found in debugfs, however that is not for everyone
>>>> and not always available. dma-bufs are indirectly allocated by
>>>> userspace, so with this value we can monitor and detect
>>>> userspace applications that have problems.
>>>>
>>>> Signed-off-by: Peter Enderborg <peter.enderborg(a)sony.com>
>>>> ---
>>>> drivers/dma-buf/dma-buf.c | 13 +++++++++++++
>>>> fs/proc/meminfo.c | 5 ++++-
>>>> include/linux/dma-buf.h | 1 +
>>>> 3 files changed, 18 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>>> index f264b70c383e..197e5c45dd26 100644
>>>> --- a/drivers/dma-buf/dma-buf.c
>>>> +++ b/drivers/dma-buf/dma-buf.c
>>>> @@ -37,6 +37,7 @@ struct dma_buf_list {
>>>> };
>>>>
>>>> static struct dma_buf_list db_list;
>>>> +static atomic_long_t dma_buf_global_allocated;
>>>>
>>>> static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
>>>> {
>>>> @@ -79,6 +80,7 @@ static void dma_buf_release(struct dentry *dentry)
>>>> if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
>>>> dma_resv_fini(dmabuf->resv);
>>>>
>>>> + atomic_long_sub(dmabuf->size, &dma_buf_global_allocated);
>>>> module_put(dmabuf->owner);
>>>> kfree(dmabuf->name);
>>>> kfree(dmabuf);
>>>> @@ -586,6 +588,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
>>>> mutex_lock(&db_list.lock);
>>>> list_add(&dmabuf->list_node, &db_list.head);
>>>> mutex_unlock(&db_list.lock);
>>>> + atomic_long_add(dmabuf->size, &dma_buf_global_allocated);
>>>>
>>>> return dmabuf;
>>>>
>>>> @@ -1346,6 +1349,16 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
>>>> }
>>>> EXPORT_SYMBOL_GPL(dma_buf_vunmap);
>>>>
>>>> +/**
>>>> + * dma_buf_allocated_pages - Return the number of pages
>>>> + * allocated for dma-buf
>>>> + */
>>>> +long dma_buf_allocated_pages(void)
>>>> +{
>>>> + return atomic_long_read(&dma_buf_global_allocated) >> PAGE_SHIFT;
>>>> +}
>>>> +EXPORT_SYMBOL_GPL(dma_buf_allocated_pages);
>>> dma_buf_allocated_pages is only called from fs/proc/meminfo.c.
>>> I am confused why it should be exported. If it won't be called
>>> from the driver module, we should not export it.
>> Ah. I thought you did not want the GPL restriction. I don't have a
>> strong opinion about it. It's written to follow the rest of the module.
>> It is not needed for using dma-buf from a kernel module, but I
>> don't see any reason for hiding it either.
> Modules do not need dma_buf_allocated_pages; hiding it
> prevents modules from calling it. So I think that
> EXPORT_SYMBOL_GPL is unnecessary. If one day someone
> wants to call it from a module, it won't be too late to export
> it at that time.
Yeah, that is a rather good point. Only symbols which should be used by
modules/drivers should be exported.
Christian.
>
>>
>>> Thanks.
>>>
>>>> +
>>>> #ifdef CONFIG_DEBUG_FS
>>>> static int dma_buf_debug_show(struct seq_file *s, void *unused)
>>>> {
>>>> diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
>>>> index 6fa761c9cc78..ccc7c40c8db7 100644
>>>> --- a/fs/proc/meminfo.c
>>>> +++ b/fs/proc/meminfo.c
>>>> @@ -16,6 +16,7 @@
>>>> #ifdef CONFIG_CMA
>>>> #include <linux/cma.h>
>>>> #endif
>>>> +#include <linux/dma-buf.h>
>>>> #include <asm/page.h>
>>>> #include "internal.h"
>>>>
>>>> @@ -145,7 +146,9 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
>>>> show_val_kb(m, "CmaFree: ",
>>>> global_zone_page_state(NR_FREE_CMA_PAGES));
>>>> #endif
>>>> -
>>>> +#ifdef CONFIG_DMA_SHARED_BUFFER
>>>> + show_val_kb(m, "DmaBufTotal: ", dma_buf_allocated_pages());
>>>> +#endif
>>>> hugetlb_report_meminfo(m);
>>>>
>>>> arch_report_meminfo(m);
>>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>>>> index efdc56b9d95f..5b05816bd2cd 100644
>>>> --- a/include/linux/dma-buf.h
>>>> +++ b/include/linux/dma-buf.h
>>>> @@ -507,4 +507,5 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
>>>> unsigned long);
>>>> int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>> void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>> +long dma_buf_allocated_pages(void);
>>>> #endif /* __DMA_BUF_H__ */
>>>> --
>>>> 2.17.1
>>>>
On Thu, Apr 15, 2021 at 05:40:24PM +0200, Thomas Hellström (Intel) wrote:
>
> On 4/15/21 4:40 PM, Daniel Vetter wrote:
> > On Thu, Apr 15, 2021 at 4:02 PM Thomas Hellström (Intel)
> > <thomas_os(a)shipmail.org> wrote:
> > >
> > > On 4/15/21 3:37 PM, Daniel Vetter wrote:
> > > > On Tue, Apr 13, 2021 at 09:57:06AM +0200, Christian König wrote:
> > > > > On 13.04.21 09:50, Thomas Hellström wrote:
> > > > > > Hi!
> > > > > >
> > > > > > During the dma_resv conversion of the i915 driver, we've been using ww
> > > > > > transaction lock lists to keep track of ww_mutexes that are locked
> > > > > > during the transaction so that they can be batch unlocked at suitable
> > > > > > locations. Including also the LMEM/VRAM eviction we've ended up with
> > > > > > two static lists per transaction context; one typically unlocked at the
> > > > > > end of transaction and one initialized before and unlocked after each
> > > > > > buffer object validate. This enables us to do sleeping locking at
> > > > > > eviction and keep objects locked on the eviction list until we
> > > > > > eventually succeed allocating memory (modulo minor flaws of course).
> > > > > >
> > > > > > It would be beneficial with the i915 TTM conversion to be able to
> > > > > > introduce a similar functionality that would work in ttm but also
> > > > > > cross-driver in, for example move_notify. It would also be beneficial
> > > > > > to be able to put any dma_resv ww mutex on the lists, and not require
> > > > > > it to be embedded in a particular object type.
> > > > > >
> > > > > > I started sketching some utilities for this. For TTM, for example,
> > > > > > the idea would be to pass a list head for the ww transaction lock list
> > > > > > in the ttm_operation_ctx. A function taking a ww_mutex could then
> > > > > > either attach a grabbed lock to the list for batch unlocking, or be
> > > > > > responsible for unlocking when the lock's scope is exited. It's also
> > > > > > possible to create sublists if so desired. I believe the below would be
> > > > > > sufficient to cover the i915 functionality.
> > > > > >
> > > > > > Any comments and suggestions appreciated!
> > > > > Ah yes, Daniel and I have been discussing something like this for years.
> > > > >
> > > > > I also came up with a rough implementation, but we always ran into lifetime
> > > > > issues.
> > > > >
> > > > > In other words the ww_mutexes which are on the list would need to be kept
> > > > > alive until unlocked.
> > > > >
> > > > > Because of this we kind of backed up and said we would need this on the GEM
> > > > > level instead of working with dma_resv objects.
> > > > Yeah there's a few funny concerns here that make this awkward:
> > > > - For simplicity doing these helpers at the gem level should make things a
> > > > bit easier, because then we have a standard way to drop the reference.
> > > > It does mean that the only thing you can lock like this are gem objects,
> > > > but I think that's fine. At least for a first cut.
> > > >
> > > > - This is a bit awkward for vmwgfx, but a) Zack has mentioned he's looking
> > > > into adopting gem bo internally to be able to drop a pile of code and
> > > > stop making vmwgfx the only special-case we have b) drivers which don't
> > > > need this won't need this, so should be fine.
> > > >
> > > > The other awkward thing I guess is that ttm would need to use the
> > > > embedded kref from the gem bo, but that should be transparent I think.
> > > >
> > > > - Next up is dma-buf: For i915 we'd like to do the same eviction trick
> > > > also through p2p dma-buf callbacks, so that this works the same as
> > > > eviction/reservation within a gpu. But for these internal bo you might
> > > > not have a dma-buf, so we can't just lift the trick to the dma-buf
> > > > level. But I think if we pass e.g. a struct list_head and a callback to
> > > > unreference/unlock all the buffers in there to the exporter, plus
> > > > similar for the slowpath lock, then that should be doable without
> > > > glorious layering inversions between dma-buf and gem.
> > > >
> > > > I think for dma-buf it should even be ok if this requires that we
> > > > allocate an entire structure with kmalloc or something, allocating
> > > > memory while holding dma_resv is ok.
> > > Yes, the thing here with the suggested helpers is that you would just
> > > embed a trans_lockitem struct in the gem object (and defines the gem
> > > object ops). Otherwise and for passing to dma-buf this is pretty much
> > > exactly what you are suggesting, but the huge benefit of encapsulating
> > > the needed members like this is that when we need to change something we
> > > change it in just one place.
> > >
> > > For anything that doesn't have a gem object (dma-buf, vmwgfx or
> > > whatever) you have the choice of either allocating a struct
> > > trans_lockitem or embed it wherever you prefer. In particular, this is
> > > beneficial where you have a single dma-resv class ww-mutex sitting
> > > somewhere in the way and you don't want to needlessly have a gem object
> > > that embeds it.
> > The thing is, everyone who actually uses dma_resv_lock has a
> > gem_buffer_object underneath. So it feels a bit like flexibility for
> > no real need, and I think it would make it slightly more awkward for
> > gem drivers to neatly integrate into their cs path. The lockitem
> > struct works, but it is a bit cumbersome.
>
> Well, that's partly because it's impossible to use a standalone ww_mutex
> in a locking transaction that can only add gem objects to the list :/.
> Already in the i915 driver we have, and may want to add, various places
> where we have dead gem objects sitting around because of this.
>
> Also, more importantly, If we pass a list down the dma-buf move_notify(), a
> trans_lockitem is pretty much exactly what we expect back (except of course
> for the private pointer). It would be odd if we'd expect all list items to
> be gem objects when it's a dma-buf interface?
>
> >
> > Also if we add some wrappers to e.g. add a gem_bo to the ctx, then if
> > we decide to slip the lockitem in there, we still only need to touch
> > the helper code, and not all drivers.
>
> Well, yes assuming we always have an embedding gem object for a dma_resv
> that might be true, but either way I don't really expect the gem helpers to
> look very different. We will need the ops anyway and a specialized context
> so if the only thing we're debating is whether or not to embed a struct in
> the gem object, unless you really insist on using the gem object initially,
> I suggest we try this and if it becomes awkward, just
> s/trans_lockitem/drm_gem_object/
Hm yeah, I think you convinced me this is a bit of a bikeshed :-) Maybe call
it dma_resv_lockitem or so if we go with a top-level generic solution, and
then embed it wherever we feel like.
The annoying thing with the generic approach is that I'd like to avoid the
full callback pointer in all the gem objects, but maybe another pointer
really doesn't matter if we add a linked list anyway ...
-Daniel
>
> /Thomas
>
>
> > -Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Thu, Apr 15, 2021 at 4:02 PM Thomas Hellström (Intel)
<thomas_os(a)shipmail.org> wrote:
>
>
> On 4/15/21 3:37 PM, Daniel Vetter wrote:
> > On Tue, Apr 13, 2021 at 09:57:06AM +0200, Christian König wrote:
> >>
> >> On 13.04.21 09:50, Thomas Hellström wrote:
> >>> Hi!
> >>>
> >>> During the dma_resv conversion of the i915 driver, we've been using ww
> >>> transaction lock lists to keep track of ww_mutexes that are locked
> >>> during the transaction so that they can be batch unlocked at suitable
> >>> locations. Including also the LMEM/VRAM eviction we've ended up with
> >>> two static lists per transaction context; one typically unlocked at the
> >>> end of transaction and one initialized before and unlocked after each
> >>> buffer object validate. This enables us to do sleeping locking at
> >>> eviction and keep objects locked on the eviction list until we
> >>> eventually succeed allocating memory (modulo minor flaws of course).
> >>>
> >>> It would be beneficial with the i915 TTM conversion to be able to
> >>> introduce a similar functionality that would work in ttm but also
> >>> cross-driver in, for example move_notify. It would also be beneficial
> >>> to be able to put any dma_resv ww mutex on the lists, and not require
> >>> it to be embedded in a particular object type.
> >>>
> >>> I started sketching some utilities for this. For TTM, for example,
> >>> the idea would be to pass a list head for the ww transaction lock list
> >>> in the ttm_operation_ctx. A function taking a ww_mutex could then
> >>> either attach a grabbed lock to the list for batch unlocking, or be
> >>> responsible for unlocking when the lock's scope is exited. It's also
> >>> possible to create sublists if so desired. I believe the below would be
> >>> sufficient to cover the i915 functionality.
> >>>
> >>> Any comments and suggestions appreciated!
> >> Ah yes, Daniel and I have been discussing something like this for years.
> >>
> >> I also came up with a rough implementation, but we always ran into lifetime
> >> issues.
> >>
> >> In other words the ww_mutexes which are on the list would need to be kept
> >> alive until unlocked.
> >>
> >> Because of this we kind of backed up and said we would need this on the GEM
> >> level instead of working with dma_resv objects.
> > Yeah there's a few funny concerns here that make this awkward:
> > - For simplicity doing these helpers at the gem level should make things a
> > bit easier, because then we have a standard way to drop the reference.
> > It does mean that the only thing you can lock like this are gem objects,
> > but I think that's fine. At least for a first cut.
> >
> > - This is a bit awkward for vmwgfx, but a) Zack has mentioned he's looking
> > into adopting gem bo internally to be able to drop a pile of code and
> > stop making vmwgfx the only special-case we have b) drivers which don't
> > need this won't need this, so should be fine.
> >
> > The other awkward thing I guess is that ttm would need to use the
> > embedded kref from the gem bo, but that should be transparent I think.
> >
> > - Next up is dma-buf: For i915 we'd like to do the same eviction trick
> > also through p2p dma-buf callbacks, so that this works the same as
> > eviction/reservation within a gpu. But for these internal bo you might
> > not have a dma-buf, so we can't just lift the trick to the dma-buf
> > level. But I think if we pass e.g. a struct list_head and a callback to
> > unreference/unlock all the buffers in there to the exporter, plus
> > similar for the slowpath lock, then that should be doable without
> > glorious layering inversions between dma-buf and gem.
> >
> > I think for dma-buf it should even be ok if this requires that we
> > allocate an entire structure with kmalloc or something, allocating
> > memory while holding dma_resv is ok.
>
> Yes, the thing here with the suggested helpers is that you would just
> embed a trans_lockitem struct in the gem object (and defines the gem
> object ops). Otherwise and for passing to dma-buf this is pretty much
> exactly what you are suggesting, but the huge benefit of encapsulating
> the needed members like this is that when we need to change something we
> change it in just one place.
>
> For anything that doesn't have a gem object (dma-buf, vmwgfx or
> whatever) you have the choice of either allocating a struct
> trans_lockitem or embed it wherever you prefer. In particular, this is
> beneficial where you have a single dma-resv class ww-mutex sitting
> somewhere in the way and you don't want to needlessly have a gem object
> that embeds it.
The thing is, everyone who actually uses dma_resv_lock has a
gem_buffer_object underneath. So it feels a bit like flexibility for
no real need, and I think it would make it slightly more awkward for
gem drivers to neatly integrate into their cs path. The lockitem
struct works, but it is a bit cumbersome.
Also if we add some wrappers to e.g. add a gem_bo to the ctx, then if
we decide to slip the lockitem in there, we still only need to touch
the helper code, and not all drivers.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On 13.04.21 09:50, Thomas Hellström wrote:
> Hi!
>
> During the dma_resv conversion of the i915 driver, we've been using ww
> transaction lock lists to keep track of ww_mutexes that are locked
> during the transaction so that they can be batch unlocked at suitable
> locations. Including also the LMEM/VRAM eviction we've ended up with
> two static lists per transaction context; one typically unlocked at the
> end of transaction and one initialized before and unlocked after each
> buffer object validate. This enables us to do sleeping locking at
> eviction and keep objects locked on the eviction list until we
> eventually succeed allocating memory (modulo minor flaws of course).
>
> It would be beneficial with the i915 TTM conversion to be able to
> introduce a similar functionality that would work in ttm but also
> cross-driver in, for example move_notify. It would also be beneficial
> to be able to put any dma_resv ww mutex on the lists, and not require
> it to be embedded in a particular object type.
>
> I started sketching some utilities for this. For TTM, for example,
> the idea would be to pass a list head for the ww transaction lock list
> in the ttm_operation_ctx. A function taking a ww_mutex could then
> either attach a grabbed lock to the list for batch unlocking, or be
> responsible for unlocking when the lock's scope is exited. It's also
> possible to create sublists if so desired. I believe the below would be
> sufficient to cover the i915 functionality.
>
> Any comments and suggestions appreciated!
Ah yes, Daniel and I have been discussing something like this for years.
I also came up with a rough implementation, but we always ran into
lifetime issues.
In other words the ww_mutexes which are on the list would need to be
kept alive until unlocked.
Because of this we kind of backed up and said we would need this on the
GEM level instead of working with dma_resv objects.
Regards,
Christian.
>
> 8<------------------------------------------------------
>
> #ifndef _TRANSACTION_LOCKLIST_H_
> #define _TRANSACTION_LOCKLIST_H_
>
> struct trans_lockitem;
>
> /**
> * struct trans_locklist_ops - Ops structure for the ww locklist
> * functionality.
> *
> * Typically a const struct trans_locklist_ops is defined for each type that
> * embeds a struct trans_lockitem, or has a struct trans_lockitem pointing
> * at it using the private pointer. It can be a buffer object, reservation
> * object, a single ww_mutex or even a sublist of trans_lockitems.
> */
> struct trans_locklist_ops {
> /**
> * slow_lock: Slow lock to relax the transaction. Only used by
> * a contending lock item.
> * @item: The struct trans_lockitem to lock
> * @intr: Whether to lock interruptible
> *
> * Return: -ERESTARTSYS: Hit a signal when relaxing,
> * -EAGAIN, relaxing successful, but the contending lock
> * remains unlocked.
> */
> int (*slow_lock) (struct trans_lockitem *item, bool intr);
>
> /**
> * unlock: Unlock.
> * @item: The struct trans_lockitem to unlock.
> */
> void (*unlock) (struct trans_lockitem *item);
>
> /**
> * put: Put.
> * @item: The struct trans_lockitem to put. This function may
> * be NULL.
> */
> void (*put) (struct trans_lockitem *item);
> };
>
> /**
> * struct trans_lockitem
> * @ops: Pointer to an Ops structure for this lockitem.
> * @link: List link for the transaction locklist.
> * @private: Private info.
> * @relax_unlock: Unlock contending lock after relaxation since it is
> * likely not needed after a transaction restart.
> *
> * A struct trans_lockitem typically represents a single lock, but is
> * generic enough to represent a sublist of locks. It can either be
> * embedded, or allocated on demand. A kmem_cache might be beneficial.
> */
> struct trans_lockitem {
> const struct trans_locklist_ops *ops;
> struct list_head link;
> u32 relax_unlock:1;
> void *private;
> };
>
> /* unlock example */
> static inline void trans_unlock_put_all(struct list_head *list)
> {
> struct trans_lockitem *lock, *next;
>
> list_for_each_entry_safe(lock, next, list, link) {
> lock->ops->unlock(lock);
> list_del_init(&lock->link);
> if (lock->ops->put)
> lock->ops->put(lock);
> }
> }
>
> /* Backoff example */
> static inline int __must_check trans_backoff(struct list_head *list, bool intr,
>                                              struct trans_lockitem *contending)
> {
> int ret = 0;
>
> trans_unlock_put_all(list);
> if (contending) {
> ret = contending->ops->slow_lock(contending, intr);
> if (!ret && contending->relax_unlock)
> contending->ops->unlock(contending);
> if (ret == -EAGAIN)
> ret = 0;
> contending->ops->put(contending);
> }
>
> return ret;
> }
>
>
> #endif /* _TRANSACTION_LOCKLIST_H_ */
>
>
tl;dr: DMA buffers aren't normal memory; expecting that you can use
them like that (that calling get_user_pages works, or that they're
accounted like any other normal memory) cannot be guaranteed.
Since some userspace only runs on integrated devices, where all
buffers are actually all resident system memory, there's a huge
temptation to assume that a struct page is always present and useable
like for any more pagecache backed mmap. This has the potential to
result in a uapi nightmare.
To close this gap, require that DMA buffer mmaps are VM_PFNMAP, which
blocks get_user_pages and all the other struct page based
infrastructure for everyone. In spirit this is the uapi counterpart to
the kernel-internal CONFIG_DMABUF_DEBUG.
Motivated by a recent patch which wanted to switch the system dma-buf
heap to vm_insert_page instead of vm_insert_pfn.
v2:
Jason brought up that we also want to guarantee that all ptes have the
pte_special flag set, to catch fast get_user_pages (on architectures
that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
From auditing the various functions that insert pfn pte entries
(vm_insert_pfn_prot, remap_pfn_range and all its callers like
dma_mmap_wc), it looks like VM_PFNMAP is already required anyway, so
this should be the correct flag to check for.
References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-Wb…
Acked-by: Christian König <christian.koenig(a)amd.com>
Cc: Jason Gunthorpe <jgg(a)ziepe.ca>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: John Stultz <john.stultz(a)linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
---
drivers/dma-buf/dma-buf.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f264b70c383e..06cb1d2e9fdc 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -127,6 +127,7 @@ static struct file_system_type dma_buf_fs_type = {
static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
{
struct dma_buf *dmabuf;
+ int ret;
if (!is_dma_buf_file(file))
return -EINVAL;
@@ -142,7 +143,11 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+ return ret;
}
static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@@ -1244,6 +1249,8 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
unsigned long pgoff)
{
+ int ret;
+
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
@@ -1264,7 +1271,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
vma_set_file(vma, dmabuf->file);
vma->vm_pgoff = pgoff;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+ return ret;
}
EXPORT_SYMBOL_GPL(dma_buf_mmap);
--
2.31.0