On 31.01.21 at 18:39, Joe Perches wrote:
> On Wed, 2021-02-03 at 14:26 +0100, Christian König wrote:
>> On 30.01.21 at 19:47, Joe Perches wrote:
>>> On Mon, 2020-08-24 at 21:56 -0700, Joe Perches wrote:
>>>> Use semicolons and braces.
>>> Ping?
>>>> Signed-off-by: Joe Perches <joe(a)perches.com>
>> Reviewed-by: Christian König <christian.koenig(a)amd.com>
>>
>> Do you have commit rights to drm-misc-next?
> No.
Pushed.
Thanks for the help,
Christian.
>
>>>> ---
>>>> drivers/dma-buf/st-dma-fence.c | 7 +++++--
>>>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/dma-buf/st-dma-fence.c b/drivers/dma-buf/st-dma-fence.c
>>>> index e593064341c8..c8a12d7ad71a 100644
>>>> --- a/drivers/dma-buf/st-dma-fence.c
>>>> +++ b/drivers/dma-buf/st-dma-fence.c
>>>> @@ -471,8 +471,11 @@ static int thread_signal_callback(void *arg)
>>>> dma_fence_signal(f1);
>>>>
>>>> smp_store_mb(cb.seen, false);
>>>> - if (!f2 || dma_fence_add_callback(f2, &cb.cb, simple_callback))
>>>> - miss++, cb.seen = true;
>>>> + if (!f2 ||
>>>> + dma_fence_add_callback(f2, &cb.cb, simple_callback)) {
>>>> + miss++;
>>>> + cb.seen = true;
>>>> + }
>>>>
>>>> if (!t->before)
>>>> dma_fence_signal(f1);
>
On Thu, Jan 28, 2021 at 08:53:25AM +0100, Michal Hocko wrote:
> On Wed 27-01-21 12:42:45, Minchan Kim wrote:
> > On Tue, Jan 26, 2021 at 08:44:49AM +0100, Michal Hocko wrote:
> > > On Mon 25-01-21 11:33:36, Minchan Kim wrote:
> > > > On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> > > > > On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > > > > > Contiguous memory allocation can be stalled due to waiting
> > > > > > on page writeback and/or page lock which causes unpredictable
> > > > > > delay. It's a unavoidable cost for the requestor to get *big*
> > > > > > contiguous memory but it's expensive for *small* contiguous
> > > > > > memory(e.g., order-4) because caller could retry the request
> > > > > > in different range where would have easy migratable pages
> > > > > > without stalling.
> > > > > >
> > > > > > This patch introduce __GFP_NORETRY as compaction gfp_mask in
> > > > > > alloc_contig_range so it will fail fast without blocking
> > > > > > when it encounters pages needed waiting.
> > > > >
> > > > > I am not against controling how hard this allocator tries with gfp mask
> > > > > but this changelog is rather void on any data and any user.
> > > > >
> > > > > It is also rather dubious to have retries when then caller says to not
> > > > > retry.
> > > >
> > > > Since max_tries is 1 with ++tries, it shouldn't retry.
> > >
> > > OK, I have missed that. This is a tricky code. ASYNC mode should be
> > > completely orthogonal to the retries count. Those are different things.
> > > Page allocator does an explicit bail out based on __GFP_NORETRY. You
> > > should be doing the same.
> >
> > Before sending next revision, let me check this part again.
> >
> > I want to use __GFP_NORETRY to indicate "opportunistic-easy-to-fail attempt"
> > and I want to use ASYNC migrate_mode to help the goal.
> >
> > Do you see the problem?
>
> No, as I've said. This is a normal NORETRY policy. And ASYNC migration
> is a mere implementation detail you do not have bother your users about.
> This is the semantic view. From the implementation POV it should be the
> gfp mask to drive decisions rather than a random (ASYNC) flag to control
> retries as you did here.
Makes sense.
Let me cook the next revision.
Thanks for the review, Michal.
On Tue, Jan 26, 2021 at 08:44:49AM +0100, Michal Hocko wrote:
> On Mon 25-01-21 11:33:36, Minchan Kim wrote:
> > On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> > > On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > > > Contiguous memory allocation can be stalled due to waiting
> > > > on page writeback and/or page lock which causes unpredictable
> > > > delay. It's a unavoidable cost for the requestor to get *big*
> > > > contiguous memory but it's expensive for *small* contiguous
> > > > memory(e.g., order-4) because caller could retry the request
> > > > in different range where would have easy migratable pages
> > > > without stalling.
> > > >
> > > > This patch introduce __GFP_NORETRY as compaction gfp_mask in
> > > > alloc_contig_range so it will fail fast without blocking
> > > > when it encounters pages needed waiting.
> > >
> > > I am not against controling how hard this allocator tries with gfp mask
> > > but this changelog is rather void on any data and any user.
> > >
> > > It is also rather dubious to have retries when then caller says to not
> > > retry.
> >
> > Since max_tries is 1 with ++tries, it shouldn't retry.
>
> OK, I have missed that. This is a tricky code. ASYNC mode should be
> completely orthogonal to the retries count. Those are different things.
> Page allocator does an explicit bail out based on __GFP_NORETRY. You
> should be doing the same.
Before sending the next revision, let me check this part again.
I want to use __GFP_NORETRY to indicate an "opportunistic-easy-to-fail attempt"
and I want to use the ASYNC migrate_mode to help achieve that goal.
Do you see a problem with that?
On Tue, Jan 26, 2021 at 08:38:08AM +0100, Michal Hocko wrote:
> On Mon 25-01-21 11:42:34, Minchan Kim wrote:
> > On Mon, Jan 25, 2021 at 02:07:01PM +0100, Michal Hocko wrote:
> > > On Thu 21-01-21 09:54:59, Minchan Kim wrote:
> > > > The upcoming patch will introduce __GFP_NORETRY semantic
> > > > in alloc_contig_range which is a failfast mode of the API.
> > > > Instead of adding a additional parameter for gfp, replace
> > > > no_warn with gfp flag.
> > > >
> > > > To keep old behaviors, it follows the rule below.
> > > >
> > > > no_warn gfp_flags
> > > >
> > > > false GFP_KERNEL
> > > > true GFP_KERNEL|__GFP_NOWARN
> > > > gfp & __GFP_NOWARN GFP_KERNEL | (gfp & __GFP_NOWARN)
> > > >
> > > > Reviewed-by: Suren Baghdasaryan <surenb(a)google.com>
> > > > Signed-off-by: Minchan Kim <minchan(a)kernel.org>
> > > [...]
> > > > diff --git a/mm/cma.c b/mm/cma.c
> > > > index 0ba69cd16aeb..d50627686fec 100644
> > > > --- a/mm/cma.c
> > > > +++ b/mm/cma.c
> > > > @@ -419,13 +419,13 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
> > > > * @cma: Contiguous memory region for which the allocation is performed.
> > > > * @count: Requested number of pages.
> > > > * @align: Requested alignment of pages (in PAGE_SIZE order).
> > > > - * @no_warn: Avoid printing message about failed allocation
> > > > + * @gfp_mask: GFP mask to use during the cma allocation.
> > >
> > > Call out supported gfp flags explicitly. Have a look at kvmalloc_node
> > > for a guidance.
> >
> > How about this?
> >
> >
> > diff --git a/mm/cma.c b/mm/cma.c
> > index d50627686fec..b94727b694d6 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -423,6 +423,10 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
> > *
> > * This function allocates part of contiguous memory on specific
> > * contiguous memory area.
> > + *
> > + * For gfp_mask, GFP_KERNEL and __GFP_NORETRY are supported. __GFP_NORETRY
> > + * will avoid costly operations (e.g., waiting on page_writeback and locking)
> > + * in the current implementation during page migration.
>
> rather than explicitly mentioning what the flag implies I think it would
> be more useful to state the intended usecase. See how kvmalloc_node says
> "__GFP_RETRY_MAYFAIL is supported, and it should be used only if kmalloc is
> preferable to the vmalloc fallback, due to visible performance
> drawbacks.
> __GFP_NOWARN is also supported to suppress allocation failure messages."
>
> This would help people not familiar with internals to see whether this
> flag is a good fit for them.
>
> In this case I woul go with
> "
> @flags: gfp mask. Must be compatible (superset) with GFP_KERNEL.
> [...]
> Reclaim modifiers (__GFP_RETRY_MAYFAIL, __GFP_NOFAIL) are not supported.
> __GFP_NORETRY is supported, and it should be used for opportunistic
> allocation attempts that should rather fail quickly when the caller has
> a fallback strategy.
> "
>
> Obviously for this patch you will go with a simple statement that
> Reclaim modifiers are not supported at all.
After more discussion about gfp_flags in the thread of the next patch, let me
change it a bit more based on that.
Thanks for the suggestion, Michal.
On Tue, Jan 26, 2021 at 08:46:05AM +0100, Michal Hocko wrote:
> On Thu 21-01-21 09:55:02, Minchan Kim wrote:
> > From: Hyesoo Yu <hyesoo.yu(a)samsung.com>
> >
> > This patch supports chunk heap that allocates the buffers that
> > arranged into a list a fixed size chunks taken from CMA.
> >
> > The chunk heap driver is bound directly to a reserved_memory
> > node by following Rob Herring's suggestion in [1].
> >
> > [1] https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/…
>
> Who is using this allocator in the kernel?
Userspace uses the memory by mapping it via dmabuf.
On Tue, Jan 26, 2021 at 08:44:49AM +0100, Michal Hocko wrote:
> On Mon 25-01-21 11:33:36, Minchan Kim wrote:
> > On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> > > On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > > > Contiguous memory allocation can be stalled due to waiting
> > > > on page writeback and/or page lock which causes unpredictable
> > > > delay. It's a unavoidable cost for the requestor to get *big*
> > > > contiguous memory but it's expensive for *small* contiguous
> > > > memory(e.g., order-4) because caller could retry the request
> > > > in different range where would have easy migratable pages
> > > > without stalling.
> > > >
> > > > This patch introduce __GFP_NORETRY as compaction gfp_mask in
> > > > alloc_contig_range so it will fail fast without blocking
> > > > when it encounters pages needed waiting.
> > >
> > > I am not against controling how hard this allocator tries with gfp mask
> > > but this changelog is rather void on any data and any user.
> > >
> > > It is also rather dubious to have retries when then caller says to not
> > > retry.
> >
> > Since max_tries is 1 with ++tries, it shouldn't retry.
>
> OK, I have missed that. This is a tricky code. ASYNC mode should be
> completely orthogonal to the retries count. Those are different things.
> Page allocator does an explicit bail out based on __GFP_NORETRY. You
> should be doing the same.
A concern with __GFP_NOWAIT is that, regardless of the flags passed to
cma_alloc, the internal implementation of alloc_contig_range will use
blockable operations. See __alloc_contig_migrate_range.
If we go with __GFP_NOWAIT, we should propagate the gfp_mask into
__alloc_contig_migrate_range to make cma_alloc consistent with alloc_pages
(IIUC, that's what you want - make the gfp_mask consistent between cma_alloc
and alloc_pages), but I am worried that direction will complicate things,
since CMA involves a migration context as well as a target page allocation
context. Sometimes a single gfp flag can be troublesome for expressing both
contexts at once.
>
> > >
> > > Also why didn't you consider GFP_NOWAIT semantic for non blocking mode?
> >
> > GFP_NOWAIT seems to be low(specific) flags rather than the one I want to
> > express. Even though I said only page writeback/lock in the description,
> > the goal is to avoid costly operations we might find later so such
> > "failfast", I thought GFP_NORETRY would be good fit.
>
> I suspect you are too focused on implementation details here. Think
> about the indended semantic. Callers of this functionality will not
> think about those (I hope because if they rely on these details then the
> whole thing will become unmaintainable because any change would require
> an audit of all existing users). All you should be caring about is to
> control how expensive the call can be. GFP_NOWAIT is not really low
> level from that POV. It gives you a very lightweight non-sleeping
> attempt to allocate. GFP_NORETRY will give you potentially sleeping but
> an opportunistic-easy-to-fail attempt. And so on. See how that is
> absolutely free of any page writeback or any specific locking.
For the reason I mentioned above, I wanted to express __GFP_NORETRY as an
"opportunistic-easy-to-fail attempt" to support cma_alloc as "failfast"
for the migration context.
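To make that concrete, here is a rough sketch (untested, just my reading of
the suggestion; the helper name is made up) of letting the gfp mask drive the
bail-out in __alloc_contig_migrate_range directly, instead of deriving it
from MIGRATE_ASYNC:

	/*
	 * Sketch only: the retry policy is decided by cc->gfp_mask, so
	 * __GFP_NORETRY means a single opportunistic migration attempt.
	 */
	static bool should_give_up(struct compact_control *cc, unsigned int tries)
	{
		if (cc->gfp_mask & __GFP_NORETRY)
			return tries >= 1;	/* one failfast attempt only */
		return tries >= 5;		/* current default retry policy */
	}

with "} else if (should_give_up(cc, ++tries)) {" replacing the existing
"++tries == 5" check.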
> --
> Michal Hocko
> SUSE Labs
On Mon, Jan 25, 2021 at 02:07:01PM +0100, Michal Hocko wrote:
> On Thu 21-01-21 09:54:59, Minchan Kim wrote:
> > The upcoming patch will introduce __GFP_NORETRY semantic
> > in alloc_contig_range which is a failfast mode of the API.
> > Instead of adding a additional parameter for gfp, replace
> > no_warn with gfp flag.
> >
> > To keep old behaviors, it follows the rule below.
> >
> > no_warn gfp_flags
> >
> > false GFP_KERNEL
> > true GFP_KERNEL|__GFP_NOWARN
> > gfp & __GFP_NOWARN GFP_KERNEL | (gfp & __GFP_NOWARN)
> >
> > Reviewed-by: Suren Baghdasaryan <surenb(a)google.com>
> > Signed-off-by: Minchan Kim <minchan(a)kernel.org>
> [...]
> > diff --git a/mm/cma.c b/mm/cma.c
> > index 0ba69cd16aeb..d50627686fec 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -419,13 +419,13 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
> > * @cma: Contiguous memory region for which the allocation is performed.
> > * @count: Requested number of pages.
> > * @align: Requested alignment of pages (in PAGE_SIZE order).
> > - * @no_warn: Avoid printing message about failed allocation
> > + * @gfp_mask: GFP mask to use during the cma allocation.
>
> Call out supported gfp flags explicitly. Have a look at kvmalloc_node
> for a guidance.
How about this?
diff --git a/mm/cma.c b/mm/cma.c
index d50627686fec..b94727b694d6 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -423,6 +423,10 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
*
* This function allocates part of contiguous memory on specific
* contiguous memory area.
+ *
+ * For gfp_mask, GFP_KERNEL and __GFP_NORETRY are supported. __GFP_NORETRY
+ * will avoid costly operations (e.g., waiting on page_writeback and locking)
+ * in the current implementation during page migration.
*/
struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
gfp_t gfp_mask)
On Mon, Jan 25, 2021 at 02:12:00PM +0100, Michal Hocko wrote:
> On Thu 21-01-21 09:55:00, Minchan Kim wrote:
> > Contiguous memory allocation can be stalled due to waiting
> > on page writeback and/or page lock which causes unpredictable
> > delay. It's a unavoidable cost for the requestor to get *big*
> > contiguous memory but it's expensive for *small* contiguous
> > memory(e.g., order-4) because caller could retry the request
> > in different range where would have easy migratable pages
> > without stalling.
> >
> > This patch introduce __GFP_NORETRY as compaction gfp_mask in
> > alloc_contig_range so it will fail fast without blocking
> > when it encounters pages needed waiting.
>
> I am not against controling how hard this allocator tries with gfp mask
> but this changelog is rather void on any data and any user.
>
> It is also rather dubious to have retries when then caller says to not
> retry.
Since max_tries is 1 with ++tries, it shouldn't retry.
>
> Also why didn't you consider GFP_NOWAIT semantic for non blocking mode?
GFP_NOWAIT seems to be a lower-level (more specific) flag than the one I want
to express. Even though I mentioned only page writeback/lock in the
description, the goal is to avoid costly operations we might find later, so
for such a "failfast" semantic I thought GFP_NORETRY would be a good fit.
>
> > Signed-off-by: Minchan Kim <minchan(a)kernel.org>
> > ---
> > mm/page_alloc.c | 8 ++++++--
> > 1 file changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index b031a5ae0bd5..1cdc3ee0b22e 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -8491,12 +8491,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> > unsigned int nr_reclaimed;
> > unsigned long pfn = start;
> > unsigned int tries = 0;
> > + unsigned int max_tries = 5;
> > int ret = 0;
> > struct migration_target_control mtc = {
> > .nid = zone_to_nid(cc->zone),
> > .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> > };
> >
> > + if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
> > + max_tries = 1;
> > +
> > migrate_prep();
> >
> > while (pfn < end || !list_empty(&cc->migratepages)) {
> > @@ -8513,7 +8517,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> > break;
> > }
> > tries = 0;
> > - } else if (++tries == 5) {
> > + } else if (++tries == max_tries) {
> > ret = ret < 0 ? ret : -EBUSY;
> > break;
> > }
> > @@ -8564,7 +8568,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
> > .nr_migratepages = 0,
> > .order = -1,
> > .zone = page_zone(pfn_to_page(start)),
> > - .mode = MIGRATE_SYNC,
> > + .mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
> > .ignore_skip_hint = true,
> > .no_set_skip_hint = true,
> > .gfp_mask = current_gfp_context(gfp_mask),
> > --
> > 2.30.0.296.g2bfb1c46d8-goog
>
> --
> Michal Hocko
> SUSE Labs
Requested by Thomas. I think it justifies a new level, since I tried
to make some forward progress on this last summer, and gave up (for
now). This is very tricky.
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Maxime Ripard <mripard(a)kernel.org>
Cc: Thomas Zimmermann <tzimmermann(a)suse.de>
Cc: David Airlie <airlied(a)linux.ie>
Cc: Daniel Vetter <daniel(a)ffwll.ch>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
---
Documentation/gpu/todo.rst | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index dea9082c0e5f..f872d3d33218 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -23,6 +23,9 @@ Advanced: Tricky tasks that need fairly good understanding of the DRM subsystem
and graphics topics. Generally need the relevant hardware for development and
testing.
+Expert: Only attempt these if you've successfully completed some tricky
+refactorings already and are an expert in the specific area
+
Subsystem-wide refactorings
===========================
@@ -168,6 +171,22 @@ Contact: Daniel Vetter, respective driver maintainers
Level: Advanced
+Move Buffer Object Locking to dma_resv_lock()
+---------------------------------------------
+
+Many drivers have their own per-object locking scheme, usually using
+mutex_lock(). This causes all kinds of trouble for buffer sharing, since
+depending on which driver is the exporter and importer, the locking hierarchy is
+reversed.
+
+To solve this we need one standard per-object locking mechanism, which is
+dma_resv_lock(). This lock needs to be called as the outermost lock, with all
+other driver specific per-object locks removed. The problem is that rolling out
+the actual change to the locking contract is a flag day, due to struct dma_buf
+buffer sharing.
+
+Level: Expert
+
Convert logging to drm_* functions with drm_device paramater
------------------------------------------------------------
--
2.30.0
This is rather overkill since currently all drivers call this from
hardirq (or at least timers). But maybe in the future we're going to
have threaded irq handlers and whatnot, so it doesn't hurt to be prepared.
Plus this is an easy start for sprinkling these fence annotations into
shared code.
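For reference, the annotation is just a matched pair around the section that
is allowed to signal fences, e.g. (shape only, same calls as in the diff
below):

	bool fence_cookie;

	fence_cookie = dma_fence_begin_signalling();
	/* code that may signal dma_fences, but must not wait on them */
	dma_fence_end_signalling(fence_cookie);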
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Cc: linux-rdma(a)vger.kernel.org
Cc: amd-gfx(a)lists.freedesktop.org
Cc: intel-gfx(a)lists.freedesktop.org
Cc: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Christian König <christian.koenig(a)amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
---
drivers/gpu/drm/drm_vblank.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
index 30912d8f82a5..f2aeb9bf325f 100644
--- a/drivers/gpu/drm/drm_vblank.c
+++ b/drivers/gpu/drm/drm_vblank.c
@@ -24,6 +24,7 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
+#include <linux/dma-fence.h>
#include <linux/export.h>
#include <linux/kthread.h>
#include <linux/moduleparam.h>
@@ -1922,7 +1923,7 @@ bool drm_handle_vblank(struct drm_device *dev, unsigned int pipe)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
unsigned long irqflags;
- bool disable_irq;
+ bool disable_irq, fence_cookie;
if (drm_WARN_ON_ONCE(dev, !drm_dev_has_vblank(dev)))
return false;
@@ -1930,6 +1931,8 @@ bool drm_handle_vblank(struct drm_device *dev, unsigned int pipe)
if (drm_WARN_ON(dev, pipe >= dev->num_crtcs))
return false;
+ fence_cookie = dma_fence_begin_signalling();
+
spin_lock_irqsave(&dev->event_lock, irqflags);
/* Need timestamp lock to prevent concurrent execution with
@@ -1942,6 +1945,7 @@ bool drm_handle_vblank(struct drm_device *dev, unsigned int pipe)
if (!vblank->enabled) {
spin_unlock(&dev->vblank_time_lock);
spin_unlock_irqrestore(&dev->event_lock, irqflags);
+ dma_fence_end_signalling(fence_cookie);
return false;
}
@@ -1968,6 +1972,8 @@ bool drm_handle_vblank(struct drm_device *dev, unsigned int pipe)
if (disable_irq)
vblank_disable_fn(&vblank->disable_timer);
+ dma_fence_end_signalling(fence_cookie);
+
return true;
}
EXPORT_SYMBOL(drm_handle_vblank);
--
2.30.0
This is a bit disappointing since we need to split the annotations
over all the different parts.
I was considering just leaking the critical section into the
->atomic_commit_tail callback of each driver. But that would mean we
need to pass the fence_cookie into each driver (there's a total of 13
implementations of this hook right now), so that would be a bad flag day. And
also a bit of a leaky abstraction.
Hence just do it function-by-function.
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Cc: linux-rdma(a)vger.kernel.org
Cc: amd-gfx(a)lists.freedesktop.org
Cc: intel-gfx(a)lists.freedesktop.org
Cc: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Christian König <christian.koenig(a)amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
---
drivers/gpu/drm/drm_atomic_helper.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 4a66768b6057..69121d2925bd 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -1567,6 +1567,7 @@ EXPORT_SYMBOL(drm_atomic_helper_wait_for_flip_done);
void drm_atomic_helper_commit_tail(struct drm_atomic_state *old_state)
{
struct drm_device *dev = old_state->dev;
+ bool fence_cookie = dma_fence_begin_signalling();
drm_atomic_helper_commit_modeset_disables(dev, old_state);
@@ -1578,6 +1579,8 @@ void drm_atomic_helper_commit_tail(struct drm_atomic_state *old_state)
drm_atomic_helper_commit_hw_done(old_state);
+ dma_fence_end_signalling(fence_cookie);
+
drm_atomic_helper_wait_for_vblanks(dev, old_state);
drm_atomic_helper_cleanup_planes(dev, old_state);
@@ -1597,6 +1600,7 @@ EXPORT_SYMBOL(drm_atomic_helper_commit_tail);
void drm_atomic_helper_commit_tail_rpm(struct drm_atomic_state *old_state)
{
struct drm_device *dev = old_state->dev;
+ bool fence_cookie = dma_fence_begin_signalling();
drm_atomic_helper_commit_modeset_disables(dev, old_state);
@@ -1609,6 +1613,8 @@ void drm_atomic_helper_commit_tail_rpm(struct drm_atomic_state *old_state)
drm_atomic_helper_commit_hw_done(old_state);
+ dma_fence_end_signalling(fence_cookie);
+
drm_atomic_helper_wait_for_vblanks(dev, old_state);
drm_atomic_helper_cleanup_planes(dev, old_state);
@@ -1624,6 +1630,9 @@ static void commit_tail(struct drm_atomic_state *old_state)
ktime_t start;
s64 commit_time_ms;
unsigned int i, new_self_refresh_mask = 0;
+ bool fence_cookie;
+
+ fence_cookie = dma_fence_begin_signalling();
funcs = dev->mode_config.helper_private;
@@ -1652,6 +1661,8 @@ static void commit_tail(struct drm_atomic_state *old_state)
if (new_crtc_state->self_refresh_active)
new_self_refresh_mask |= BIT(i);
+ dma_fence_end_signalling(fence_cookie);
+
if (funcs && funcs->atomic_commit_tail)
funcs->atomic_commit_tail(old_state);
else
@@ -1810,6 +1821,7 @@ int drm_atomic_helper_commit(struct drm_device *dev,
bool nonblock)
{
int ret;
+ bool fence_cookie;
if (state->async_update) {
ret = drm_atomic_helper_prepare_planes(dev, state);
@@ -1832,6 +1844,8 @@ int drm_atomic_helper_commit(struct drm_device *dev,
if (ret)
return ret;
+ fence_cookie = dma_fence_begin_signalling();
+
if (!nonblock) {
ret = drm_atomic_helper_wait_for_fences(dev, state, true);
if (ret)
@@ -1869,6 +1883,7 @@ int drm_atomic_helper_commit(struct drm_device *dev,
*/
drm_atomic_state_get(state);
+ dma_fence_end_signalling(fence_cookie);
if (nonblock)
queue_work(system_unbound_wq, &state->commit_work);
else
@@ -1877,6 +1892,7 @@ int drm_atomic_helper_commit(struct drm_device *dev,
return 0;
err:
+ dma_fence_end_signalling(fence_cookie);
drm_atomic_helper_cleanup_planes(dev, state);
return ret;
}
--
2.30.0
This patchset introduces a new dma heap, "chunk-heap", that makes it
easy to perform bulk allocation of high-order pages.
It has been created to help optimize 4K/8K HDR video playback with
secure DRM HW that protects contents in memory. The HW needs
physically contiguous memory chunks (e.g., 64K) up to several hundred
MB of memory.
To make such high-order bulk allocations work, chunk-heap uses a CMA
area. To avoid long CMA allocation stalls on blocking pages (e.g.,
page writeback and/or page locking), it uses the failfast mode of the
CMA API (i.e., __GFP_NORETRY) so it will continue to look for easily
migratable pages in different pageblocks without stalling. As a last
resort, it allows blocking only if it could not find the available
memory otherwise.
The first two patches introduce the failfast mode as __GFP_NORETRY
in alloc_contig_range and allow it to be used from the CMA API.
The third patch introduces device tree syntax for chunk-heap to bind
a specific CMA area to chunk-heap.
Finally, the last patch implements chunk-heap as a dma-buf heap.
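To make the failfast/fallback split concrete, this is roughly the intended
allocation strategy (sketch only; the helper name is made up and the real
chunk_heap.c may structure this differently):

	static struct page *chunk_heap_alloc_chunk(struct cma *cma,
						   size_t nr_pages,
						   unsigned int align)
	{
		struct page *page;

		/* failfast pass: skip pages under writeback/lock */
		page = cma_alloc(cma, nr_pages, align,
				 GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
		if (page)
			return page;

		/* last resort: allow blocking to make forward progress */
		return cma_alloc(cma, nr_pages, align, GFP_KERNEL);
	}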
* since v2 -
* introduce gfp_mask with __GFP_NORETRY on cma_alloc - mhocko
* do not export CMA APIs - Christoph
* use compatible string for DT instead of dma-heap specific property - Hridya
* Since v1 - https://lore.kernel.org/linux-mm/20201117181935.3613581-1-minchan@kernel.or…
* introduce alloc_contig_mode - David
* use default CMA instead of device tree - John
Hyesoo Yu (2):
dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
dma-buf: heaps: add chunk heap to dmabuf heaps
Minchan Kim (2):
mm: cma: introduce gfp flag in cma_alloc instead of no_warn
mm: failfast mode with __GFP_NORETRY in alloc_contig_range
.../reserved-memory/dma_heap_chunk.yaml | 58 +++
drivers/dma-buf/heaps/Kconfig | 8 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/chunk_heap.c | 477 ++++++++++++++++++
drivers/dma-buf/heaps/cma_heap.c | 2 +-
drivers/s390/char/vmcp.c | 2 +-
include/linux/cma.h | 2 +-
kernel/dma/contiguous.c | 3 +-
mm/cma.c | 12 +-
mm/cma_debug.c | 2 +-
mm/hugetlb.c | 6 +-
mm/page_alloc.c | 8 +-
mm/secretmem.c | 3 +-
13 files changed, 568 insertions(+), 16 deletions(-)
create mode 100644 Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
create mode 100644 drivers/dma-buf/heaps/chunk_heap.c
--
2.30.0.284.gd98b1dd5eaa7-goog
On Fri, Jan 15, 2021 at 06:22:23PM +0000, Zack Rusin wrote:
>
> > On Jan 15, 2021, at 13:12, Lee Jones <lee.jones(a)linaro.org> wrote:
> >
> > This set is part of a larger effort attempting to clean-up W=1
> > kernel builds, which are currently overwhelmingly riddled with
> > niggly little warnings.
> >
> > Penultimate set, promise. :)
>
>
> Thank you for all that work. For all the vmwgfx bits:
> Reviewed-by: Zack Rusin <zackr(a)vmware.com>
I pulled all the non-vmwgfx patches to drm-misc-next, I'll leave the vmw
stuff to Zack.
Thanks for your work here!
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
We have too many importers abusing the struct page they can get at but
really shouldn't. Aside from that, the backing page might simply not
exist (for dynamic p2p mappings); looking at it and using it e.g. for
mmap can also wreck the page handling of the exporter completely.
Importers really must go through the proper interfaces like
dma_buf_mmap for everything.
I'm semi-tempted to enforce this for dynamic importers since those
really have no excuse at all to break the rules.
Unfortunately we can't store the right pointers somewhere safe to make
sure we oops on something recognizable, so best is to just wrangle
them a bit by flipping all the bits. At least on x86 kernel addresses
have all their high bits set and the struct page array is fairly low
in the kernel mapping, so flipping all the bits gives us a very high
pointer in userspace and hence excellent chances for an invalid
dereference.
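A tiny illustration of the trick (not the patch code itself): XOR with
~0xffUL flips every bit except the low byte, so the chain/end markers kept
in the low bits of sg->page_link survive, and applying the same XOR again
restores the original value.

	unsigned long link = 0xffff888000001002UL;	/* hypothetical page_link */
	unsigned long mixed = link ^ ~0xffUL;		/* bogus, userspace-looking pointer */
	unsigned long back = mixed ^ ~0xffUL;		/* == link again, low byte untouched */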
v2: Add a note to the @map_dma_buf hook that exporters shouldn't do
fancy caching tricks, which would blow up with this address scrambling
trick here (Chris)
Enable by default when CONFIG_DMA_API_DEBUG is enabled.
Reviewed-by: Chris Wilson <chris(a)chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: David Stevens <stevensd(a)chromium.org>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
---
drivers/dma-buf/Kconfig | 8 +++++++
drivers/dma-buf/dma-buf.c | 49 +++++++++++++++++++++++++++++++++++----
include/linux/dma-buf.h | 6 +++++
3 files changed, 59 insertions(+), 4 deletions(-)
diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index 4f8224a6ac95..4e16c71c24b7 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -50,6 +50,14 @@ config DMABUF_MOVE_NOTIFY
This is marked experimental because we don't yet have a consistent
execution context and memory management between drivers.
+config DMABUF_DEBUG
+ bool "DMA-BUF debug checks"
+ default y if DMA_API_DEBUG
+ help
+ This option enables additional checks for DMA-BUF importers and
+ exporters. Specifically it validates that importers do not peek at the
+ underlying struct page when they import a buffer.
+
config DMABUF_SELFTESTS
tristate "Selftests for the dma-buf interfaces"
default n
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 1c9bd51db110..6e4725f7dfde 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -666,6 +666,30 @@ void dma_buf_put(struct dma_buf *dmabuf)
}
EXPORT_SYMBOL_GPL(dma_buf_put);
+static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
+ enum dma_data_direction direction)
+{
+ struct sg_table *sg_table;
+
+ sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
+
+#if CONFIG_DMABUF_DEBUG
+ if (sg_table) {
+ int i;
+ struct scatterlist *sg;
+
+ /* To catch abuse of the underlying struct page by importers mix
+ * up the bits, but take care to preserve the low SG_ bits to
+ * not corrupt the sgt. The mixing is undone in __unmap_dma_buf
+ * before passing the sgt back to the exporter. */
+ for_each_sgtable_sg(sg_table, sg, i)
+ sg->page_link ^= ~0xffUL;
+ }
+#endif
+
+ return sg_table;
+}
+
/**
* dma_buf_dynamic_attach - Add the device to dma_buf's attachments list
* @dmabuf: [in] buffer to attach device to.
@@ -737,7 +761,7 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
goto err_unlock;
}
- sgt = dmabuf->ops->map_dma_buf(attach, DMA_BIDIRECTIONAL);
+ sgt = __map_dma_buf(attach, DMA_BIDIRECTIONAL);
if (!sgt)
sgt = ERR_PTR(-ENOMEM);
if (IS_ERR(sgt)) {
@@ -784,6 +808,23 @@ struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
}
EXPORT_SYMBOL_GPL(dma_buf_attach);
+static void __unmap_dma_buf(struct dma_buf_attachment *attach,
+ struct sg_table *sg_table,
+ enum dma_data_direction direction)
+{
+
+#if CONFIG_DMABUF_DEBUG
+ if (sg_table) {
+ int i;
+ struct scatterlist *sg;
+
+ for_each_sgtable_sg(sg_table, sg, i)
+ sg->page_link ^= ~0xffUL;
+ }
+#endif
+ attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, direction);
+}
+
/**
* dma_buf_detach - Remove the given attachment from dmabuf's attachments list
* @dmabuf: [in] buffer to detach from.
@@ -802,7 +843,7 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
if (dma_buf_is_dynamic(attach->dmabuf))
dma_resv_lock(attach->dmabuf->resv, NULL);
- dmabuf->ops->unmap_dma_buf(attach, attach->sgt, attach->dir);
+ __unmap_dma_buf(attach, attach->sgt, attach->dir);
if (dma_buf_is_dynamic(attach->dmabuf)) {
dma_buf_unpin(attach);
@@ -924,7 +965,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
}
}
- sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
+ sg_table = __map_dma_buf(attach, direction);
if (!sg_table)
sg_table = ERR_PTR(-ENOMEM);
@@ -987,7 +1028,7 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
if (dma_buf_is_dynamic(attach->dmabuf))
dma_resv_assert_held(attach->dmabuf->resv);
- attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, direction);
+ __unmap_dma_buf(attach, sg_table, direction);
if (dma_buf_is_dynamic(attach->dmabuf) &&
!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY))
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 628681bf6c99..efdc56b9d95f 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -154,6 +154,12 @@ struct dma_buf_ops {
* On failure, returns a negative error value wrapped into a pointer.
* May also return -EINTR when a signal was received while being
* blocked.
+ *
+ * Note that exporters should not try to cache the scatter list, or
+ * return the same one for multiple calls. Caching is done either by the
+ * DMA-BUF code (for non-dynamic importers) or the importer. Ownership
+ * of the scatter list is transferred to the caller, and returned by
+ * @unmap_dma_buf.
*/
struct sg_table * (*map_dma_buf)(struct dma_buf_attachment *,
enum dma_data_direction);
--
2.29.2
On Tue, Jan 12, 2021 at 02:11:24PM +0100, Thomas Zimmermann wrote:
> Hi
>
> > On 11.01.21 at 17:50, Daniel Vetter wrote:
> > On Fri, Jan 08, 2021 at 10:43:31AM +0100, Thomas Zimmermann wrote:
> > > Implementations of the vmap/vunmap GEM callbacks may perform pinning
> > > of the BO and may acquire the associated reservation object's lock.
> > > Callers that only require a mapping of the contained memory can thus
> > > interfere with other tasks that require exact pinning, such as scanout.
> > > This is less of an issue with private SHMEM buffers, but may happen
> > > with imported ones.
> > >
> > > Therefore provide the new interfaces drm_gem_shmem_vmap_local() and
> > > drm_gem_shmem_vunmap_local(), which only perform the vmap/vunmap
> > > operations. Callers have to hold the reservation lock while the mapping
> > > persists.
> > >
> > > This patch also connects GEM SHMEM helpers to GEM object functions with
> > > equivalent functionality.
> > >
> > > v4:
> > > * call dma_buf_{vmap,vunmap}_local() where necessary (Daniel)
> > > * move driver changes into separate patches (Daniel)
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann(a)suse.de>
> > > ---
> > > drivers/gpu/drm/drm_gem_shmem_helper.c | 90 +++++++++++++++++++++++---
> > > include/drm/drm_gem_shmem_helper.h | 2 +
> > > 2 files changed, 84 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > index 9825c378dfa6..298832b2b43b 100644
> > > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > @@ -32,6 +32,8 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
> > > .get_sg_table = drm_gem_shmem_get_sg_table,
> > > .vmap = drm_gem_shmem_vmap,
> > > .vunmap = drm_gem_shmem_vunmap,
> > > + .vmap_local = drm_gem_shmem_vmap_local,
> > > + .vunmap_local = drm_gem_shmem_vunmap_local,
> > > .mmap = drm_gem_shmem_mmap,
> > > };
> > > @@ -261,7 +263,8 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
> > > }
> > > EXPORT_SYMBOL(drm_gem_shmem_unpin);
> > > -static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
> > > +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map,
> > > + bool local)
> >
> > This is a bit spaghetti and also has the problem that we're not changing
> > shmem->vmap_use_count under different locks, depending upon which path
> > we're taking.
> >
> > I think the cleanest would be if we pull the if (import_attach) case out
> > of the _locked() version completely, for all cases, and also outside of
> > the shmem->vmap_lock. This means no caching of vmaps in the shmem layer
> > anymore for imported buffers, but this is no longer a problem: We cache
> > them in the exporters instead (I think at least, if not maybe need to fix
> > that where it's expensive).
>
> If we do that, what protects shmem->vaddr from concurrent access near line
> 281? would it be kept NULL then?
>
> Also, we have some stats in debugfs (see drm_gem_shmem_print_info) which
> would be incorrect (or misleading at least).
We'd need to disable all that for pass-through vmap of imported objects.
> Given all that, would it be possible to remove vmap_lock in favor of taking
> the resv lock in vmap/vunmap?
All possible (and imo long-term desirable), the trouble is in rolling it
out. I've looked at rolling out dma_resv as the one and only lock for
shmem helpers before, and gave up. Exynos is the worst (but not the only)
offender:
- it has its own per-object lock
- that per-object lock is taken most often before calling into various
vfuncs, which means for a gradual transition the dma_resv lock would
nest within that existing per-object lock (until we've completely
replaced it)
- but exynos also uses dma_resv already as an outermost lock in its
command submission path
iow as soon as you add dma_resv_lock anywhere in shmem helpers, we've
angered lockdep with a deadlock.
That means the only path I think is feasible is adding dma_resv lock to
all drivers paths first, _outside_ of any existing driver specific
per-object locks. Then remove the driver-specific object locks, and only
then can we sprinkle dma_resv_assert_locked all over shmem helpers.
Ofc any driver without per-driver locks of its own could directly switch
over to the dma_resv lock, but until we've converted over all the drivers with
their own locking, shmem helpers would be stuck where they are right now.
I gave up :-/ But maybe if you only try to tackle vmap it might be
feasible, since there are a lot fewer callers.
Cheers, Daniel
>
> Best regards
> Thomas
>
> >
> > Other option would be to unly pull it out for the _vmap_local case, but
> > that's a bit ugly because no longer symmetrical in the various paths.
> >
> > > {
> > > struct drm_gem_object *obj = &shmem->base;
> > > int ret = 0;
> > > @@ -272,7 +275,10 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct
> > > }
> > > if (obj->import_attach) {
> > > - ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> > > + if (local)
> > > + ret = dma_buf_vmap_local(obj->import_attach->dmabuf, map);
> > > + else
> > > + ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> > > if (!ret) {
> > > if (WARN_ON(map->is_iomem)) {
> > > ret = -EIO;
> > > @@ -313,7 +319,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct
> > > return ret;
> > > }
> > > -/*
> > > +/**
> > > * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> > > * @shmem: shmem GEM object
> > > * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> > > @@ -339,15 +345,53 @@ int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> > > ret = mutex_lock_interruptible(&shmem->vmap_lock);
> > > if (ret)
> > > return ret;
> > > - ret = drm_gem_shmem_vmap_locked(shmem, map);
> > > + ret = drm_gem_shmem_vmap_locked(shmem, map, false);
> > > mutex_unlock(&shmem->vmap_lock);
> > > return ret;
> > > }
> > > EXPORT_SYMBOL(drm_gem_shmem_vmap);
> > > +/**
> > > + * drm_gem_shmem_vmap_local - Create a virtual mapping for a shmem GEM object
> > > + * @shmem: shmem GEM object
> > > + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> > > + * store.
> > > + *
> > > + * This function makes sure that a contiguous kernel virtual address mapping
> > > + * exists for the buffer backing the shmem GEM object.
> > > + *
> > > + * The function is called with the BO's reservation object locked. Callers must
> > > + * hold the lock until after unmapping the buffer.
> > > + *
> > > + * This function can be used to implement &drm_gem_object_funcs.vmap_local. But
> > > + * it can also be called by drivers directly, in which case it will hide the
> > > + * differences between dma-buf imported and natively allocated objects.
> >
> > So for the other callbacks I tried to make sure we have different entry
> > points for this, since it's not really the same thing and because of the
> > locking mess we have with dma_resv_lock vs various pre-existing local
> > locking scheme, it's easy to get a mess.
> >
> > I think the super clean version here would be to also export just the
> > internal stuff for the ->v(un)map_local hooks, but that's maybe a bit too
> > much boilerplate for no real gain.
> > -Daniel
> >
> > > + *
> > > + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap_local().
> > > + *
> > > + * Returns:
> > > + * 0 on success or a negative error code on failure.
> > > + */
> > > +int drm_gem_shmem_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map)
> > > +{
> > > + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> > > + int ret;
> > > +
> > > + dma_resv_assert_held(obj->resv);
> > > +
> > > + ret = mutex_lock_interruptible(&shmem->vmap_lock);
> > > + if (ret)
> > > + return ret;
> > > + ret = drm_gem_shmem_vmap_locked(shmem, map, true);
> > > + mutex_unlock(&shmem->vmap_lock);
> > > +
> > > + return ret;
> > > +}
> > > +EXPORT_SYMBOL(drm_gem_shmem_vmap_local);
> > > +
> > > static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> > > - struct dma_buf_map *map)
> > > + struct dma_buf_map *map, bool local)
> > > {
> > > struct drm_gem_object *obj = &shmem->base;
> > > @@ -358,7 +402,10 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> > > return;
> > > if (obj->import_attach)
> > > - dma_buf_vunmap(obj->import_attach->dmabuf, map);
> > > + if (local)
> > > + dma_buf_vunmap_local(obj->import_attach->dmabuf, map);
> > > + else
> > > + dma_buf_vunmap(obj->import_attach->dmabuf, map);
> > > else
> > > vunmap(shmem->vaddr);
> > > @@ -366,7 +413,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> > > drm_gem_shmem_put_pages(shmem);
> > > }
> > > -/*
> > > +/**
> > > * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object
> > > * @shmem: shmem GEM object
> > > * @map: Kernel virtual address where the SHMEM GEM object was mapped
> > > @@ -384,11 +431,38 @@ void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> > > struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> > > mutex_lock(&shmem->vmap_lock);
> > > - drm_gem_shmem_vunmap_locked(shmem, map);
> > > + drm_gem_shmem_vunmap_locked(shmem, map, false);
> > > mutex_unlock(&shmem->vmap_lock);
> > > }
> > > EXPORT_SYMBOL(drm_gem_shmem_vunmap);
> > > +/**
> > > + * drm_gem_shmem_vunmap_local - Unmap a virtual mapping fo a shmem GEM object
> > > + * @shmem: shmem GEM object
> > > + * @map: Kernel virtual address where the SHMEM GEM object was mapped
> > > + *
> > > + * This function cleans up a kernel virtual address mapping acquired by
> > > + * drm_gem_shmem_vmap_local(). The mapping is only removed when the use count
> > > + * drops to zero.
> > > + *
> > > + * The function is called with the BO's reservation object locked.
> > > + *
> > > + * This function can be used to implement &drm_gem_object_funcs.vmap_local.
> > > + * But it can also be called by drivers directly, in which case it will hide
> > > + * the differences between dma-buf imported and natively allocated objects.
> > > + */
> > > +void drm_gem_shmem_vunmap_local(struct drm_gem_object *obj, struct dma_buf_map *map)
> > > +{
> > > + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> > > +
> > > + dma_resv_assert_held(obj->resv);
> > > +
> > > + mutex_lock(&shmem->vmap_lock);
> > > + drm_gem_shmem_vunmap_locked(shmem, map, true);
> > > + mutex_unlock(&shmem->vmap_lock);
> > > +}
> > > +EXPORT_SYMBOL(drm_gem_shmem_vunmap_local);
> > > +
> > > struct drm_gem_shmem_object *
> > > drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
> > > struct drm_device *dev, size_t size,
> > > diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> > > index 434328d8a0d9..3f59bdf749aa 100644
> > > --- a/include/drm/drm_gem_shmem_helper.h
> > > +++ b/include/drm/drm_gem_shmem_helper.h
> > > @@ -114,7 +114,9 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> > > int drm_gem_shmem_pin(struct drm_gem_object *obj);
> > > void drm_gem_shmem_unpin(struct drm_gem_object *obj);
> > > int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> > > +int drm_gem_shmem_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map);
> > > void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> > > +void drm_gem_shmem_vunmap_local(struct drm_gem_object *obj, struct dma_buf_map *map);
> > > int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
> > > --
> > > 2.29.2
> > >
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Tue, Jan 12, 2021 at 08:54:02AM +0100, Thomas Zimmermann wrote:
> Hi
>
> > On 11.01.21 at 18:06, Daniel Vetter wrote:
> > On Fri, Jan 08, 2021 at 10:43:38AM +0100, Thomas Zimmermann wrote:
> > > Cursor updates in vboxvideo require a short-term mapping of the
> > > source BO. Use drm_gem_vram_vmap_local() and avoid the pinning
> > > operations.
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann(a)suse.de>
> >
> > All these drivers patches break the dma_resv_lock vs
> > dma_fence_begin/end_signalling nesting rules, so this doesn't work.
> >
> > Generally this is what the prepare/cleanup_fb hooks are for, that's where
> > mappings (including vmaps) are meant to be set up, permanently.
> >
> > I'm kinda not clear on why we need all these changes, I thought the
> > locking problem is just in the fb helper paths, because it's outside of
> > the atomic path and could conflict with an atomic update at the same time?
> > So only that one should get the vmap_local treatment, everything else
> > should keep the normal vmap treatment.
>
> Kind of responding to all your comment on the driver changes:
>
> These drivers only require short-term mappings, so using vmap_local would be
> the natural choice. For SHMEM helpers, it's mostly a cosmetic thing. For
> VRAM helpers, I was hoping to remove the vmap/vunmap helpers entirely. One
> cannot really map the BOs for the long-term, so not having the helpers at
> all would make sense.
>
> But reading all your comments on the driver patches, I'd rather not update
> the drivers here but later convert them to use prepare_fb/cleanup_fb in the
> correct way.
Ack from me on this plan. I think I got all the other patches with an r-b
or ack?
-Daniel
>
> Best regards
> Thomas
>
> > -Daniel
> > > ---
> > > drivers/gpu/drm/vboxvideo/vbox_mode.c | 15 +++++++++------
> > > 1 file changed, 9 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> > > index dbc0dd53c69e..215b37c78c10 100644
> > > --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> > > +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> > > @@ -381,7 +381,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> > > container_of(plane->dev, struct vbox_private, ddev);
> > > struct vbox_crtc *vbox_crtc = to_vbox_crtc(plane->state->crtc);
> > > struct drm_framebuffer *fb = plane->state->fb;
> > > - struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]);
> > > + struct drm_gem_object *obj = fb->obj[0];
> > > + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj);
> > > u32 width = plane->state->crtc_w;
> > > u32 height = plane->state->crtc_h;
> > > size_t data_size, mask_size;
> > > @@ -401,11 +402,12 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> > > vbox_crtc->cursor_enabled = true;
> > > - ret = drm_gem_vram_vmap(gbo, &map);
> > > + ret = dma_resv_lock(obj->resv, NULL);
> > > + if (ret)
> > > + return;
> > > + ret = drm_gem_vram_vmap_local(gbo, &map);
> > > if (ret) {
> > > - /*
> > > - * BUG: we should have pinned the BO in prepare_fb().
> > > - */
> > > + dma_resv_unlock(obj->resv);
> > > mutex_unlock(&vbox->hw_mutex);
> > > DRM_WARN("Could not map cursor bo, skipping update\n");
> > > return;
> > > @@ -421,7 +423,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> > > data_size = width * height * 4 + mask_size;
> > > copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> > > - drm_gem_vram_vunmap(gbo, &map);
> > > + drm_gem_vram_vunmap_local(gbo, &map);
> > > + dma_resv_unlock(obj->resv);
> > > flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
> > > VBOX_MOUSE_POINTER_ALPHA;
> > > --
> > > 2.29.2
> > >
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Fri, Jan 08, 2021 at 10:43:38AM +0100, Thomas Zimmermann wrote:
> Cursor updates in vboxvideo require a short-term mapping of the
> source BO. Use drm_gem_vram_vmap_local() and avoid the pinning
> operations.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann(a)suse.de>
All these driver patches break the dma_resv_lock vs
dma_fence_begin/end_signalling nesting rules, so this doesn't work.
Generally this is what the prepare/cleanup_fb hooks are for, that's where
mappings (including vmaps) are meant to be set up, permanently.
I'm kinda not clear on why we need all these changes, I thought the
locking problem is just in the fb helper paths, because it's outside of
the atomic path and could conflict with an atomic update at the same time?
So only that one should get the vmap_local treatment, everything else
should keep the normal vmap treatment.
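Roughly like this (sketch only, untested; vbox_cursor_map stands in for a
struct dma_buf_map kept wherever the driver stores its per-plane state, which
is an assumption, not code from this series):

	static int vbox_cursor_prepare_fb(struct drm_plane *plane,
					  struct drm_plane_state *new_state)
	{
		struct drm_gem_vram_object *gbo;

		if (!new_state->fb)
			return 0;
		gbo = drm_gem_vram_of_gem(new_state->fb->obj[0]);
		/* map (and pin) once here instead of in atomic_update */
		return drm_gem_vram_vmap(gbo, &vbox_cursor_map);
	}

	static void vbox_cursor_cleanup_fb(struct drm_plane *plane,
					   struct drm_plane_state *old_state)
	{
		struct drm_gem_vram_object *gbo;

		if (!old_state->fb)
			return;
		gbo = drm_gem_vram_of_gem(old_state->fb->obj[0]);
		drm_gem_vram_vunmap(gbo, &vbox_cursor_map);
	}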
-Daniel
> ---
> drivers/gpu/drm/vboxvideo/vbox_mode.c | 15 +++++++++------
> 1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> index dbc0dd53c69e..215b37c78c10 100644
> --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> @@ -381,7 +381,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> container_of(plane->dev, struct vbox_private, ddev);
> struct vbox_crtc *vbox_crtc = to_vbox_crtc(plane->state->crtc);
> struct drm_framebuffer *fb = plane->state->fb;
> - struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]);
> + struct drm_gem_object *obj = fb->obj[0];
> + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj);
> u32 width = plane->state->crtc_w;
> u32 height = plane->state->crtc_h;
> size_t data_size, mask_size;
> @@ -401,11 +402,12 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>
> vbox_crtc->cursor_enabled = true;
>
> - ret = drm_gem_vram_vmap(gbo, &map);
> + ret = dma_resv_lock(obj->resv, NULL);
> + if (ret)
> + return;
> + ret = drm_gem_vram_vmap_local(gbo, &map);
> if (ret) {
> - /*
> - * BUG: we should have pinned the BO in prepare_fb().
> - */
> + dma_resv_unlock(obj->resv);
> mutex_unlock(&vbox->hw_mutex);
> DRM_WARN("Could not map cursor bo, skipping update\n");
> return;
> @@ -421,7 +423,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
> data_size = width * height * 4 + mask_size;
>
> copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> - drm_gem_vram_vunmap(gbo, &map);
> + drm_gem_vram_vunmap_local(gbo, &map);
> + dma_resv_unlock(obj->resv);
>
> flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
> VBOX_MOUSE_POINTER_ALPHA;
> --
> 2.29.2
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Fri, Jan 08, 2021 at 10:43:31AM +0100, Thomas Zimmermann wrote:
> Implementations of the vmap/vunmap GEM callbacks may perform pinning
> of the BO and may acquire the associated reservation object's lock.
> Callers that only require a mapping of the contained memory can thus
> interfere with other tasks that require exact pinning, such as scanout.
> This is less of an issue with private SHMEM buffers, but may happen
> with imported ones.
>
> Therefore provide the new interfaces drm_gem_shmem_vmap_local() and
> drm_gem_shmem_vunmap_local(), which only perform the vmap/vunmap
> operations. Callers have to hold the reservation lock while the mapping
> persists.
>
> This patch also connects GEM SHMEM helpers to GEM object functions with
> equivalent functionality.
>
> v4:
> * call dma_buf_{vmap,vunmap}_local() where necessary (Daniel)
> * move driver changes into separate patches (Daniel)
>
> Signed-off-by: Thomas Zimmermann <tzimmermann(a)suse.de>
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 90 +++++++++++++++++++++++---
> include/drm/drm_gem_shmem_helper.h | 2 +
> 2 files changed, 84 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 9825c378dfa6..298832b2b43b 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -32,6 +32,8 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
> .get_sg_table = drm_gem_shmem_get_sg_table,
> .vmap = drm_gem_shmem_vmap,
> .vunmap = drm_gem_shmem_vunmap,
> + .vmap_local = drm_gem_shmem_vmap_local,
> + .vunmap_local = drm_gem_shmem_vunmap_local,
> .mmap = drm_gem_shmem_mmap,
> };
>
> @@ -261,7 +263,8 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
> }
> EXPORT_SYMBOL(drm_gem_shmem_unpin);
>
> -static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
> +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map,
> + bool local)
This is a bit spaghetti and also has the problem that we're not changing
shmem->vmap_use_count under different locks, depending upon which path
we're taking.
I think the cleanest would be if we pull the if (import_attach) case out
of the _locked() version completely, for all cases, and also outside of
the shmem->vmap_lock. This means no caching of vmaps in the shmem layer
anymore for imported buffers, but this is no longer a problem: We cache
them in the exporters instead (I think at least, if not maybe need to fix
that where it's expensive).
Other option would be to unly pull it out for the _vmap_local case, but
that's a bit ugly because no longer symmetrical in the various paths.
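Roughly what I mean, as a sketch (untested):

	int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
	{
		struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
		int ret;

		/* imported buffers: no caching here, defer to the exporter */
		if (obj->import_attach)
			return dma_buf_vmap(obj->import_attach->dmabuf, map);

		ret = mutex_lock_interruptible(&shmem->vmap_lock);
		if (ret)
			return ret;
		ret = drm_gem_shmem_vmap_locked(shmem, map);
		mutex_unlock(&shmem->vmap_lock);

		return ret;
	}

with the import_attach handling (and the local flag) dropped from
drm_gem_shmem_vmap_locked(), so the _locked() path only ever deals with
natively allocated objects.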
> {
> struct drm_gem_object *obj = &shmem->base;
> int ret = 0;
> @@ -272,7 +275,10 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct
> }
>
> if (obj->import_attach) {
> - ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> + if (local)
> + ret = dma_buf_vmap_local(obj->import_attach->dmabuf, map);
> + else
> + ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> if (!ret) {
> if (WARN_ON(map->is_iomem)) {
> ret = -EIO;
> @@ -313,7 +319,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct
> return ret;
> }
>
> -/*
> +/**
> * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> @@ -339,15 +345,53 @@ int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> ret = mutex_lock_interruptible(&shmem->vmap_lock);
> if (ret)
> return ret;
> - ret = drm_gem_shmem_vmap_locked(shmem, map);
> + ret = drm_gem_shmem_vmap_locked(shmem, map, false);
> mutex_unlock(&shmem->vmap_lock);
>
> return ret;
> }
> EXPORT_SYMBOL(drm_gem_shmem_vmap);
>
> +/**
> + * drm_gem_shmem_vmap_local - Create a virtual mapping for a shmem GEM object
> + * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + * store.
> + *
> + * This function makes sure that a contiguous kernel virtual address mapping
> + * exists for the buffer backing the shmem GEM object.
> + *
> + * The function is called with the BO's reservation object locked. Callers must
> + * hold the lock until after unmapping the buffer.
> + *
> + * This function can be used to implement &drm_gem_object_funcs.vmap_local. But
> + * it can also be called by drivers directly, in which case it will hide the
> + * differences between dma-buf imported and natively allocated objects.
So for the other callbacks I tried to make sure we have different entry
points for this, since it's not really the same thing, and because of the
locking mess we have with dma_resv_lock vs various pre-existing local
locking schemes, it's easy to get a mess.
I think the super clean version here would be to also export just the
internal stuff for the ->v(un)map_local hooks, but that's maybe a bit too
much boilerplate for no real gain.
-Daniel
> + *
> + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap_local().
> + *
> + * Returns:
> + * 0 on success or a negative error code on failure.
> + */
> +int drm_gem_shmem_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map)
> +{
> + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> + int ret;
> +
> + dma_resv_assert_held(obj->resv);
> +
> + ret = mutex_lock_interruptible(&shmem->vmap_lock);
> + if (ret)
> + return ret;
> + ret = drm_gem_shmem_vmap_locked(shmem, map, true);
> + mutex_unlock(&shmem->vmap_lock);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL(drm_gem_shmem_vmap_local);
> +
> static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> - struct dma_buf_map *map)
> + struct dma_buf_map *map, bool local)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> @@ -358,7 +402,10 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> return;
>
> if (obj->import_attach)
> - dma_buf_vunmap(obj->import_attach->dmabuf, map);
> + if (local)
> + dma_buf_vunmap_local(obj->import_attach->dmabuf, map);
> + else
> + dma_buf_vunmap(obj->import_attach->dmabuf, map);
> else
> vunmap(shmem->vaddr);
>
> @@ -366,7 +413,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> drm_gem_shmem_put_pages(shmem);
> }
>
> -/*
> +/**
> * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object
> * @shmem: shmem GEM object
> * @map: Kernel virtual address where the SHMEM GEM object was mapped
> @@ -384,11 +431,38 @@ void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> mutex_lock(&shmem->vmap_lock);
> - drm_gem_shmem_vunmap_locked(shmem, map);
> + drm_gem_shmem_vunmap_locked(shmem, map, false);
> mutex_unlock(&shmem->vmap_lock);
> }
> EXPORT_SYMBOL(drm_gem_shmem_vunmap);
>
> +/**
> + * drm_gem_shmem_vunmap_local - Unmap a virtual mapping fo a shmem GEM object
> + * @shmem: shmem GEM object
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
> + *
> + * This function cleans up a kernel virtual address mapping acquired by
> + * drm_gem_shmem_vmap_local(). The mapping is only removed when the use count
> + * drops to zero.
> + *
> + * The function is called with the BO's reservation object locked.
> + *
> + * This function can be used to implement &drm_gem_object_funcs.vmap_local.
> + * But it can also be called by drivers directly, in which case it will hide
> + * the differences between dma-buf imported and natively allocated objects.
> + */
> +void drm_gem_shmem_vunmap_local(struct drm_gem_object *obj, struct dma_buf_map *map)
> +{
> + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> +
> + dma_resv_assert_held(obj->resv);
> +
> + mutex_lock(&shmem->vmap_lock);
> + drm_gem_shmem_vunmap_locked(shmem, map, true);
> + mutex_unlock(&shmem->vmap_lock);
> +}
> +EXPORT_SYMBOL(drm_gem_shmem_vunmap_local);
> +
> struct drm_gem_shmem_object *
> drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
> struct drm_device *dev, size_t size,
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 434328d8a0d9..3f59bdf749aa 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -114,7 +114,9 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> int drm_gem_shmem_pin(struct drm_gem_object *obj);
> void drm_gem_shmem_unpin(struct drm_gem_object *obj);
> int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +int drm_gem_shmem_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map);
> void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_shmem_vunmap_local(struct drm_gem_object *obj, struct dma_buf_map *map);
>
> int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
>
> --
> 2.29.2
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch