On 31.08.21 05:44, guangming.cao(a)mediatek.com wrote:
> From: Guangming Cao <Guangming.Cao(a)mediatek.com>
>
>> On 30.08.21 12:01, guangming.cao(a)mediatek.com wrote:
>>> From: Guangming Cao <Guangming.Cao(a)mediatek.com>
>>>
>>> In the current flow, one dmabuf may get cache-synced many times if
>>> it has been mapped more than once.
>> Well I'm not an expert on DMA heaps, but this will most likely not work
>> correctly.
>>
> All attachments of one dmabuf are added into a list, so I think this means
> a dmabuf supports being mapped more than once. Could you tell me more about it?
Yes, that's correct, and all of those need to be synced as far as I know.
That is why dma_sync_sgtable_for_cpu() is intentionally called for each SG
table given out.
>>> Is there any case where the attachments of one dmabuf point to
>>> different memory? If not, it seems that syncing only once is better.
>> I think that this can happen, yes.
>>
>> Christian.
>>
> It seems this is a very special case on Android; if you don't mind, could
> you tell me more about it?
That might be the case; nevertheless, this change here is illegal from
the DMA API point of view as far as I can see.
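To illustrate, the current upstream loop already has the right shape:
every mapped attachment carries its own dma-mapped sg_table, and each of
those tables needs the sync (rough sketch only, the helper name is made
up):

static void system_heap_sync_all_for_cpu(struct system_heap_buffer *buffer,
					 enum dma_data_direction direction)
{
	struct dma_heap_attachment *a;

	mutex_lock(&buffer->lock);
	list_for_each_entry(a, &buffer->attachments, list) {
		if (!a->mapped)
			continue;
		/* each attachment was dma-mapped for a different device */
		dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
	}
	mutex_unlock(&buffer->lock);
}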
Regards,
Christian.
>
>>> Signed-off-by: Guangming Cao <Guangming.Cao(a)mediatek.com>
>>> ---
>>> drivers/dma-buf/heaps/system_heap.c | 14 ++++++++------
>>> 1 file changed, 8 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
>>> index 23a7e74ef966..909ef652a8c8 100644
>>> --- a/drivers/dma-buf/heaps/system_heap.c
>>> +++ b/drivers/dma-buf/heaps/system_heap.c
>>> @@ -162,9 +162,10 @@ static int system_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>> invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
>>>
>>> list_for_each_entry(a, &buffer->attachments, list) {
>>> - if (!a->mapped)
>>> - continue;
>>> - dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
>>> + if (a->mapped) {
>>> + dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
>>> + break;
>>> + }
>>> }
>>> mutex_unlock(&buffer->lock);
>>>
>>> @@ -183,9 +184,10 @@ static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>>> flush_kernel_vmap_range(buffer->vaddr, buffer->len);
>>>
>>> list_for_each_entry(a, &buffer->attachments, list) {
>>> - if (!a->mapped)
>>> - continue;
>>> - dma_sync_sgtable_for_device(a->dev, a->table, direction);
>>> + if (a->mapped) {
>>> + dma_sync_sgtable_for_device(a->dev, a->table, direction);
>>> + break;
>>> + }
>>> }
>>> mutex_unlock(&buffer->lock);
>>>
Specifically document the new/clarified rules around how the shared
fences do not have any ordering requirements against the exclusive
fence.
But also document all the things a bit better; given how central
struct dma_resv is to dynamic buffer management, the docs have been
very inadequate.
- Lots more links to other pieces of the puzzle. Unfortunately
ttm_buffer_object has no docs, so no links :-(
- Explain/complain a bit about dma_resv_locking_ctx(). I still don't
like that one, but fixing the ttm call chains is going to be
horrible. Plus we want to plug in real slowpath locking when we do
that anyway.
- Main part of the patch is some actual docs for struct dma_resv.
Overall I think we still have a lot of bad naming in this area (e.g.
dma_resv.fence is singular, but contains the multiple shared fences),
but I think that's more indicative of how the semantics and rules are
just not great.
Another thing that's really awkward is how chaining exclusive fences
right now means direct dma_resv.fence_excl pointer access with an
rcu_assign_pointer. Not so great either.
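For illustration, that chaining dance looks roughly like this (sketch
only, not part of the patch; refcounting of the old fence and the
seqcount bookkeeping are elided, dma_fence_chain_alloc() and
dma_resv_excl_fence() are assumed to be available, and the helper name
is made up):

static int add_chained_excl_fence(struct dma_resv *resv,
				  struct dma_fence *fence, u64 seqno)
{
	struct dma_fence_chain *chain = dma_fence_chain_alloc();

	if (!chain)
		return -ENOMEM;

	dma_resv_assert_held(resv);
	/* chain the previous exclusive fence to the new one */
	dma_fence_chain_init(chain, dma_fence_get(dma_resv_excl_fence(resv)),
			     dma_fence_get(fence), seqno);
	/* the raw pointer update complained about above */
	rcu_assign_pointer(resv->fence_excl, &chain->base);
	return 0;
}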
v2:
- Fix a pile of typos (Matt, Jason)
- Hammer it in that breaking the rules leads to use-after-free issues
around dma-buf sharing (Christian)
Reviewed-by: Christian König <christian.koenig(a)amd.com>
Cc: Jason Ekstrand <jason(a)jlekstrand.net>
Cc: Matthew Auld <matthew.auld(a)intel.com>
Reviewed-by: Matthew Auld <matthew.auld(a)intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
---
drivers/dma-buf/dma-resv.c | 24 ++++++---
include/linux/dma-buf.h | 7 +++
include/linux/dma-resv.h | 104 +++++++++++++++++++++++++++++++++++--
3 files changed, 124 insertions(+), 11 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index e744fd87c63c..84fbe60629e3 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -48,6 +48,8 @@
* write operations) or N shared fences (read operations). The RCU
* mechanism is used to protect read access to fences from locked
* write-side updates.
+ *
+ * See struct dma_resv for more details.
*/
DEFINE_WD_CLASS(reservation_ww_class);
@@ -137,7 +139,11 @@ EXPORT_SYMBOL(dma_resv_fini);
* @num_fences: number of fences we want to add
*
* Should be called before dma_resv_add_shared_fence(). Must
- * be called with obj->lock held.
+ * be called with @obj locked through dma_resv_lock().
+ *
+ * Note that the preallocated slots need to be re-reserved if @obj is unlocked
+ * at any time before calling dma_resv_add_shared_fence(). This is validated
+ * when CONFIG_DEBUG_MUTEXES is enabled.
*
* RETURNS
* Zero for success, or -errno
@@ -234,8 +240,10 @@ EXPORT_SYMBOL(dma_resv_reset_shared_max);
* @obj: the reservation object
* @fence: the shared fence to add
*
- * Add a fence to a shared slot, obj->lock must be held, and
+ * Add a fence to a shared slot, @obj must be locked with dma_resv_lock(), and
* dma_resv_reserve_shared() has been called.
+ *
+ * See also &dma_resv.fence for a discussion of the semantics.
*/
void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
{
@@ -278,9 +286,11 @@ EXPORT_SYMBOL(dma_resv_add_shared_fence);
/**
* dma_resv_add_excl_fence - Add an exclusive fence.
* @obj: the reservation object
- * @fence: the shared fence to add
+ * @fence: the exclusive fence to add
*
- * Add a fence to the exclusive slot. The obj->lock must be held.
+ * Add a fence to the exclusive slot. @obj must be locked with dma_resv_lock().
+ * Note that this function replaces all fences attached to @obj, see also
+ * &dma_resv.fence_excl for a discussion of the semantics.
*/
void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
{
@@ -609,9 +619,11 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
* fence
*
* Callers are not required to hold specific locks, but maybe hold
- * dma_resv_lock() already
+ * dma_resv_lock() already.
+ *
* RETURNS
- * true if all fences signaled, else false
+ *
+ * True if all fences signaled, else false.
*/
bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all)
{
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 678b2006be78..fc62b5f9980c 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -420,6 +420,13 @@ struct dma_buf {
* - Dynamic importers should set fences for any access that they can't
* disable immediately from their &dma_buf_attach_ops.move_notify
* callback.
+ *
+ * IMPORTANT:
+ *
+ * All drivers must obey the struct dma_resv rules, specifically the
+ * rules for updating fences, see &dma_resv.fence_excl and
+ * &dma_resv.fence. If these dependency rules are broken, access tracking
+ * can be lost, resulting in use-after-free issues.
*/
struct dma_resv *resv;
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index e1ca2080a1ff..9100dd3dc21f 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -62,16 +62,90 @@ struct dma_resv_list {
/**
* struct dma_resv - a reservation object manages fences for a buffer
- * @lock: update side lock
- * @seq: sequence count for managing RCU read-side synchronization
- * @fence_excl: the exclusive fence, if there is one currently
- * @fence: list of current shared fences
+ *
+ * There are multiple uses for this, with sometimes slightly different rules in
+ * how the fence slots are used.
+ *
+ * One use is to synchronize cross-driver access to a struct dma_buf, either for
+ * dynamic buffer management or just to handle implicit synchronization between
+ * different users of the buffer in userspace. See &dma_buf.resv for a more
+ * in-depth discussion.
+ *
+ * The other major use is to manage access and locking within a driver in a
+ * buffer based memory manager. struct ttm_buffer_object is the canonical
+ * example here, since this is where reservation objects originated from. But
+ * use in drivers is spreading and some drivers also manage struct
+ * drm_gem_object with the same scheme.
*/
struct dma_resv {
+ /**
+ * @lock:
+ *
+ * Update side lock. Don't use directly, instead use the wrapper
+ * functions like dma_resv_lock() and dma_resv_unlock().
+ *
+ * Drivers which use the reservation object to manage memory dynamically
+ * also use this lock to protect buffer object state like placement,
+ * allocation policies or throughout command submission.
+ */
struct ww_mutex lock;
+
+ /**
+ * @seq:
+ *
+ * Sequence count for managing RCU read-side synchronization, allows
+ * read-only access to @fence_excl and @fence while ensuring we take a
+ * consistent snapshot.
+ */
seqcount_ww_mutex_t seq;
+ /**
+ * @fence_excl:
+ *
+ * The exclusive fence, if there is one currently.
+ *
+ * There are two ways to update this fence:
+ *
+ * - First by calling dma_resv_add_excl_fence(), which replaces all
+ * fences attached to the reservation object. To guarantee that no
+ * fences are lost, this new fence must signal only after all previous
+ * fences, both shared and exclusive, have signalled. In some cases it
+ * is convenient to achieve that by attaching a struct dma_fence_array
+ * with all the new and old fences.
+ *
+ * - Alternatively the fence can be set directly, which leaves the
+ * shared fences unchanged. To guarantee that no fences are lost, this
+ * new fence must signal only after the previous exclusive fence has
+ * signalled. Since the shared fences are staying intact, it is not
+ * necessary to maintain any ordering against those. If semantically
+ * only a new access is added without actually treating the previous
+ * one as a dependency the exclusive fences can be strung together
+ * using struct dma_fence_chain.
+ *
+ * Note that the actual semantics of what an exclusive or shared fence
+ * mean are defined by the user; for reservation objects shared across
+ * drivers see &dma_buf.resv.
+ */
struct dma_fence __rcu *fence_excl;
+
+ /**
+ * @fence:
+ *
+ * List of current shared fences.
+ *
+ * There are no ordering constraints of shared fences against the
+ * exclusive fence slot. If a waiter needs to wait for all access, it
+ * has to wait for both sets of fences to signal.
+ *
+ * A new fence is added by calling dma_resv_add_shared_fence(). Since
+ * this often needs to be done past the point of no return in command
+ * submission it cannot fail, and therefore sufficient slots need to be
+ * reserved by calling dma_resv_reserve_shared().
+ *
+ * Note that the actual semantics of what an exclusive or shared fence
+ * mean are defined by the user; for reservation objects shared across
+ * drivers see &dma_buf.resv.
+ */
struct dma_resv_list __rcu *fence;
};
@@ -98,6 +172,13 @@ static inline void dma_resv_reset_shared_max(struct dma_resv *obj) {}
* undefined order, a #ww_acquire_ctx is passed to unwind if a cycle
* is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation
* object may be locked by itself by passing NULL as @ctx.
+ *
+ * When a die situation is indicated by returning -EDEADLK, all locks held by
+ * @ctx must be unlocked and then dma_resv_lock_slow() called on @obj.
+ *
+ * Unlocked by calling dma_resv_unlock().
+ *
+ * See also dma_resv_lock_interruptible() for the interruptible variant.
*/
static inline int dma_resv_lock(struct dma_resv *obj,
struct ww_acquire_ctx *ctx)
@@ -119,6 +200,12 @@ static inline int dma_resv_lock(struct dma_resv *obj,
* undefined order, a #ww_acquire_ctx is passed to unwind if a cycle
* is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation
* object may be locked by itself by passing NULL as @ctx.
+ *
+ * When a die situation is indicated by returning -EDEADLK, all locks held by
+ * @ctx must be unlocked and then dma_resv_lock_slow_interruptible() called on
+ * @obj.
+ *
+ * Unlocked by calling dma_resv_unlock().
*/
static inline int dma_resv_lock_interruptible(struct dma_resv *obj,
struct ww_acquire_ctx *ctx)
@@ -134,6 +221,8 @@ static inline int dma_resv_lock_interruptible(struct dma_resv *obj,
* Acquires the reservation object after a die case. This function
* will sleep until the lock becomes available. See dma_resv_lock() as
* well.
+ *
+ * See also dma_resv_lock_slow_interruptible() for the interruptible variant.
*/
static inline void dma_resv_lock_slow(struct dma_resv *obj,
struct ww_acquire_ctx *ctx)
@@ -167,7 +256,7 @@ static inline int dma_resv_lock_slow_interruptible(struct dma_resv *obj,
* if they overlap with a writer.
*
* Also note that since no context is provided, no deadlock protection is
- * possible.
+ * possible, which is also not needed for a trylock.
*
* Returns true if the lock was acquired, false otherwise.
*/
@@ -193,6 +282,11 @@ static inline bool dma_resv_is_locked(struct dma_resv *obj)
*
* Returns the context used to lock a reservation object or NULL if no context
* was used or the object is not locked at all.
+ *
+ * WARNING: This interface is pretty horrible, but TTM needs it because it
+ * doesn't pass the struct ww_acquire_ctx around in some very long callchains.
+ * Everyone else just uses it to check whether they're holding a reservation or
+ * not.
*/
static inline struct ww_acquire_ctx *dma_resv_locking_ctx(struct dma_resv *obj)
{
--
2.32.0
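Putting the documented rules together, a driver-side fence update
follows roughly this pattern (sketch only, the helper name is made up):

static int track_read_access(struct dma_resv *resv, struct dma_fence *fence,
			     struct ww_acquire_ctx *ctx)
{
	int ret;

	ret = dma_resv_lock(resv, ctx);
	if (ret == -EDEADLK) {
		/* back off: all other locks held in @ctx must be
		 * dropped before this slowpath acquire
		 */
		dma_resv_lock_slow(resv, ctx);
		ret = 0;
	}
	if (ret)
		return ret;

	/* reserve the slot before any point of no return */
	ret = dma_resv_reserve_shared(resv, 1);
	if (!ret)
		dma_resv_add_shared_fence(resv, fence);

	dma_resv_unlock(resv);
	return ret;
}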
Originally drm_sched_job_init() was the point of no return, after which
drivers must submit a job. I've split that up, which allows us to fix
this issue pretty easily.
The only thing we have to take care of is to not skip to the error paths
after that point. Other drivers do the same for out-fences and similar things.
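The resulting flow is roughly (sketch only, heavily abbreviated and
assuming the usual msm submit-path names; error handling elided):

	ret = drm_sched_job_init(&submit->base, queue->entity, queue);
	if (ret)
		return ret;		/* failing here is still fine */

	/* ... object lookups, pinning, anything that may fail ... */

	/* point of no return, we _have_ to submit no matter what */
	drm_sched_job_arm(&submit->base);

	/* from here on only record errors in ret, never bail out */
	drm_sched_entity_push_job(&submit->base, queue->entity);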
Fixes: 1d8a5ca436ee ("drm/msm: Conversion to drm scheduler")
Cc: Rob Clark <robdclark(a)chromium.org>
Cc: Rob Clark <robdclark(a)gmail.com>
Cc: Sean Paul <sean(a)poorly.run>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-arm-msm(a)vger.kernel.org
Cc: dri-devel(a)lists.freedesktop.org
Cc: freedreno(a)lists.freedesktop.org
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
---
drivers/gpu/drm/msm/msm_gem_submit.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 6d6c44f0e1f3..d0ed4ddc509e 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -52,9 +52,6 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
return ERR_PTR(ret);
}
- /* FIXME: this is way too early */
- drm_sched_job_arm(&job->base);
-
xa_init_flags(&submit->deps, XA_FLAGS_ALLOC);
kref_init(&submit->ref);
@@ -883,6 +880,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
submit->user_fence = dma_fence_get(&submit->base.s_fence->finished);
+ /* point of no return, we _have_ to submit no matter what */
+ drm_sched_job_arm(&submit->base);
+
/*
* Allocate an id which can be used by WAIT_FENCE ioctl to map back
* to the underlying fence.
@@ -892,17 +892,16 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
if (submit->fence_id < 0) {
ret = submit->fence_id = 0;
submit->fence_id = 0;
- goto out;
}
- if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
+ if (ret == 0 && args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
struct sync_file *sync_file = sync_file_create(submit->user_fence);
if (!sync_file) {
ret = -ENOMEM;
- goto out;
+ } else {
+ fd_install(out_fence_fd, sync_file->file);
+ args->fence_fd = out_fence_fd;
}
- fd_install(out_fence_fd, sync_file->file);
- args->fence_fd = out_fence_fd;
}
submit_attach_object_fences(submit);
--
2.32.0
On 2021-08-23 22:30, Christophe JAILLET wrote:
> The wrappers in include/linux/pci-dma-compat.h should go away.
>
> The patch has been generated with the coccinelle script below.
>
> @@
> expression e1, e2, e3, e4;
> @@
> - pci_free_consistent(e1, e2, e3, e4)
> + dma_free_coherent(&e1->dev, e2, e3, e4)
>
> Signed-off-by: Christophe JAILLET <christophe.jaillet(a)wanadoo.fr>
> ---
> If needed, see post from Christoph Hellwig on the kernel-janitors ML:
> https://marc.info/?l=kernel-janitors&m=158745678307186&w=4
>
> This has *NOT* been compile tested because I don't have the needed
> configuration.
> ---
> drivers/parport/parport_gsc.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/parport/parport_gsc.c b/drivers/parport/parport_gsc.c
> index 1e43b3f399a8..db912fa6b6df 100644
> --- a/drivers/parport/parport_gsc.c
> +++ b/drivers/parport/parport_gsc.c
> @@ -390,9 +390,8 @@ static int __exit parport_remove_chip(struct parisc_device *dev)
> if (p->irq != PARPORT_IRQ_NONE)
> free_irq(p->irq, p);
> if (priv->dma_buf)
> - pci_free_consistent(priv->dev, PAGE_SIZE,
> - priv->dma_buf,
> - priv->dma_handle);
> + dma_free_coherent(&priv->dev->dev, PAGE_SIZE,
> + priv->dma_buf, priv->dma_handle);
Hmm, seeing a free on its own made me wonder where the corresponding
alloc was, but on closer inspection it seems there isn't one. AFAICS
priv->dma_buf is only ever assigned with NULL (and priv->dev doesn't
seem to be assigned at all), so this could likely just be removed. In
fact it looks like all the references to DMA in this driver are just
copy-paste from parport_pc and unused.
Robin.
> kfree (p->private_data);
> parport_put_port(p);
> kfree (ops); /* hope no-one cached it */
>
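For reference, the compat wrapper being replaced here is essentially
just (from include/linux/pci-dma-compat.h, roughly):

static inline void
pci_free_consistent(struct pci_dev *hwdev, size_t size,
		    void *vaddr, dma_addr_t dma_handle)
{
	dma_free_coherent(hwdev == NULL ? NULL : &hwdev->dev,
			  size, vaddr, dma_handle);
}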
On Fri, Aug 20, 2021 at 9:23 PM Thomas Zimmermann <tzimmermann(a)suse.de> wrote:
> Hi
>
> On 20.08.21 17:45, syzbot wrote:
> > syzbot has bisected this issue to:
>
> Good bot!
>
> >
> > commit ea40d7857d5250e5400f38c69ef9e17321e9c4a2
> > Author: Daniel Vetter <daniel.vetter(a)ffwll.ch>
> > Date: Fri Oct 9 23:21:56 2020 +0000
> >
> > drm/vkms: fbdev emulation support
>
> Here's a guess.
>
> GEM SHMEM + fbdev emulation requires
> drm_mode_config.prefer_shadow_fbdev to be set to true. Otherwise,
> deferred I/O and SHMEM conflict over the use of page flags IIRC.
But we should only set up deferred I/O if fb->dirty is set, which vkms
doesn't set. So there must be something else funny going on here ...
no idea what, really.
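For reference, the shadow-fb (and hence deferred I/O) decision in
drm_fb_helper.c is roughly:

static bool drm_fbdev_use_shadow_fb(struct drm_fb_helper *fb_helper)
{
	struct drm_device *dev = fb_helper->dev;
	struct drm_framebuffer *fb = fb_helper->fb;

	return dev->mode_config.prefer_shadow_fbdev ||
	       dev->mode_config.prefer_shadow ||
	       fb->funcs->dirty;
}

Per the above, vkms sets none of these, so it shouldn't hit the shadow
path at all, hence the confusion.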
-Daniel
> From a quick grep, vkms doesn't set prefer_shadow_fbdev, and an alarming
> number of SHMEM-based drivers don't either.
>
> Best regards
> Thomas
>
> >
> > bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=11c31d55300000
> > start commit: 614cb2751d31 Merge tag 'trace-v5.14-rc6' of git://git.kern..
> > git tree: upstream
> > final oops: https://syzkaller.appspot.com/x/report.txt?x=13c31d55300000
> > console output: https://syzkaller.appspot.com/x/log.txt?x=15c31d55300000
> > kernel config: https://syzkaller.appspot.com/x/.config?x=96f0602203250753
> > dashboard link: https://syzkaller.appspot.com/bug?extid=91525b2bd4b5dff71619
> > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=122bce0e300000
> >
> > Reported-by: syzbot+91525b2bd4b5dff71619(a)syzkaller.appspotmail.com
> > Fixes: ea40d7857d52 ("drm/vkms: fbdev emulation support")
> >
> > For information about bisection process see: https://goo.gl/tpsmEJ#bisection
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Thu, Aug 19, 2021 at 12:53 PM Desmond Cheong Zhi Xi
<desmondcheongzx(a)gmail.com> wrote:
>
> On 18/8/21 7:02 pm, Daniel Vetter wrote:
> > On Wed, Aug 18, 2021 at 03:38:22PM +0800, Desmond Cheong Zhi Xi wrote:
> >> In a future patch, a read lock on drm_device.master_rwsem is
> >> held in the ioctl handler before the check for ioctl
> >> permissions. However, this produces the following lockdep splat:
> >>
> >> ======================================================
> >> WARNING: possible circular locking dependency detected
> >> 5.14.0-rc6-CI-Patchwork_20831+ #1 Tainted: G U
> >> ------------------------------------------------------
> >> kms_lease/1752 is trying to acquire lock:
> >> ffffffff827bad88 (drm_global_mutex){+.+.}-{3:3}, at: drm_open+0x64/0x280
> >>
> >> but task is already holding lock:
> >> ffff88812e350108 (&dev->master_rwsem){++++}-{3:3}, at:
> >> drm_ioctl_kernel+0xfb/0x1a0
> >>
> >> which lock already depends on the new lock.
> >>
> >> the existing dependency chain (in reverse order) is:
> >>
> >> -> #2 (&dev->master_rwsem){++++}-{3:3}:
> >> lock_acquire+0xd3/0x310
> >> down_read+0x3b/0x140
> >> drm_master_internal_acquire+0x1d/0x60
> >> drm_client_modeset_commit+0x10/0x40
> >> __drm_fb_helper_restore_fbdev_mode_unlocked+0x88/0xb0
> >> drm_fb_helper_set_par+0x34/0x40
> >> intel_fbdev_set_par+0x11/0x40 [i915]
> >> fbcon_init+0x270/0x4f0
> >> visual_init+0xc6/0x130
> >> do_bind_con_driver+0x1de/0x2c0
> >> do_take_over_console+0x10e/0x180
> >> do_fbcon_takeover+0x53/0xb0
> >> register_framebuffer+0x22d/0x310
> >> __drm_fb_helper_initial_config_and_unlock+0x36c/0x540
> >> intel_fbdev_initial_config+0xf/0x20 [i915]
> >> async_run_entry_fn+0x28/0x130
> >> process_one_work+0x26d/0x5c0
> >> worker_thread+0x37/0x390
> >> kthread+0x13b/0x170
> >> ret_from_fork+0x1f/0x30
> >>
> >> -> #1 (&helper->lock){+.+.}-{3:3}:
> >> lock_acquire+0xd3/0x310
> >> __mutex_lock+0xa8/0x930
> >> __drm_fb_helper_restore_fbdev_mode_unlocked+0x44/0xb0
> >> intel_fbdev_restore_mode+0x2b/0x50 [i915]
> >> drm_lastclose+0x27/0x50
> >> drm_release_noglobal+0x42/0x60
> >> __fput+0x9e/0x250
> >> task_work_run+0x6b/0xb0
> >> exit_to_user_mode_prepare+0x1c5/0x1d0
> >> syscall_exit_to_user_mode+0x19/0x50
> >> do_syscall_64+0x46/0xb0
> >> entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>
> >> -> #0 (drm_global_mutex){+.+.}-{3:3}:
> >> validate_chain+0xb39/0x1e70
> >> __lock_acquire+0x5a1/0xb70
> >> lock_acquire+0xd3/0x310
> >> __mutex_lock+0xa8/0x930
> >> drm_open+0x64/0x280
> >> drm_stub_open+0x9f/0x100
> >> chrdev_open+0x9f/0x1d0
> >> do_dentry_open+0x14a/0x3a0
> >> dentry_open+0x53/0x70
> >> drm_mode_create_lease_ioctl+0x3cb/0x970
> >> drm_ioctl_kernel+0xc9/0x1a0
> >> drm_ioctl+0x201/0x3d0
> >> __x64_sys_ioctl+0x6a/0xa0
> >> do_syscall_64+0x37/0xb0
> >> entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>
> >> other info that might help us debug this:
> >> Chain exists of:
> >> drm_global_mutex --> &helper->lock --> &dev->master_rwsem
> >> Possible unsafe locking scenario:
> >> CPU0 CPU1
> >> ---- ----
> >> lock(&dev->master_rwsem);
> >> lock(&helper->lock);
> >> lock(&dev->master_rwsem);
> >> lock(drm_global_mutex);
> >>
> >> *** DEADLOCK ***
> >>
> >> The lock hierarchy inversion happens because we grab the
> >> drm_global_mutex while already holding on to master_rwsem. To avoid
> >> this, we do some prep work to grab the drm_global_mutex before
> >> checking for ioctl permissions.
> >>
> >> At the same time, we update the check for the global mutex to use the
> >> drm_dev_needs_global_mutex helper function.
> >
> > This is intentional, essentially we force all non-legacy drivers to have
> > unlocked ioctls (otherwise everyone forgets to set that flag).
> >
> > For non-legacy drivers the global lock only ensures ordering between
> > drm_open and lastclose (I think at least), and between
> > drm_dev_register/unregister and the backwards ->load/unload callbacks
> > (which are called in the wrong place, but we cannot fix that for legacy
> > drivers).
> >
> > ->load/unload should be completely unused (maybe radeon still uses it),
> > and ->lastclose is also on the decline.
> >
>
> Ah ok got it, I'll change the check back to
> drm_core_check_feature(dev, DRIVER_LEGACY) then.
>
> > Maybe we should update the comment of drm_global_mutex to explain what it
> > protects and why.
> >
>
> The comments in drm_dev_needs_global_mutex make sense I think, I just
> didn't read the code closely enough.
>
> > I'm also confused how this patch connects to the splat, since for i915 we
>
> Right, my bad, this is a separate instance of circular locking. I was
> too hasty when I saw that for legacy drivers we might grab master_rwsem
> then drm_global_mutex in the ioctl handler.
>
> > shouldn't be taking the drm_global_lock here at all. The problem seems to
> > be the drm_open_helper when we create a new lease, which is an entirely
> > different can of worms.
> >
> > I'm honestly not sure how to best do that, but we should be able to create
> > a file and then call drm_open_helper directly, or well a version of that
> > which never takes the drm_global_mutex. Because that is not needed for
> > nested drm_file opening:
> > - legacy drivers never go down this path because leases are only supported
> > with modesetting, and modesetting is only supported for non-legacy
> > drivers
> > - the races against dev->open_count due to last_close or ->load callbacks
> > don't matter, because for the entire ioctl we already have an open
> > drm_file and that wont disappear.
> >
> > So this should work, but I'm not entirely sure how to make it work.
> > -Daniel
> >
>
> One idea that comes to mind is to change the outcome of
> drm_dev_needs_global_mutex while we're in the ioctl, but that requires
> more locking, which sounds like a bad idea.
>
> Another idea, which is quite messy but just for thought, is to push the
> master_rwsem read lock down:
Yeah, I think that's cleaner, and I think it should also work a lot
better for the other ioctls (see the sketch below):
- We don't need to flush readers anymore since we'll just take the
rwsem in write mode
- There are far fewer inversions, and maybe we could even get rid of
the spinlock, since at that point all readers should at least have the
rwsem read-locked.
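Sketch of that direction (hypothetical, the ioctl and helper names are
made up):

static int some_master_state_ioctl(struct drm_device *dev, void *data,
				   struct drm_file *file_priv)
{
	int ret;

	/* the ioctl takes the write lock itself instead of relying on
	 * drm_ioctl_kernel() read-locking around every ioctl
	 */
	down_write(&dev->master_rwsem);
	ret = update_master_state(dev, file_priv, data); /* made up */
	up_write(&dev->master_rwsem);

	return ret;
}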
>
> diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
> index 7f523e1c5650..5d05e744b728 100644
> --- a/drivers/gpu/drm/drm_ioctl.c
> +++ b/drivers/gpu/drm/drm_ioctl.c
> @@ -712,7 +712,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
> DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF(DRM_IOCTL_CRTC_GET_SEQUENCE, drm_crtc_get_sequence_ioctl, 0),
> DRM_IOCTL_DEF(DRM_IOCTL_CRTC_QUEUE_SEQUENCE, drm_crtc_queue_sequence_ioctl, 0),
> - DRM_IOCTL_DEF(DRM_IOCTL_MODE_CREATE_LEASE, drm_mode_create_lease_ioctl, DRM_MASTER),
> + DRM_IOCTL_DEF(DRM_IOCTL_MODE_CREATE_LEASE, drm_mode_create_lease_ioctl, 0),
> DRM_IOCTL_DEF(DRM_IOCTL_MODE_LIST_LESSEES, drm_mode_list_lessees_ioctl, DRM_MASTER),
> DRM_IOCTL_DEF(DRM_IOCTL_MODE_GET_LEASE, drm_mode_get_lease_ioctl, DRM_MASTER),
> DRM_IOCTL_DEF(DRM_IOCTL_MODE_REVOKE_LEASE, drm_mode_revoke_lease_ioctl, DRM_MASTER),
> diff --git a/drivers/gpu/drm/drm_lease.c b/drivers/gpu/drm/drm_lease.c
> index 983701198ffd..a25bc69522b4 100644
> --- a/drivers/gpu/drm/drm_lease.c
> +++ b/drivers/gpu/drm/drm_lease.c
> @@ -500,6 +500,19 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
> return -EINVAL;
> }
>
> + /* Clone the lessor file to create a new file for us */
> + DRM_DEBUG_LEASE("Allocating lease file\n");
> + lessee_file = file_clone_open(lessor_file);
> + if (IS_ERR(lessee_file))
> + return PTR_ERR(lessee_file);
> +
> + down_read(&dev->master_rwsem);
> +
> + if (!drm_is_current_master(lessor_priv)) {
> + ret = -EACCES;
> + goto out_file;
> + }
> +
> lessor = drm_file_get_master(lessor_priv);
> /* Do not allow sub-leases */
> if (lessor->lessor) {
> @@ -547,14 +560,6 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
> goto out_leases;
> }
>
> - /* Clone the lessor file to create a new file for us */
> - DRM_DEBUG_LEASE("Allocating lease file\n");
> - lessee_file = file_clone_open(lessor_file);
> - if (IS_ERR(lessee_file)) {
> - ret = PTR_ERR(lessee_file);
> - goto out_lessee;
> - }
> -
> lessee_priv = lessee_file->private_data;
> /* Change the file to a master one */
> drm_master_put(&lessee_priv->master);
> @@ -571,17 +576,19 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
> fd_install(fd, lessee_file);
>
> drm_master_put(&lessor);
> + up_read(&dev->master_rwsem);
> DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl succeeded\n");
> return 0;
>
> -out_lessee:
> - drm_master_put(&lessee);
> -
> out_leases:
> put_unused_fd(fd);
>
> out_lessor:
> drm_master_put(&lessor);
> +
> +out_file:
> + up_read(&dev->master_rwsem);
> + fput(lessee_file);
> DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl failed: %d\n", ret);
> return ret;
> }
>
>
> Something like this would also address the other deadlock we'd hit in
> drm_mode_create_lease_ioctl():
>
> drm_ioctl_kernel():
> down_read(&master_rwsem); <--- down_read()
> drm_mode_create_lease_ioctl():
> drm_lease_create():
> file_clone_open():
> ...
> drm_open():
> drm_open_helper():
> drm_master_open():
> down_write(&master_rwsem); <--- down_write()
>
> Overall, I think the suggestion to push master_rwsem write locks down
> into ioctls would solve the nesting problem for those ioctls.
Yup, my gut feeling agrees. And the above is a nice solution without
having to dig out all the code for creating a file directly (it's
doable I think at least, we do it for dma-buf).
> Although I'm still a little concerned that, just like here, there might
> be deeply embedded nested locking, so locking becomes prone to breaking.
> It does smell a bit to me.
Yeah, that's pretty much the bane of locking cleanup/rework. You have
to do it to figure out what goes boom :-/ Even with the most careful
audit there are surprises left.
-Daniel
> >> Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
> >> ---
> >> drivers/gpu/drm/drm_ioctl.c | 18 +++++++++---------
> >> 1 file changed, 9 insertions(+), 9 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
> >> index 880fc565d599..2cb57378a787 100644
> >> --- a/drivers/gpu/drm/drm_ioctl.c
> >> +++ b/drivers/gpu/drm/drm_ioctl.c
> >> @@ -779,19 +779,19 @@ long drm_ioctl_kernel(struct file *file, drm_ioctl_t *func, void *kdata,
> >> if (drm_dev_is_unplugged(dev))
> >> return -ENODEV;
> >>
> >> + /* Enforce sane locking for modern driver ioctls. */
> >> + if (unlikely(drm_dev_needs_global_mutex(dev)) && !(flags & DRM_UNLOCKED))
> >> + mutex_lock(&drm_global_mutex);
> >> +
> >> retcode = drm_ioctl_permit(flags, file_priv);
> >> if (unlikely(retcode))
> >> - return retcode;
> >> + goto out;
> >>
> >> - /* Enforce sane locking for modern driver ioctls. */
> >> - if (likely(!drm_core_check_feature(dev, DRIVER_LEGACY)) ||
> >> - (flags & DRM_UNLOCKED))
> >> - retcode = func(dev, kdata, file_priv);
> >> - else {
> >> - mutex_lock(&drm_global_mutex);
> >> - retcode = func(dev, kdata, file_priv);
> >> + retcode = func(dev, kdata, file_priv);
> >> +
> >> +out:
> >> + if (unlikely(drm_dev_needs_global_mutex(dev)) && !(flags & DRM_UNLOCKED))
> >> mutex_unlock(&drm_global_mutex);
> >> - }
> >> return retcode;
> >> }
> >> EXPORT_SYMBOL(drm_ioctl_kernel);
> >> --
> >> 2.25.1
> >>
> >
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch