Hi Thomas,
On 10.04.25 at 21:43, Thomas Petazzoni wrote:
> Hello Christian,
>
> Thanks for your feedback!
>
> On Thu, 10 Apr 2025 18:29:12 +0200
> Christian König <christian.koenig(a)amd.com> wrote:
>
>>> Many UIO users performing DMA from their UIO device need to access the
>>> DMA addresses of the allocated buffers. There are out-of-tree drivers
>>> that allow doing it, but there is nothing in mainline.
>> Well that basically disqualifies this patch set in the first paragraph.
>>
>> To justify some kernel change we always need an in kernel user of the
>> interface, since this is purely for out-of-tree drivers this is a
>> no-go to begin with.
> I'm not sure to understand your comment here. This patch series is
> about extending the UIO interface... which is a user-space interface.
> So obviously it has no "in-kernel user" because it's meant to be used
> from user-space. Could you clarify what you meant here?
Bastien wrote about "out-of-tree drivers", which is something the upstream kernel explicitly does not support.
If you add this UIO API and there is an open source userspace driver using it, then that is probably a good justification for doing this.
What the kernel community tries to prevent here is people starting to use the UAPI to write closed source drivers in userspace.
>> What you could potentially do is to create an UIO driver which
>> imports DMA-bufs, pins them and then provide the DMA addresses to
>> userspace.
>>
>> But please be aware that DMA-fences are fundamentally incompatible
>> with UIO. So you won't be able to do any form of synchronization
>> which probably makes the implementation pretty limited.
> Could you clarify why DMA-fences would be needed here, and for what
> synchronization?
In general, DMA-buf is an interface which enables you to share device-specific buffers between different drivers as well as between userspace processes.
For this to work with, for example, cameras, GPUs or RDMA NICs, you need some kind of synchronization primitive, e.g. device A can only start its DMA transaction once device B has completed its own.
The problem is that this synchronization approach is fundamentally incompatible with UIO. See here for more details: https://www.kernel.org/doc/html/latest/driver-api/dma-buf.html#indefinite-d…
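For illustration, here is a rough (untested) sketch of what that synchronization looks like from an importer's side, using the mainline dma_resv API; my_device_start_dma() is a placeholder for whatever actually kicks the hardware, not an existing function:

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/sched.h>

/* Placeholder for the device-specific "start the engine" routine. */
extern int my_device_start_dma(struct dma_buf *dmabuf);

static int start_dma_when_ready(struct dma_buf *dmabuf)
{
	long ret;

	/* Wait for all writers that previously attached fences to the buffer. */
	ret = dma_resv_wait_timeout(dmabuf->resv, DMA_RESV_USAGE_WRITE,
				    true, MAX_SCHEDULE_TIMEOUT);
	if (ret < 0)
		return ret;

	/* Only now is it safe to start our own DMA on the shared buffer. */
	return my_device_start_dma(dmabuf);
}

A UIO driver has no equivalent point of control, since the kernel never learns when the userspace-driven DMA actually completes; that is essentially the "indefinite fence" problem the document above describes.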
> The DMA buffers allocated here are DMA coherent buffers. So the
> user-space application that uses UIO will allocate such buffers once at
> application start, retrieve their DMA address, and then program DMA
> transfers as it sees fit using the memory-mapped registers accessible
> through UIO. I'm not sure which synchronization you are referring to.
> We are not "chaining" DMA transfers, with for example a camera
> interface feeding its DMA buffers to an ISP or something like that. The
> typical use case here is some IP block in an FPGA that does DMA
> transfers to transfer data to/from some sort of specialized I/O
> interface. We get an interrupt when the transfer is done, and we can
> re-use the buffer for the next transfer.
Well, why do you then want to use DMA-buf in the first place? As far as I know, what you describe can perfectly well be done with the normal UIO memory management interfaces.
DMA-buf is only interesting when you actually want to share something between devices or between applications.
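For reference, the normal UIO interfaces boil down to something like the following from userspace (device node, map index and register layout are made up for the example):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long pgsz = sysconf(_SC_PAGESIZE);
	int fd = open("/dev/uio0", O_RDWR);

	if (fd < 0)
		return 1;

	/* UIO memory map N is available at mmap offset N * page size. */
	volatile uint32_t *regs = mmap(NULL, pgsz, PROT_READ | PROT_WRITE,
				       MAP_SHARED, fd, 0 * pgsz);
	if (regs == MAP_FAILED)
		return 1;

	regs[0] = 0x1;	/* start a transfer; register layout is fictional */

	/* read() on a UIO device blocks until the next interrupt. */
	uint32_t irq_count;
	if (read(fd, &irq_count, sizeof(irq_count)) == sizeof(irq_count))
		printf("transfer done, %u interrupts so far\n", irq_count);

	return 0;
}

The address and size of each mapping are also exported through /sys/class/uio/uioX/maps/mapN/.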
Regards,
Christian.
> If you could clarify here as well, it would definitely help in
> understanding the shortcomings/limitations.
>
> Thanks a lot!
>
> Thomas Petazzoni
On 10.04.25 at 16:53, Bastien Curutchet wrote:
> Hi all,
>
> Many UIO users performing DMA from their UIO device need to access the
> DMA addresses of the allocated buffers. There are out-of-tree drivers
> that allow doing it, but there is nothing in mainline.
Well, that basically disqualifies this patch set in its first paragraph.
To justify a kernel change we always need an in-kernel user of the interface; since this is purely for out-of-tree drivers, it is a no-go to begin with.
> I know DMA shouldn't be handled by userspace but, IMHO, since UIO
> drivers exist, it would be better if they offered a way of doing this.
Leaking DMA addresses to userspace is usually seen as quite a security hole, but on the other hand, with UIO you don't have much other choice.
> This patch series uses the dma-heap framework, which already allows
> userspace to allocate DMA buffers. I tried to avoid 'polluting' the
> existing heaps to prevent inappropriate uses of this new feature by
> introducing a new UIO heap, which is the only one implementing this
> behavior.
Yeah, that won't fly at all.
What you could potentially do is create a UIO driver which imports DMA-bufs, pins them and then provides the DMA addresses to userspace.
But please be aware that DMA-fences are fundamentally incompatible with UIO, so you won't be able to do any form of synchronization, which probably makes the implementation pretty limited.
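If someone wanted to pursue that direction, a very rough sketch (untested, error unwinding and the actual UAPI plumbing omitted; uio_import_result is a made-up structure, not existing UAPI) could look like this:

#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

struct uio_import_result {
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	dma_addr_t dma_addr;
};

static int uio_import_dmabuf(struct device *dev, int fd,
			     struct uio_import_result *res)
{
	res->dmabuf = dma_buf_get(fd);
	if (IS_ERR(res->dmabuf))
		return PTR_ERR(res->dmabuf);

	res->attach = dma_buf_attach(res->dmabuf, dev);
	if (IS_ERR(res->attach))
		return PTR_ERR(res->attach);

	/* For a static importer this maps and thereby pins the buffer. */
	res->sgt = dma_buf_map_attachment_unlocked(res->attach,
						   DMA_BIDIRECTIONAL);
	if (IS_ERR(res->sgt))
		return PTR_ERR(res->sgt);

	/* Only meaningful as a single address if the mapping is contiguous. */
	res->dma_addr = sg_dma_address(res->sgt->sgl);
	return 0;
}

Note that this does nothing about the DMA-fence limitation above.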
Regards,
Christian.
>
> PATCH 1 allows the creation of heaps that don't implement map/unmap_buf
> operations as UIO heap doesn't use them.
> PATCH 2 adds the DMA_BUF_IOCTL_GET_DMA_ADDR which transmits the DMA
> addresses to userspace.
> PATCH 3 implements the UIO heap.
>
> It has been tested with the uio_pci_generic driver on a PowerPC.
>
> Signed-off-by: Bastien Curutchet <bastien.curutchet(a)bootlin.com>
> ---
> Bastien Curutchet (3):
> dma-buf: Allow heap that doesn't provide map_buf/unmap_buf
> dma-buf: Add DMA_BUF_IOCTL_GET_DMA_ADDR
> uio: Add UIO_DMABUF_HEAP
>
> drivers/dma-buf/dma-buf.c | 29 +++++++++--
> drivers/uio/Kconfig | 9 ++++
> drivers/uio/Makefile | 1 +
> drivers/uio/uio.c | 4 ++
> drivers/uio/uio_heap.c | 120 +++++++++++++++++++++++++++++++++++++++++++
> include/linux/dma-buf.h | 1 +
> include/linux/uio_driver.h | 2 +
> include/uapi/linux/dma-buf.h | 1 +
> 8 files changed, 164 insertions(+), 3 deletions(-)
> ---
> base-commit: 5f13fa25acaa4f586aaed12efcf7436e004eeaf2
> change-id: 20250408-uio-dma-9b011e9e7f0b
>
> Best regards,
On 10.04.25 at 15:09, Philipp Stanner wrote:
> On Thu, 2025-04-10 at 14:58 +0200, Christian König wrote:
>> On 10.04.25 at 11:24, Philipp Stanner wrote:
>>> Nouveau currently relies on the assumption that dma_fences will
>>> only
>>> ever get signaled through nouveau_fence_signal(), which takes care
>>> of
>>> removing a signaled fence from the list nouveau_fence_chan.pending.
>>>
>>> This self-imposed rule is violated in nouveau_fence_done(), where
>>> dma_fence_is_signaled() (somewhat surprisingly, considering its
>>> name)
>>> can signal the fence without removing it from the list. This
>>> enables
>>> accesses to already signaled fences through the list, which is a
>>> bug.
>>>
>>> In particular, it can race with nouveau_fence_context_kill(), which
>>> would then attempt to set an error code on an already signaled
>>> fence,
>>> which is illegal.
>>>
>>> In nouveau_fence_done(), the call to nouveau_fence_update() already
>>> ensures to signal all ready fences. Thus, the signaling potentially
>>> performed by dma_fence_is_signaled() is actually not necessary.
>> Ah, I now got what you are trying to do here! But that won't help.
>>
>> The problem is it is perfectly valid for somebody external (e.g.
>> other driver, TTM etc...) to call dma_fence_is_signaled() on a
>> nouveau fence.
>>
>> This will then in turn still signal the fence and leave it on the
>> pending list and creating the problem you have.
> Good to hear – precisely that then is the use case for a dma_fence
> callback! ^_^ It guarantees that, no matter who signals a fence, no
> matter at what place, a certain action will always be performed.
>
> I can't think of any other mechanism which could guarantee that a
> signaled fence immediately gets removed from nouveau's pending list,
> other than the callbacks.
>
> But seriously, I don't think that anyone does this currently, nor do I
> think that anyone could get away with doing it without the entire
> computer burning down.
Yeah, I don't think that this is possible at the moment.
When you do stuff like that from the provider side you will always run into lifetime issues, because in the signaling-from-interrupt case you then drop the last reference before the signaling is completed.
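To make the mechanism under discussion concrete, here is a sketch of what a callback-based approach looks like in general (this is not the attached patch; my_fence and its pending-list handling are illustrative only, and the lifetime problem above is not addressed; it also assumes the pending list is protected by the same lock the fence uses):

#include <linux/dma-fence.h>
#include <linux/list.h>

struct my_fence {
	struct dma_fence base;
	struct list_head head;	/* entry in the driver's pending list */
	struct dma_fence_cb cb;
};

/* Invoked from dma_fence_signal(), whoever the caller is, with the fence lock held. */
static void my_fence_signaled_cb(struct dma_fence *f, struct dma_fence_cb *cb)
{
	struct my_fence *fence = container_of(cb, struct my_fence, cb);

	list_del_init(&fence->head);
}

static void my_fence_track(struct my_fence *fence)
{
	/* dma_fence_add_callback() returns -ENOENT if already signaled. */
	if (dma_fence_add_callback(&fence->base, &fence->cb,
				   my_fence_signaled_cb))
		list_del_init(&fence->head);
}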
How about the attached (not even compile tested) patch? I think it should fix the issue.
Regards,
Christian.
>
> P.
>
>
>
>> Regards,
>> Christian.
>>
>>> Replace the call to dma_fence_is_signaled() with
>>> nouveau_fence_base_is_signaled().
>>>
>>> Cc: <stable(a)vger.kernel.org> # 4.10+, precise commit not to be
>>> determined
>>> Signed-off-by: Philipp Stanner <phasta(a)kernel.org>
>>> ---
>>> drivers/gpu/drm/nouveau/nouveau_fence.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c
>>> b/drivers/gpu/drm/nouveau/nouveau_fence.c
>>> index 7cc84472cece..33535987d8ed 100644
>>> --- a/drivers/gpu/drm/nouveau/nouveau_fence.c
>>> +++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
>>> @@ -274,7 +274,7 @@ nouveau_fence_done(struct nouveau_fence *fence)
>>> nvif_event_block(&fctx->event);
>>> spin_unlock_irqrestore(&fctx->lock, flags);
>>> }
>>> - return dma_fence_is_signaled(&fence->base);
>>> + return test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence-
>>>> base.flags);
>>> }
>>>
>>> static long
On 10.04.25 at 11:24, Philipp Stanner wrote:
> Nouveau currently relies on the assumption that dma_fences will only
> ever get signaled through nouveau_fence_signal(), which takes care of
> removing a signaled fence from the list nouveau_fence_chan.pending.
>
> This self-imposed rule is violated in nouveau_fence_done(), where
> dma_fence_is_signaled() (somewhat surprisingly, considering its name)
> can signal the fence without removing it from the list. This enables
> accesses to already signaled fences through the list, which is a bug.
>
> In particular, it can race with nouveau_fence_context_kill(), which
> would then attempt to set an error code on an already signaled fence,
> which is illegal.
>
> In nouveau_fence_done(), the call to nouveau_fence_update() already
> ensures to signal all ready fences. Thus, the signaling potentially
> performed by dma_fence_is_signaled() is actually not necessary.
Ah, now I get what you are trying to do here! But that won't help.
The problem is that it is perfectly valid for somebody external (e.g. another driver, TTM, etc.) to call dma_fence_is_signaled() on a nouveau fence.
This will then in turn still signal the fence and leave it on the pending list, creating exactly the problem you have.
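For context, the mainline helper looks roughly like this (from include/linux/dma-fence.h), which is why a mere query can end up signaling the fence:

static inline bool
dma_fence_is_signaled(struct dma_fence *fence)
{
	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
		return true;

	/* Polls the backend and, if it reports completion, signals the fence. */
	if (fence->ops->signaled && fence->ops->signaled(fence)) {
		dma_fence_signal(fence);
		return true;
	}

	return false;
}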
Regards,
Christian.
>
> Replace the call to dma_fence_is_signaled() with
> nouveau_fence_base_is_signaled().
>
> Cc: <stable(a)vger.kernel.org> # 4.10+, precise commit not to be determined
> Signed-off-by: Philipp Stanner <phasta(a)kernel.org>
> ---
> drivers/gpu/drm/nouveau/nouveau_fence.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
> index 7cc84472cece..33535987d8ed 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_fence.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
> @@ -274,7 +274,7 @@ nouveau_fence_done(struct nouveau_fence *fence)
> nvif_event_block(&fctx->event);
> spin_unlock_irqrestore(&fctx->lock, flags);
> }
> - return dma_fence_is_signaled(&fence->base);
> + return test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->base.flags);
> }
>
> static long
On 10.04.25 at 14:21, Danilo Krummrich wrote:
> On Thu, Apr 10, 2025 at 02:13:34PM +0200, Christian König wrote:
>> On 10.04.25 at 11:24, Philipp Stanner wrote:
>>> Nouveau currently relies on the assumption that dma_fences will only
>>> ever get signaled through nouveau_fence_signal(), which takes care of
>>> removing a signaled fence from the list nouveau_fence_chan.pending.
>>>
>>> This self-imposed rule is violated in nouveau_fence_done(), where
>>> dma_fence_is_signaled() (somewhat surprisingly, considering its name)
>>> can signal the fence without removing it from the list. This enables
>>> accesses to already signaled fences through the list, which is a bug.
>>>
>>> In particular, it can race with nouveau_fence_context_kill(), which
>>> would then attempt to set an error code on an already signaled fence,
>>> which is illegal.
>>>
>>> In nouveau_fence_done(), the call to nouveau_fence_update() already
>>> ensures to signal all ready fences. Thus, the signaling potentially
>>> performed by dma_fence_is_signaled() is actually not necessary.
>>>
>>> Replace the call to dma_fence_is_signaled() with
>>> nouveau_fence_base_is_signaled().
>>>
>>> Cc: <stable(a)vger.kernel.org> # 4.10+, precise commit not to be determined
>>> Signed-off-by: Philipp Stanner <phasta(a)kernel.org>
>>> ---
>>> drivers/gpu/drm/nouveau/nouveau_fence.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
>>> index 7cc84472cece..33535987d8ed 100644
>>> --- a/drivers/gpu/drm/nouveau/nouveau_fence.c
>>> +++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
>>> @@ -274,7 +274,7 @@ nouveau_fence_done(struct nouveau_fence *fence)
>>> nvif_event_block(&fctx->event);
>>> spin_unlock_irqrestore(&fctx->lock, flags);
>>> }
>>> - return dma_fence_is_signaled(&fence->base);
>>> + return test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->base.flags);
>> See the code above that:
>>
>> if (fence->base.ops == &nouveau_fence_ops_legacy ||
>> fence->base.ops == &nouveau_fence_ops_uevent) {
> I think this check is a bit pointless given that fence is already a struct
> nouveau_fence. :)
Oh, good point. I totally missed that.
In this case that indeed doesn't make any sense at all.
(Unless somebody just blindly upcast the structure, but I really hope that this isn't the case here.)
Regards,
Christian.
On 10.04.25 at 11:24, Philipp Stanner wrote:
> nouveau_fence_done() contains an if-branch which checks for the
> existence of either of two fence backend ops. Those two are the only
> backend ops existing in Nouveau, however; and at least one backend ops
> must be in use for the entire driver to be able to work. The if branch
> is, therefore, surplus.
>
> Remove the if-branch.
What happens here is that nouveau checks whether the fence comes from itself or from some external source.
So when you remove that check you potentially end up illegally using nouveau_fctx() on a non-nouveau fence.
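For reference, a sketch of why that would be a problem, assuming nouveau_fctx() is a container_of()-style downcast (which such helpers typically are, nouveau-internal headers assumed): on a fence that is not embedded in a struct nouveau_fence, the computed pointer is simply garbage.

/* Hypothetical guard; mirrors the check the patch removes. */
static struct nouveau_fence_chan *
fence_to_fctx_checked(struct dma_fence *f)
{
	if (f->ops != &nouveau_fence_ops_legacy &&
	    f->ops != &nouveau_fence_ops_uevent)
		return NULL;	/* not one of ours, don't downcast */

	return nouveau_fctx(container_of(f, struct nouveau_fence, base));
}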
Regards,
Christian.
>
> Signed-off-by: Philipp Stanner <phasta(a)kernel.org>
> ---
> drivers/gpu/drm/nouveau/nouveau_fence.c | 24 +++++++++++-------------
> 1 file changed, 11 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
> index 33535987d8ed..db6f4494405c 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_fence.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
> @@ -259,21 +259,19 @@ nouveau_fence_emit(struct nouveau_fence *fence)
> bool
> nouveau_fence_done(struct nouveau_fence *fence)
> {
> - if (fence->base.ops == &nouveau_fence_ops_legacy ||
> - fence->base.ops == &nouveau_fence_ops_uevent) {
> - struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
> - struct nouveau_channel *chan;
> - unsigned long flags;
> + struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
> + struct nouveau_channel *chan;
> + unsigned long flags;
>
> - if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->base.flags))
> - return true;
> + if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->base.flags))
> + return true;
> +
> + spin_lock_irqsave(&fctx->lock, flags);
> + chan = rcu_dereference_protected(fence->channel, lockdep_is_held(&fctx->lock));
> + if (chan && nouveau_fence_update(chan, fctx))
> + nvif_event_block(&fctx->event);
> + spin_unlock_irqrestore(&fctx->lock, flags);
>
> - spin_lock_irqsave(&fctx->lock, flags);
> - chan = rcu_dereference_protected(fence->channel, lockdep_is_held(&fctx->lock));
> - if (chan && nouveau_fence_update(chan, fctx))
> - nvif_event_block(&fctx->event);
> - spin_unlock_irqrestore(&fctx->lock, flags);
> - }
> return test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->base.flags);
> }
>