On 6/3/25 13:30, Tvrtko Ursulin wrote:
>
> On 02/06/2025 19:00, Christian König wrote:
>> On 6/2/25 17:25, Tvrtko Ursulin wrote:
>>>
>>> On 02/06/2025 15:42, Christian König wrote:
>>>> On 6/2/25 15:05, Tvrtko Ursulin wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> On 15/05/2025 14:15, Christian König wrote:
>>>>>> Hey drm-misc maintainers,
>>>>>>
>>>>>> can you guys please backmerge drm-next into drm-misc-next?
>>>>>>
>>>>>> I want to push this patch here but it depends on changes which are partially in drm-next and partially in drm-misc-next.
>>>>>
>>>>> Looks like the backmerge is still pending?
>>>>
>>>> Yes, @Maarten, @Maxime and @Thomas ping on this.
>>>>
>>>>> In the meantime, Christian, any chance you will have some bandwidth to think about the tail end of the series? Specifically patch 6 and how that is used onward.
>>>>
>>>> Well the RCU grace period is quite a nifty hack. I wanted to go over it again after merging the first patches from this series.
>>>>
>>>> In general this looks like a good idea to me, I just don't like that we explicitly need to expose dma_fence_access_begin() and dma_fence_access_end().
>>>>
>>>> In particular, we can't do that while calling fence->ops->release.
>>>
>>> Hm, why not? You think something will take offence at the rcu_read_lock()?
>>
>> Yes, in particular it is perfectly legitimate to call synchronize_rcu() or to lock semaphores/mutexes from that callback.
>>
>> Either keep the RCU critical section only for the trace or even better come up with some different approach, e.g. copying the string under the RCU lock or something like that.
>
> Hmm, but the kerneldoc explicitly says the callback can be called from irq context:
>
> /**
> * @release:
> *
> * Called on destruction of fence to release additional resources.
> * Can be called from irq context. This callback is optional. If it is
> * NULL, then dma_fence_free() is instead called as the default
> * implementation.
> */
> void (*release)(struct dma_fence *fence);
Ah, right. I mixed that up with the dma-buf object.
Yeah in that case that is probably harmless. We delegate the final free to a work item if necessary anyway.
But I would still like to avoid having the RCU section cover the release as well. Or is there any reason why we would explicitly want to do this?
Regards,
Christian.
>
>
> Regards,
>
> Tvrtko
>
>>
>> Regards,
>> Christian.
>>
>>>
>>> Regards,
>>>
>>> Tvrtko
>>>
>>>>>> On 5/15/25 11:49, Tvrtko Ursulin wrote:
>>>>>>> With the goal of reducing the need for drivers to touch (and dereference)
>>>>>>> fence->ops, we move the 64-bit seqnos flag from struct dma_fence_ops to
>>>>>>> the fence->flags.
>>>>>>>
>>>>>>> Drivers which were setting this flag are changed to use new
>>>>>>> dma_fence_init64() instead of dma_fence_init().
>>>>>>>
>>>>>>> v2:
>>>>>>> * Streamlined init and added kerneldoc.
>>>>>>> * Rebase for amdgpu userq which landed since.
>>>>>>>
>>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
>>>>>>> Reviewed-by: Christian König <christian.koenig(a)amd.com> # v1
>>>>>>> ---
>>>>>>> drivers/dma-buf/dma-fence-chain.c | 5 +-
>>>>>>> drivers/dma-buf/dma-fence.c | 69 ++++++++++++++-----
>>>>>>> .../drm/amd/amdgpu/amdgpu_eviction_fence.c | 7 +-
>>>>>>> .../gpu/drm/amd/amdgpu/amdgpu_userq_fence.c | 5 +-
>>>>>>> .../gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c | 5 +-
>>>>>>> include/linux/dma-fence.h | 14 ++--
>>>>>>> 6 files changed, 64 insertions(+), 41 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
>>>>>>> index 90424f23fd73..a8a90acf4f34 100644
>>>>>>> --- a/drivers/dma-buf/dma-fence-chain.c
>>>>>>> +++ b/drivers/dma-buf/dma-fence-chain.c
>>>>>>> @@ -218,7 +218,6 @@ static void dma_fence_chain_set_deadline(struct dma_fence *fence,
>>>>>>> }
>>>>>>> const struct dma_fence_ops dma_fence_chain_ops = {
>>>>>>> - .use_64bit_seqno = true,
>>>>>>> .get_driver_name = dma_fence_chain_get_driver_name,
>>>>>>> .get_timeline_name = dma_fence_chain_get_timeline_name,
>>>>>>> .enable_signaling = dma_fence_chain_enable_signaling,
>>>>>>> @@ -262,8 +261,8 @@ void dma_fence_chain_init(struct dma_fence_chain *chain,
>>>>>>> seqno = max(prev->seqno, seqno);
>>>>>>> }
>>>>>>> - dma_fence_init(&chain->base, &dma_fence_chain_ops,
>>>>>>> - &chain->lock, context, seqno);
>>>>>>> + dma_fence_init64(&chain->base, &dma_fence_chain_ops, &chain->lock,
>>>>>>> + context, seqno);
>>>>>>> /*
>>>>>>> * Chaining dma_fence_chain container together is only allowed through
>>>>>>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>>>>>>> index f0cdd3e99d36..705b59787731 100644
>>>>>>> --- a/drivers/dma-buf/dma-fence.c
>>>>>>> +++ b/drivers/dma-buf/dma-fence.c
>>>>>>> @@ -989,24 +989,9 @@ void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq)
>>>>>>> }
>>>>>>> EXPORT_SYMBOL(dma_fence_describe);
>>>>>>> -/**
>>>>>>> - * dma_fence_init - Initialize a custom fence.
>>>>>>> - * @fence: the fence to initialize
>>>>>>> - * @ops: the dma_fence_ops for operations on this fence
>>>>>>> - * @lock: the irqsafe spinlock to use for locking this fence
>>>>>>> - * @context: the execution context this fence is run on
>>>>>>> - * @seqno: a linear increasing sequence number for this context
>>>>>>> - *
>>>>>>> - * Initializes an allocated fence, the caller doesn't have to keep its
>>>>>>> - * refcount after committing with this fence, but it will need to hold a
>>>>>>> - * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>>>> - *
>>>>>>> - * context and seqno are used for easy comparison between fences, allowing
>>>>>>> - * to check which fence is later by simply using dma_fence_later().
>>>>>>> - */
>>>>>>> -void
>>>>>>> -dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> - spinlock_t *lock, u64 context, u64 seqno)
>>>>>>> +static void
>>>>>>> +__dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> + spinlock_t *lock, u64 context, u64 seqno, unsigned long flags)
>>>>>>> {
>>>>>>> BUG_ON(!lock);
>>>>>>> BUG_ON(!ops || !ops->get_driver_name || !ops->get_timeline_name);
>>>>>>> @@ -1017,9 +1002,55 @@ dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> fence->lock = lock;
>>>>>>> fence->context = context;
>>>>>>> fence->seqno = seqno;
>>>>>>> - fence->flags = 0UL;
>>>>>>> + fence->flags = flags;
>>>>>>> fence->error = 0;
>>>>>>> trace_dma_fence_init(fence);
>>>>>>> }
>>>>>>> +
>>>>>>> +/**
>>>>>>> + * dma_fence_init - Initialize a custom fence.
>>>>>>> + * @fence: the fence to initialize
>>>>>>> + * @ops: the dma_fence_ops for operations on this fence
>>>>>>> + * @lock: the irqsafe spinlock to use for locking this fence
>>>>>>> + * @context: the execution context this fence is run on
>>>>>>> + * @seqno: a linear increasing sequence number for this context
>>>>>>> + *
>>>>>>> + * Initializes an allocated fence, the caller doesn't have to keep its
>>>>>>> + * refcount after committing with this fence, but it will need to hold a
>>>>>>> + * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>>>> + *
>>>>>>> + * context and seqno are used for easy comparison between fences, allowing
>>>>>>> + * to check which fence is later by simply using dma_fence_later().
>>>>>>> + */
>>>>>>> +void
>>>>>>> +dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> + spinlock_t *lock, u64 context, u64 seqno)
>>>>>>> +{
>>>>>>> + __dma_fence_init(fence, ops, lock, context, seqno, 0UL);
>>>>>>> +}
>>>>>>> EXPORT_SYMBOL(dma_fence_init);
>>>>>>> +
>>>>>>> +/**
>>>>>>> + * dma_fence_init64 - Initialize a custom fence with 64-bit seqno support.
>>>>>>> + * @fence: the fence to initialize
>>>>>>> + * @ops: the dma_fence_ops for operations on this fence
>>>>>>> + * @lock: the irqsafe spinlock to use for locking this fence
>>>>>>> + * @context: the execution context this fence is run on
>>>>>>> + * @seqno: a linear increasing sequence number for this context
>>>>>>> + *
>>>>>>> + * Initializes an allocated fence, the caller doesn't have to keep its
>>>>>>> + * refcount after committing with this fence, but it will need to hold a
>>>>>>> + * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>>>> + *
>>>>>>> + * Context and seqno are used for easy comparison between fences, allowing
>>>>>>> + * to check which fence is later by simply using dma_fence_later().
>>>>>>> + */
>>>>>>> +void
>>>>>>> +dma_fence_init64(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> + spinlock_t *lock, u64 context, u64 seqno)
>>>>>>> +{
>>>>>>> + __dma_fence_init(fence, ops, lock, context, seqno,
>>>>>>> + BIT(DMA_FENCE_FLAG_SEQNO64_BIT));
>>>>>>> +}
>>>>>>> +EXPORT_SYMBOL(dma_fence_init64);
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>>>> index 1a7469543db5..79713421bffe 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>>>> @@ -134,7 +134,6 @@ static bool amdgpu_eviction_fence_enable_signaling(struct dma_fence *f)
>>>>>>> }
>>>>>>> static const struct dma_fence_ops amdgpu_eviction_fence_ops = {
>>>>>>> - .use_64bit_seqno = true,
>>>>>>> .get_driver_name = amdgpu_eviction_fence_get_driver_name,
>>>>>>> .get_timeline_name = amdgpu_eviction_fence_get_timeline_name,
>>>>>>> .enable_signaling = amdgpu_eviction_fence_enable_signaling,
>>>>>>> @@ -160,9 +159,9 @@ amdgpu_eviction_fence_create(struct amdgpu_eviction_fence_mgr *evf_mgr)
>>>>>>> ev_fence->evf_mgr = evf_mgr;
>>>>>>> get_task_comm(ev_fence->timeline_name, current);
>>>>>>> spin_lock_init(&ev_fence->lock);
>>>>>>> - dma_fence_init(&ev_fence->base, &amdgpu_eviction_fence_ops,
>>>>>>> - &ev_fence->lock, evf_mgr->ev_fence_ctx,
>>>>>>> - atomic_inc_return(&evf_mgr->ev_fence_seq));
>>>>>>> + dma_fence_init64(&ev_fence->base, &amdgpu_eviction_fence_ops,
>>>>>>> + &ev_fence->lock, evf_mgr->ev_fence_ctx,
>>>>>>> + atomic_inc_return(&evf_mgr->ev_fence_seq));
>>>>>>> return ev_fence;
>>>>>>> }
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>>>> index 029cb24c28b3..5e92d00a591f 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>>>> @@ -239,8 +239,8 @@ static int amdgpu_userq_fence_create(struct amdgpu_usermode_queue *userq,
>>>>>>> fence = &userq_fence->base;
>>>>>>> userq_fence->fence_drv = fence_drv;
>>>>>>> - dma_fence_init(fence, &amdgpu_userq_fence_ops, &userq_fence->lock,
>>>>>>> - fence_drv->context, seq);
>>>>>>> + dma_fence_init64(fence, &amdgpu_userq_fence_ops, &userq_fence->lock,
>>>>>>> + fence_drv->context, seq);
>>>>>>> amdgpu_userq_fence_driver_get(fence_drv);
>>>>>>> dma_fence_get(fence);
>>>>>>> @@ -334,7 +334,6 @@ static void amdgpu_userq_fence_release(struct dma_fence *f)
>>>>>>> }
>>>>>>> static const struct dma_fence_ops amdgpu_userq_fence_ops = {
>>>>>>> - .use_64bit_seqno = true,
>>>>>>> .get_driver_name = amdgpu_userq_fence_get_driver_name,
>>>>>>> .get_timeline_name = amdgpu_userq_fence_get_timeline_name,
>>>>>>> .signaled = amdgpu_userq_fence_signaled,
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>>>> index 51cddfa3f1e8..5d26797356a3 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>>>> @@ -71,7 +71,6 @@ static void amdgpu_tlb_fence_work(struct work_struct *work)
>>>>>>> }
>>>>>>> static const struct dma_fence_ops amdgpu_tlb_fence_ops = {
>>>>>>> - .use_64bit_seqno = true,
>>>>>>> .get_driver_name = amdgpu_tlb_fence_get_driver_name,
>>>>>>> .get_timeline_name = amdgpu_tlb_fence_get_timeline_name
>>>>>>> };
>>>>>>> @@ -101,8 +100,8 @@ void amdgpu_vm_tlb_fence_create(struct amdgpu_device *adev, struct amdgpu_vm *vm
>>>>>>> INIT_WORK(&f->work, amdgpu_tlb_fence_work);
>>>>>>> spin_lock_init(&f->lock);
>>>>>>> - dma_fence_init(&f->base, &amdgpu_tlb_fence_ops, &f->lock,
>>>>>>> - vm->tlb_fence_context, atomic64_read(&vm->tlb_seq));
>>>>>>> + dma_fence_init64(&f->base, &amdgpu_tlb_fence_ops, &f->lock,
>>>>>>> + vm->tlb_fence_context, atomic64_read(&vm->tlb_seq));
>>>>>>> /* TODO: We probably need a separate wq here */
>>>>>>> dma_fence_get(&f->base);
>>>>>>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
>>>>>>> index 48b5202c531d..a34a0dcdc446 100644
>>>>>>> --- a/include/linux/dma-fence.h
>>>>>>> +++ b/include/linux/dma-fence.h
>>>>>>> @@ -97,6 +97,7 @@ struct dma_fence {
>>>>>>> };
>>>>>>> enum dma_fence_flag_bits {
>>>>>>> + DMA_FENCE_FLAG_SEQNO64_BIT,
>>>>>>> DMA_FENCE_FLAG_SIGNALED_BIT,
>>>>>>> DMA_FENCE_FLAG_TIMESTAMP_BIT,
>>>>>>> DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
>>>>>>> @@ -124,14 +125,6 @@ struct dma_fence_cb {
>>>>>>> *
>>>>>>> */
>>>>>>> struct dma_fence_ops {
>>>>>>> - /**
>>>>>>> - * @use_64bit_seqno:
>>>>>>> - *
>>>>>>> - * True if this dma_fence implementation uses 64bit seqno, false
>>>>>>> - * otherwise.
>>>>>>> - */
>>>>>>> - bool use_64bit_seqno;
>>>>>>> -
>>>>>>> /**
>>>>>>> * @get_driver_name:
>>>>>>> *
>>>>>>> @@ -262,6 +255,9 @@ struct dma_fence_ops {
>>>>>>> void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> spinlock_t *lock, u64 context, u64 seqno);
>>>>>>> +void dma_fence_init64(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> + spinlock_t *lock, u64 context, u64 seqno);
>>>>>>> +
>>>>>>> void dma_fence_release(struct kref *kref);
>>>>>>> void dma_fence_free(struct dma_fence *fence);
>>>>>>> void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq);
>>>>>>> @@ -454,7 +450,7 @@ static inline bool __dma_fence_is_later(struct dma_fence *fence, u64 f1, u64 f2)
>>>>>>> * 32bit sequence numbers. Use a 64bit compare when the driver says to
>>>>>>> * do so.
>>>>>>> */
>>>>>>> - if (fence->ops->use_64bit_seqno)
>>>>>>> + if (test_bit(DMA_FENCE_FLAG_SEQNO64_BIT, &fence->flags))
>>>>>>> return f1 > f2;
>>>>>>> return (int)(lower_32_bits(f1) - lower_32_bits(f2)) > 0;
>>>>>>
>>>>>
>>>>
>>>
>>
>
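For reference, the 32-bit comparison kept at the end of the __dma_fence_is_later() hunk above is what tolerates seqno wraparound; DMA_FENCE_FLAG_SEQNO64_BIT merely opts a fence out of that fixup. A tiny illustration with made-up values:

	u32 f1 = 1, f2 = 0xfffffffe;		/* f1 issued after f2, counter wrapped */
	bool later = (int)(f1 - f2) > 0;	/* true: (u32)(1 - 0xfffffffe) == 3 */
	bool naive = f1 > f2;			/* false: a plain compare gets it wrong */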
On Tue, Jun 03, 2025 at 02:20:51PM +0800, Xu Yilun wrote:
> > Wouldn’t it be simpler to skip the reference count increment altogether
> > and just call tsm_unbind in the virtual device’s destroy callback?
> > (iommufd_vdevice_destroy())
>
> The vdevice refcount is the main concern: there is also an IOMMU_DESTROY
> ioctl, so without a refcount the user could just free the vdevice instance
> while VFIO is still in the bound state. That does not seem like the correct
> free order.
Freeing the vdevice should automatically unbind it.
Jason
On 6/2/25 17:25, Tvrtko Ursulin wrote:
>
> On 02/06/2025 15:42, Christian König wrote:
>> On 6/2/25 15:05, Tvrtko Ursulin wrote:
>>>
>>> Hi,
>>>
>>> On 15/05/2025 14:15, Christian König wrote:
>>>> Hey drm-misc maintainers,
>>>>
>>>> can you guys please backmerge drm-next into drm-misc-next?
>>>>
>>>> I want to push this patch here but it depends on changes which are partially in drm-next and partially in drm-misc-next.
>>>
>>> Looks like the backmerge is still pending?
>>
>> Yes, @Maarten, @Maxime and @Thomas ping on this.
>>
>>> In the meantime, Christian, any chance you will have some bandwidth to think about the tail end of the series? Specifically patch 6 and how that is used onward.
>>
>> Well the RCU grace period is quite a nifty hack. I wanted to go over it again after merging the first patches from this series.
>>
>> In general this looks like a good idea to me, I just don't like that we explicitly need to expose dma_fence_access_begin() and dma_fence_access_end().
>>
>> In particular, we can't do that while calling fence->ops->release.
>
> Hm, why not? You think something will take offence at the rcu_read_lock()?
Yes, in particular it is perfectly legitimate to call synchronize_rcu() or to lock semaphores/mutexes from that callback.
Either keep the RCU critical section only for the trace or even better come up with some different approach, e.g. copying the string under the RCU lock or something like that.
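A minimal sketch of that second idea, purely to show the shape (the consumer of the local copy is hand-waved; this is not a concrete API proposal):

	void dma_fence_release(struct kref *kref)
	{
		struct dma_fence *fence =
			container_of(kref, struct dma_fence, refcount);
		char timeline[32];

		/* Copy the driver-owned string while RCU guarantees it is
		 * still valid, and keep the tracepoint inside the section. */
		rcu_read_lock();
		strscpy(timeline, fence->ops->get_timeline_name(fence),
			sizeof(timeline));
		trace_dma_fence_destroy(fence);
		rcu_read_unlock();

		/* The driver callback runs outside the RCU section, so it is
		 * free to sleep, take mutexes or call synchronize_rcu(). */
		if (fence->ops->release)
			fence->ops->release(fence);
		else
			dma_fence_free(fence);
	}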
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
>>>> On 5/15/25 11:49, Tvrtko Ursulin wrote:
>>>>> With the goal of reducing the need for drivers to touch (and dereference)
>>>>> fence->ops, we move the 64-bit seqnos flag from struct dma_fence_ops to
>>>>> the fence->flags.
>>>>>
>>>>> Drivers which were setting this flag are changed to use new
>>>>> dma_fence_init64() instead of dma_fence_init().
>>>>>
>>>>> v2:
>>>>> * Streamlined init and added kerneldoc.
>>>>> * Rebase for amdgpu userq which landed since.
>>>>>
>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
>>>>> Reviewed-by: Christian König <christian.koenig(a)amd.com> # v1
>>>>> ---
>>>>> drivers/dma-buf/dma-fence-chain.c | 5 +-
>>>>> drivers/dma-buf/dma-fence.c | 69 ++++++++++++++-----
>>>>> .../drm/amd/amdgpu/amdgpu_eviction_fence.c | 7 +-
>>>>> .../gpu/drm/amd/amdgpu/amdgpu_userq_fence.c | 5 +-
>>>>> .../gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c | 5 +-
>>>>> include/linux/dma-fence.h | 14 ++--
>>>>> 6 files changed, 64 insertions(+), 41 deletions(-)
>>>>>
>>>>> diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
>>>>> index 90424f23fd73..a8a90acf4f34 100644
>>>>> --- a/drivers/dma-buf/dma-fence-chain.c
>>>>> +++ b/drivers/dma-buf/dma-fence-chain.c
>>>>> @@ -218,7 +218,6 @@ static void dma_fence_chain_set_deadline(struct dma_fence *fence,
>>>>> }
>>>>> const struct dma_fence_ops dma_fence_chain_ops = {
>>>>> - .use_64bit_seqno = true,
>>>>> .get_driver_name = dma_fence_chain_get_driver_name,
>>>>> .get_timeline_name = dma_fence_chain_get_timeline_name,
>>>>> .enable_signaling = dma_fence_chain_enable_signaling,
>>>>> @@ -262,8 +261,8 @@ void dma_fence_chain_init(struct dma_fence_chain *chain,
>>>>> seqno = max(prev->seqno, seqno);
>>>>> }
>>>>> - dma_fence_init(&chain->base, &dma_fence_chain_ops,
>>>>> - &chain->lock, context, seqno);
>>>>> + dma_fence_init64(&chain->base, &dma_fence_chain_ops, &chain->lock,
>>>>> + context, seqno);
>>>>> /*
>>>>> * Chaining dma_fence_chain container together is only allowed through
>>>>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>>>>> index f0cdd3e99d36..705b59787731 100644
>>>>> --- a/drivers/dma-buf/dma-fence.c
>>>>> +++ b/drivers/dma-buf/dma-fence.c
>>>>> @@ -989,24 +989,9 @@ void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq)
>>>>> }
>>>>> EXPORT_SYMBOL(dma_fence_describe);
>>>>> -/**
>>>>> - * dma_fence_init - Initialize a custom fence.
>>>>> - * @fence: the fence to initialize
>>>>> - * @ops: the dma_fence_ops for operations on this fence
>>>>> - * @lock: the irqsafe spinlock to use for locking this fence
>>>>> - * @context: the execution context this fence is run on
>>>>> - * @seqno: a linear increasing sequence number for this context
>>>>> - *
>>>>> - * Initializes an allocated fence, the caller doesn't have to keep its
>>>>> - * refcount after committing with this fence, but it will need to hold a
>>>>> - * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>> - *
>>>>> - * context and seqno are used for easy comparison between fences, allowing
>>>>> - * to check which fence is later by simply using dma_fence_later().
>>>>> - */
>>>>> -void
>>>>> -dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>> - spinlock_t *lock, u64 context, u64 seqno)
>>>>> +static void
>>>>> +__dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>> + spinlock_t *lock, u64 context, u64 seqno, unsigned long flags)
>>>>> {
>>>>> BUG_ON(!lock);
>>>>> BUG_ON(!ops || !ops->get_driver_name || !ops->get_timeline_name);
>>>>> @@ -1017,9 +1002,55 @@ dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>> fence->lock = lock;
>>>>> fence->context = context;
>>>>> fence->seqno = seqno;
>>>>> - fence->flags = 0UL;
>>>>> + fence->flags = flags;
>>>>> fence->error = 0;
>>>>> trace_dma_fence_init(fence);
>>>>> }
>>>>> +
>>>>> +/**
>>>>> + * dma_fence_init - Initialize a custom fence.
>>>>> + * @fence: the fence to initialize
>>>>> + * @ops: the dma_fence_ops for operations on this fence
>>>>> + * @lock: the irqsafe spinlock to use for locking this fence
>>>>> + * @context: the execution context this fence is run on
>>>>> + * @seqno: a linear increasing sequence number for this context
>>>>> + *
>>>>> + * Initializes an allocated fence, the caller doesn't have to keep its
>>>>> + * refcount after committing with this fence, but it will need to hold a
>>>>> + * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>> + *
>>>>> + * context and seqno are used for easy comparison between fences, allowing
>>>>> + * to check which fence is later by simply using dma_fence_later().
>>>>> + */
>>>>> +void
>>>>> +dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>> + spinlock_t *lock, u64 context, u64 seqno)
>>>>> +{
>>>>> + __dma_fence_init(fence, ops, lock, context, seqno, 0UL);
>>>>> +}
>>>>> EXPORT_SYMBOL(dma_fence_init);
>>>>> +
>>>>> +/**
>>>>> + * dma_fence_init64 - Initialize a custom fence with 64-bit seqno support.
>>>>> + * @fence: the fence to initialize
>>>>> + * @ops: the dma_fence_ops for operations on this fence
>>>>> + * @lock: the irqsafe spinlock to use for locking this fence
>>>>> + * @context: the execution context this fence is run on
>>>>> + * @seqno: a linear increasing sequence number for this context
>>>>> + *
>>>>> + * Initializes an allocated fence, the caller doesn't have to keep its
>>>>> + * refcount after committing with this fence, but it will need to hold a
>>>>> + * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>> + *
>>>>> + * Context and seqno are used for easy comparison between fences, allowing
>>>>> + * to check which fence is later by simply using dma_fence_later().
>>>>> + */
>>>>> +void
>>>>> +dma_fence_init64(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>> + spinlock_t *lock, u64 context, u64 seqno)
>>>>> +{
>>>>> + __dma_fence_init(fence, ops, lock, context, seqno,
>>>>> + BIT(DMA_FENCE_FLAG_SEQNO64_BIT));
>>>>> +}
>>>>> +EXPORT_SYMBOL(dma_fence_init64);
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>> index 1a7469543db5..79713421bffe 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>> @@ -134,7 +134,6 @@ static bool amdgpu_eviction_fence_enable_signaling(struct dma_fence *f)
>>>>> }
>>>>> static const struct dma_fence_ops amdgpu_eviction_fence_ops = {
>>>>> - .use_64bit_seqno = true,
>>>>> .get_driver_name = amdgpu_eviction_fence_get_driver_name,
>>>>> .get_timeline_name = amdgpu_eviction_fence_get_timeline_name,
>>>>> .enable_signaling = amdgpu_eviction_fence_enable_signaling,
>>>>> @@ -160,9 +159,9 @@ amdgpu_eviction_fence_create(struct amdgpu_eviction_fence_mgr *evf_mgr)
>>>>> ev_fence->evf_mgr = evf_mgr;
>>>>> get_task_comm(ev_fence->timeline_name, current);
>>>>> spin_lock_init(&ev_fence->lock);
>>>>> - dma_fence_init(&ev_fence->base, &amdgpu_eviction_fence_ops,
>>>>> - &ev_fence->lock, evf_mgr->ev_fence_ctx,
>>>>> - atomic_inc_return(&evf_mgr->ev_fence_seq));
>>>>> + dma_fence_init64(&ev_fence->base, &amdgpu_eviction_fence_ops,
>>>>> + &ev_fence->lock, evf_mgr->ev_fence_ctx,
>>>>> + atomic_inc_return(&evf_mgr->ev_fence_seq));
>>>>> return ev_fence;
>>>>> }
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>> index 029cb24c28b3..5e92d00a591f 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>> @@ -239,8 +239,8 @@ static int amdgpu_userq_fence_create(struct amdgpu_usermode_queue *userq,
>>>>> fence = &userq_fence->base;
>>>>> userq_fence->fence_drv = fence_drv;
>>>>> - dma_fence_init(fence, &amdgpu_userq_fence_ops, &userq_fence->lock,
>>>>> - fence_drv->context, seq);
>>>>> + dma_fence_init64(fence, &amdgpu_userq_fence_ops, &userq_fence->lock,
>>>>> + fence_drv->context, seq);
>>>>> amdgpu_userq_fence_driver_get(fence_drv);
>>>>> dma_fence_get(fence);
>>>>> @@ -334,7 +334,6 @@ static void amdgpu_userq_fence_release(struct dma_fence *f)
>>>>> }
>>>>> static const struct dma_fence_ops amdgpu_userq_fence_ops = {
>>>>> - .use_64bit_seqno = true,
>>>>> .get_driver_name = amdgpu_userq_fence_get_driver_name,
>>>>> .get_timeline_name = amdgpu_userq_fence_get_timeline_name,
>>>>> .signaled = amdgpu_userq_fence_signaled,
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>> index 51cddfa3f1e8..5d26797356a3 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>> @@ -71,7 +71,6 @@ static void amdgpu_tlb_fence_work(struct work_struct *work)
>>>>> }
>>>>> static const struct dma_fence_ops amdgpu_tlb_fence_ops = {
>>>>> - .use_64bit_seqno = true,
>>>>> .get_driver_name = amdgpu_tlb_fence_get_driver_name,
>>>>> .get_timeline_name = amdgpu_tlb_fence_get_timeline_name
>>>>> };
>>>>> @@ -101,8 +100,8 @@ void amdgpu_vm_tlb_fence_create(struct amdgpu_device *adev, struct amdgpu_vm *vm
>>>>> INIT_WORK(&f->work, amdgpu_tlb_fence_work);
>>>>> spin_lock_init(&f->lock);
>>>>> - dma_fence_init(&f->base, &amdgpu_tlb_fence_ops, &f->lock,
>>>>> - vm->tlb_fence_context, atomic64_read(&vm->tlb_seq));
>>>>> + dma_fence_init64(&f->base, &amdgpu_tlb_fence_ops, &f->lock,
>>>>> + vm->tlb_fence_context, atomic64_read(&vm->tlb_seq));
>>>>> /* TODO: We probably need a separate wq here */
>>>>> dma_fence_get(&f->base);
>>>>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
>>>>> index 48b5202c531d..a34a0dcdc446 100644
>>>>> --- a/include/linux/dma-fence.h
>>>>> +++ b/include/linux/dma-fence.h
>>>>> @@ -97,6 +97,7 @@ struct dma_fence {
>>>>> };
>>>>> enum dma_fence_flag_bits {
>>>>> + DMA_FENCE_FLAG_SEQNO64_BIT,
>>>>> DMA_FENCE_FLAG_SIGNALED_BIT,
>>>>> DMA_FENCE_FLAG_TIMESTAMP_BIT,
>>>>> DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
>>>>> @@ -124,14 +125,6 @@ struct dma_fence_cb {
>>>>> *
>>>>> */
>>>>> struct dma_fence_ops {
>>>>> - /**
>>>>> - * @use_64bit_seqno:
>>>>> - *
>>>>> - * True if this dma_fence implementation uses 64bit seqno, false
>>>>> - * otherwise.
>>>>> - */
>>>>> - bool use_64bit_seqno;
>>>>> -
>>>>> /**
>>>>> * @get_driver_name:
>>>>> *
>>>>> @@ -262,6 +255,9 @@ struct dma_fence_ops {
>>>>> void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>> spinlock_t *lock, u64 context, u64 seqno);
>>>>> +void dma_fence_init64(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>> + spinlock_t *lock, u64 context, u64 seqno);
>>>>> +
>>>>> void dma_fence_release(struct kref *kref);
>>>>> void dma_fence_free(struct dma_fence *fence);
>>>>> void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq);
>>>>> @@ -454,7 +450,7 @@ static inline bool __dma_fence_is_later(struct dma_fence *fence, u64 f1, u64 f2)
>>>>> * 32bit sequence numbers. Use a 64bit compare when the driver says to
>>>>> * do so.
>>>>> */
>>>>> - if (fence->ops->use_64bit_seqno)
>>>>> + if (test_bit(DMA_FENCE_FLAG_SEQNO64_BIT, &fence->flags))
>>>>> return f1 > f2;
>>>>> return (int)(lower_32_bits(f1) - lower_32_bits(f2)) > 0;
>>>>
>>>
>>
>
6.6-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Zimmermann <tzimmermann(a)suse.de>
commit 8260731ccad0451207b45844bb66eb161a209218 upstream.
Test struct drm_gem_object.import_attach to detect imported objects.
During object cleanup, the dma_buf field might be NULL. Testing it in
an object's free callback then incorrectly does a cleanup as for native
objects. Happens for calls to drm_mode_destroy_dumb_ioctl() that
clears the dma_buf field in drm_gem_object_exported_dma_buf_free().
v3:
- only test for import_attach (Boris)
v2:
- use import_attach.dmabuf instead of dma_buf (Christian)
Signed-off-by: Thomas Zimmermann <tzimmermann(a)suse.de>
Fixes: b57aa47d39e9 ("drm/gem: Test for imported GEM buffers with helper")
Reported-by: Andy Yan <andyshrk(a)163.com>
Closes: https://lore.kernel.org/dri-devel/38d09d34.4354.196379aa560.Coremail.andysh…
Tested-by: Andy Yan <andyshrk(a)163.com>
Cc: Thomas Zimmermann <tzimmermann(a)suse.de>
Cc: Anusha Srivatsa <asrivats(a)redhat.com>
Cc: Christian König <christian.koenig(a)amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Maxime Ripard <mripard(a)kernel.org>
Cc: David Airlie <airlied(a)gmail.com>
Cc: Simona Vetter <simona(a)ffwll.ch>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: dri-devel(a)lists.freedesktop.org
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Reviewed-by: Boris Brezillon <boris.brezillon(a)collabora.com>
Reviewed-by: Simona Vetter <simona.vetter(a)ffwll.ch>
Link: https://lore.kernel.org/r/20250416065820.26076-1-tzimmermann@suse.de
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/drm/drm_gem.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -567,8 +567,7 @@ static inline bool drm_gem_object_is_sha
*/
static inline bool drm_gem_is_imported(const struct drm_gem_object *obj)
{
- /* The dma-buf's priv field points to the original GEM object. */
- return obj->dma_buf && (obj->dma_buf->priv != obj);
+ return !!obj->import_attach;
}
#ifdef CONFIG_LOCKDEP
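For context, a hedged sketch of the call site this helper serves: a driver free callback picking the imported vs. native cleanup path. my_gem_object, bo->sgt and my_free_backing_pages are invented for the example:

	static void my_gem_free_object(struct drm_gem_object *obj)
	{
		struct my_gem_object *bo = to_my_gem_object(obj);

		if (drm_gem_is_imported(obj)) {
			/* Imported: tear down the PRIME attachment mapping. */
			drm_prime_gem_destroy(obj, bo->sgt);
		} else {
			/* Native: free the backing storage we allocated. */
			my_free_backing_pages(bo);
		}

		drm_gem_object_release(obj);
		kfree(bo);
	}

With the old dma_buf-based test, an object whose dma_buf field had already been cleared was misclassified as native and took the wrong branch here.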
On Thu, May 29, 2025 at 01:34:43PM +0800, Xu Yilun wrote:
> This series has 3 sections:
I really think this is too big to try to progress, even in RFC
form.
> Patch 1 - 11 deal with the private MMIO mapping in KVM MMU via DMABUF.
> Leverage Jason & Vivek's latest VFIO dmabuf series [3], see Patch 2 - 4.
> The concern for the get_pfn() kAPI [4] is not addressed, so those patches
> are marked as HACK; will investigate later.
I would probably split this out entirely into its own topic. It
doesn't seem directly related to TSM as KVM can use DMABUF for good
reasons independently.
> Patch 12 - 22 is about TSM Bind/Unbind/Guest request management in VFIO
> & IOMMUFD. Picks some of Shameer's patch in [5], see Patch 12 & 14.
This seems like a reasonable topic on its own after Dan's series.
> Patch 23 - 30 is a solution to meet the TDX specific sequence
> enforcement on various device Unbind cases, including converting device
> back to shared, hot unplug, TD destroy. Start with a tdx_tsm driver
> prototype and finally implement the Unbind enforcement inside the
> driver. To be honest it is still awkward to me, but I need help.
Then you have a series or two to implement TDX using the infrastructure.
Jason
On Thu, May 29, 2025 at 01:34:53PM +0800, Xu Yilun wrote:
> Export vfio dma-buf specific info by attaching vfio_dma_buf_data in
> struct dma_buf::priv. Provide a helper vfio_dma_buf_get_data() for
> importers to fetch this data. Importers identify a VFIO dma-buf by
> successfully getting this data.
>
> VFIO dma-buf supports disabling host access to these exported MMIO
> regions when the device is converted to private. Importers like KVM
> need to identify this type of dma-buf to decide if it is good to use.
> KVM only allows host-inaccessible MMIO regions to be mapped in private
> roots.
>
> Export the struct kvm * handle attached to the vfio device. This
> allows KVM to do another sanity check: MMIO should only be assigned to
> a CoCo VM if its owner device is already assigned to the same VM.
This doesn't seem right, it should be encapsulated into the standard
DMABUF API in some way.
Jason
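For orientation, the helper described in the quoted message usually boils down to something like this sketch; vfio_dma_buf_ops and struct vfio_dma_buf_data are taken from the description rather than from an upstream API:

	/* Sketch: return the exporter-private data stashed in dma_buf::priv,
	 * but only for buffers that were really exported by VFIO. */
	struct vfio_dma_buf_data *vfio_dma_buf_get_data(struct dma_buf *dmabuf)
	{
		if (dmabuf->ops != &vfio_dma_buf_ops)
			return ERR_PTR(-EINVAL);

		return dmabuf->priv;
	}

Jason's point above is that this per-subsystem exporter-matching idiom keeps reappearing and arguably belongs in the core dma-buf API instead.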
On Thu, May 29, 2025 at 10:41:15PM +0800, Xu Yilun wrote:
> > On AMD, the host can "revoke" at any time, at worst it'll see RMP
> > events from IOMMU. Thanks,
>
> > Is the RMP event first detected by the host or the guest? If by the host,
> > the host could fool the guest by just suppressing the event. The guest
> > would think the DMA write was successful when it was not, which may cause
> > a security issue.
Is that in scope of the threat model though? The host must not be able to
change DMAs or target them to different memory, but the host can stop
DMA and lose it, surely?
Host controls the PCI memory enable bit, doesn't it?
Jason
On Tue, May 20, 2025 at 5:27 AM Tomeu Vizoso <tomeu(a)tomeuvizoso.net> wrote:
>
> Add the bindings for the Neural Processing Unit IP from Rockchip.
>
> v2:
> - Adapt to new node structure (one node per core, each with its own
> IOMMU)
> - Several misc. fixes from Sebastian Reichel
>
> v3:
> - Split register block in its constituent subblocks, and only require
> the ones that the kernel would ever use (Nicolas Frattaroli)
> - Group supplies (Rob Herring)
> - Explain the way in which the top core is special (Rob Herring)
>
> v4:
> - Change required node name to npu@ (Rob Herring and Krzysztof Kozlowski)
> - Remove unneeded items: (Krzysztof Kozlowski)
> - Fix use of minItems/maxItems (Krzysztof Kozlowski)
> - Add reg-names to list of required properties (Krzysztof Kozlowski)
> - Fix example (Krzysztof Kozlowski)
>
> v5:
> - Rename file to rockchip,rk3588-rknn-core.yaml (Krzysztof Kozlowski)
> - Streamline compatible property (Krzysztof Kozlowski)
>
> Signed-off-by: Sebastian Reichel <sebastian.reichel(a)collabora.com>
> Signed-off-by: Tomeu Vizoso <tomeu(a)tomeuvizoso.net>
> ---
> .../bindings/npu/rockchip,rk3588-rknn-core.yaml | 147 +++++++++++++++++++++
> 1 file changed, 147 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/npu/rockchip,rk3588-rknn-core.yaml b/Documentation/devicetree/bindings/npu/rockchip,rk3588-rknn-core.yaml
> new file mode 100644
> index 0000000000000000000000000000000000000000..9eb426367afcbc03c387d43c4b8250cdd1b9ee86
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/npu/rockchip,rk3588-rknn-core.yaml
> @@ -0,0 +1,147 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/npu/rockchip,rk3588-rknn-core.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Neural Processing Unit IP from Rockchip
> +
> +maintainers:
> + - Tomeu Vizoso <tomeu(a)tomeuvizoso.net>
> +
> +description:
> + Rockchip IP for accelerating inference of neural networks, based on NVIDIA's
> + open source NVDLA IP.
> +
> + There is one node for each core in the NPU. In Rockchip's design there
> + will be one core that is special and needs to be powered on before any of the
> + other cores can be used. This special core is called the top core and should
> + have the compatible string that corresponds to top cores.
Is this really a distinction in the h/w? If you change which core is
the top one in the DT, does it still work?
> +
> +properties:
> + $nodename:
> + pattern: '^npu@[a-f0-9]+$'
> +
> + compatible:
> + enum:
> + - rockchip,rk3588-rknn-core-top
> + - rockchip,rk3588-rknn-core
> +
> + reg:
> + maxItems: 3
> +
> + reg-names:
> + items:
> + - const: pc
> + - const: cna
> + - const: core
> +
> + clocks:
> + minItems: 2
> + maxItems: 4
> +
> + clock-names:
> + items:
> + - const: aclk
> + - const: hclk
> + - const: npu
> + - const: pclk
> + minItems: 2
It is odd that the non-top cores only have bus clocks and no module
clock. But based on the clock names, I'm guessing the aclk/hclk are
not shared, but the npu and pclk are shared. Since you make the top
core probe first, then it will enable the shared clocks and the
non-top cores don't have to worry about them. If so, that is wrong as
it is letting the software design define the bindings.
Rob
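For orientation, an illustrative node for the quoted binding; the unit address, register offsets, clock specifiers and IOMMU phandle are all invented for the example:

	npu@fdab0000 {
		compatible = "rockchip,rk3588-rknn-core-top";
		reg = <0x0 0xfdab0000 0x0 0x1000>,
		      <0x0 0xfdab1000 0x0 0x1000>,
		      <0x0 0xfdab3000 0x0 0x1000>;
		reg-names = "pc", "cna", "core";
		/* top core: all four clocks; placeholder specifiers */
		clocks = <&cru 1>, <&cru 2>, <&scmi_clk 3>, <&cru 4>;
		clock-names = "aclk", "hclk", "npu", "pclk";
		iommus = <&rknn_mmu_top>;
	};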
Until CONFIG_DMABUF_SYSFS_STATS was added [1] it was only possible to
perform per-buffer accounting with debugfs which is not suitable for
production environments. Eventually we discovered the overhead with
per-buffer sysfs file creation/removal was significantly impacting
allocation and free times, and exacerbated kernfs lock contention. [2]
dma_buf_stats_setup() is responsible for 39% of single-page buffer
creation duration, or 74% of single-page dma_buf_export() duration when
stressing dmabuf allocations and frees.
I prototyped a change from per-buffer to per-exporter statistics with a
RCU protected list of exporter allocations that accommodates most (but
not all) of our use-cases and avoids almost all of the sysfs overhead.
While that adds less overhead than per-buffer sysfs, and less even than
the maintenance of the dmabuf debugfs_list, it's still *additional*
overhead on top of the debugfs_list and doesn't give us per-buffer info.
This series uses the existing dmabuf debugfs_list to implement a BPF
dmabuf iterator, which adds no overhead to buffer allocation/free and
provides per-buffer info. The list has been moved outside of
CONFIG_DEBUG_FS scope so that it is always populated. The BPF program
loaded by userspace that extracts per-buffer information gets to define
its own interface which avoids the lack of ABI stability with debugfs.
This will allow us to replace our use of CONFIG_DMABUF_SYSFS_STATS, and
the plan is to remove it from the kernel after the next longterm stable
release.
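For a feel of the interface, here is a sketch of an iterator program in the spirit of the selftests above. The section name and context struct follow the series' selftests and should be treated as illustrative, not as a stable ABI:

	// SPDX-License-Identifier: GPL-2.0
	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	char LICENSE[] SEC("license") = "GPL";

	/* Called once per dma-buf in the system; emits inode, size and
	 * exporter name through the seq_file read by userspace. */
	SEC("iter/dmabuf")
	int dmabuf_collector(struct bpf_iter__dmabuf *ctx)
	{
		struct seq_file *seq = ctx->meta->seq;
		struct dma_buf *dmabuf = ctx->dmabuf;

		if (!dmabuf)
			return 0; /* end of iteration */

		BPF_SEQ_PRINTF(seq, "%lu %llu %s\n",
			       dmabuf->file->f_inode->i_ino,
			       (__u64)dmabuf->size, dmabuf->exp_name);
		return 0;
	}

Userspace then attaches this to a BPF iterator link and simply read()s the resulting fd, which is how the program gets to define its own output format instead of relying on a fixed debugfs/sysfs ABI.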
[1] https://lore.kernel.org/linux-media/20201210044400.1080308-1-hridya@google.…
[2] https://lore.kernel.org/all/20220516171315.2400578-1-tjmercier@google.com
v1: https://lore.kernel.org/all/20250414225227.3642618-1-tjmercier@google.com
v1 -> v2:
Make the DMA buffer list independent of CONFIG_DEBUG_FS per Christian
König
Add CONFIG_DMA_SHARED_BUFFER check to kernel/bpf/Makefile per kernel
test robot
Use BTF_ID_LIST_SINGLE instead of BTF_ID_LIST_GLOBAL_SINGLE per Song Liu
Fixup comment style, mixing code/declarations, and use ASSERT_OK_FD in
selftest per Song Liu
Add BPF_ITER_RESCHED feature to bpf_dmabuf_reg_info per Alexei
Starovoitov
Add open-coded iterator and selftest per Alexei Starovoitov
Add a second test buffer from the system dmabuf heap to selftests
Use the BPF program we'll use in production for selftest per Alexei
Starovoitov
https://r.android.com/c/platform/system/bpfprogs/+/3616123/2/dmabufIter.c
https://r.android.com/c/platform/system/memory/libmeminfo/+/3614259/1/libdm…
v2: https://lore.kernel.org/all/20250504224149.1033867-1-tjmercier@google.com
v2 -> v3:
Rebase onto bpf-next/master
Move get_next_dmabuf() into drivers/dma-buf/dma-buf.c, along with the
new get_first_dmabuf(). This avoids having to expose the dmabuf list
and mutex to the rest of the kernel, and keeps the dmabuf mutex
operations near each other in the same file. (Christian König)
Add Christian's RB to dma-buf: Rename debugfs symbols
Drop RFC: dma-buf: Remove DMA-BUF statistics
v3: https://lore.kernel.org/all/20250507001036.2278781-1-tjmercier@google.com
v3 -> v4:
Fix selftest BPF program comment style (not kdoc) per Alexei Starovoitov
Fix dma-buf.c kdoc comment style per Alexei Starovoitov
Rename get_first_dmabuf / get_next_dmabuf to dma_buf_iter_begin /
dma_buf_iter_next per Christian König
Add Christian's RB to bpf: Add dmabuf iterator
v4: https://lore.kernel.org/all/20250508182025.2961555-1-tjmercier@google.com
v4 -> v5:
Add Christian's Acks to all patches
Add Song Liu's Acks
Move BTF_ID_LIST_SINGLE and DEFINE_BPF_ITER_FUNC closer to usage per
Song Liu
Fix open-coded iterator comment style per Song Liu
Move iterator termination check to its own subtest per Song Liu
Rework selftest buffer creation per Song Liu
Fix spacing in sanitize_string per BPF CI
v5: https://lore.kernel.org/all/20250512174036.266796-1-tjmercier@google.com
v5 -> v6:
Song Liu:
Init test buffer FDs to -1
Zero-init udmabuf_create for future proofing
Bail early for iterator fd/FILE creation failure
Dereference char ptr to check for NUL in sanitize_string()
Move map insertion from create_test_buffers() to test_dmabuf_iter()
Add ACK to selftests/bpf: Add test for open coded dmabuf_iter
v6: https://lore.kernel.org/all/20250513163601.812317-1-tjmercier@google.com
v6 -> v7:
Zero uninitialized name bytes following the end of name strings per
s390x BPF CI
Reorder sanitize_string bounds checks per Song Liu
Add Song's Ack to: selftests/bpf: Add test for dmabuf_iter
Rebase onto bpf-next/master per BPF CI
T.J. Mercier (5):
dma-buf: Rename debugfs symbols
bpf: Add dmabuf iterator
bpf: Add open coded dmabuf iterator
selftests/bpf: Add test for dmabuf_iter
selftests/bpf: Add test for open coded dmabuf_iter
drivers/dma-buf/dma-buf.c | 98 ++++--
include/linux/dma-buf.h | 4 +-
kernel/bpf/Makefile | 3 +
kernel/bpf/dmabuf_iter.c | 150 +++++++++
kernel/bpf/helpers.c | 5 +
.../testing/selftests/bpf/bpf_experimental.h | 5 +
tools/testing/selftests/bpf/config | 3 +
.../selftests/bpf/prog_tests/dmabuf_iter.c | 285 ++++++++++++++++++
.../testing/selftests/bpf/progs/dmabuf_iter.c | 101 +++++++
9 files changed, 632 insertions(+), 22 deletions(-)
create mode 100644 kernel/bpf/dmabuf_iter.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/dmabuf_iter.c
create mode 100644 tools/testing/selftests/bpf/progs/dmabuf_iter.c
base-commit: 6888a036cfc3d617d0843ecc9bd8504e91fb9de6
--
2.49.0.1151.ga128411c76-goog
6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Zimmermann <tzimmermann(a)suse.de>
commit 8260731ccad0451207b45844bb66eb161a209218 upstream.
Test struct drm_gem_object.import_attach to detect imported objects.
During object cleanup, the dma_buf field might be NULL. Testing it in
an object's free callback then incorrectly does a cleanup as for native
objects. Happens for calls to drm_mode_destroy_dumb_ioctl() that
clears the dma_buf field in drm_gem_object_exported_dma_buf_free().
v3:
- only test for import_attach (Boris)
v2:
- use import_attach.dmabuf instead of dma_buf (Christian)
Signed-off-by: Thomas Zimmermann <tzimmermann(a)suse.de>
Fixes: b57aa47d39e9 ("drm/gem: Test for imported GEM buffers with helper")
Reported-by: Andy Yan <andyshrk(a)163.com>
Closes: https://lore.kernel.org/dri-devel/38d09d34.4354.196379aa560.Coremail.andysh…
Tested-by: Andy Yan <andyshrk(a)163.com>
Cc: Thomas Zimmermann <tzimmermann(a)suse.de>
Cc: Anusha Srivatsa <asrivats(a)redhat.com>
Cc: Christian König <christian.koenig(a)amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Maxime Ripard <mripard(a)kernel.org>
Cc: David Airlie <airlied(a)gmail.com>
Cc: Simona Vetter <simona(a)ffwll.ch>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: dri-devel(a)lists.freedesktop.org
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Reviewed-by: Boris Brezillon <boris.brezillon(a)collabora.com>
Reviewed-by: Simona Vetter <simona.vetter(a)ffwll.ch>
Link: https://lore.kernel.org/r/20250416065820.26076-1-tzimmermann@suse.de
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/drm/drm_gem.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -580,8 +580,7 @@ static inline bool drm_gem_object_is_sha
*/
static inline bool drm_gem_is_imported(const struct drm_gem_object *obj)
{
- /* The dma-buf's priv field points to the original GEM object. */
- return obj->dma_buf && (obj->dma_buf->priv != obj);
+ return !!obj->import_attach;
}
#ifdef CONFIG_LOCKDEP
6.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Zimmermann <tzimmermann(a)suse.de>
[ Upstream commit 8260731ccad0451207b45844bb66eb161a209218 ]
Test struct drm_gem_object.import_attach to detect imported objects.
During object cleanup, the dma_buf field might be NULL. Testing it in
an object's free callback then incorrectly does a cleanup as for native
objects. Happens for calls to drm_mode_destroy_dumb_ioctl() that
clears the dma_buf field in drm_gem_object_exported_dma_buf_free().
v3:
- only test for import_attach (Boris)
v2:
- use import_attach.dmabuf instead of dma_buf (Christian)
Signed-off-by: Thomas Zimmermann <tzimmermann(a)suse.de>
Fixes: b57aa47d39e9 ("drm/gem: Test for imported GEM buffers with helper")
Reported-by: Andy Yan <andyshrk(a)163.com>
Closes: https://lore.kernel.org/dri-devel/38d09d34.4354.196379aa560.Coremail.andysh…
Tested-by: Andy Yan <andyshrk(a)163.com>
Cc: Thomas Zimmermann <tzimmermann(a)suse.de>
Cc: Anusha Srivatsa <asrivats(a)redhat.com>
Cc: Christian König <christian.koenig(a)amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Maxime Ripard <mripard(a)kernel.org>
Cc: David Airlie <airlied(a)gmail.com>
Cc: Simona Vetter <simona(a)ffwll.ch>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: dri-devel(a)lists.freedesktop.org
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Reviewed-by: Boris Brezillon <boris.brezillon(a)collabora.com>
Reviewed-by: Simona Vetter <simona.vetter(a)ffwll.ch>
Link: https://lore.kernel.org/r/20250416065820.26076-1-tzimmermann@suse.de
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
include/drm/drm_gem.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 2bf893eabb4b2..bcd54020d6ba5 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -585,8 +585,7 @@ static inline bool drm_gem_object_is_shared_for_memory_stats(struct drm_gem_obje
*/
static inline bool drm_gem_is_imported(const struct drm_gem_object *obj)
{
- /* The dma-buf's priv field points to the original GEM object. */
- return obj->dma_buf && (obj->dma_buf->priv != obj);
+ return !!obj->import_attach;
}
#ifdef CONFIG_LOCKDEP
--
2.39.5
This is a note to let you know that I've just added the patch titled
drm/gem: Internally test import_attach for imported objects
to the 6.6-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
drm-gem-internally-test-import_attach-for-imported-objects.patch
and it can be found in the queue-6.6 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 8260731ccad0451207b45844bb66eb161a209218 Mon Sep 17 00:00:00 2001
From: Thomas Zimmermann <tzimmermann(a)suse.de>
Date: Wed, 16 Apr 2025 08:57:45 +0200
Subject: drm/gem: Internally test import_attach for imported objects
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Thomas Zimmermann <tzimmermann(a)suse.de>
commit 8260731ccad0451207b45844bb66eb161a209218 upstream.
Test struct drm_gem_object.import_attach to detect imported objects.
During object cleanup, the dma_buf field might be NULL. Testing it in
an object's free callback then incorrectly does a cleanup as for native
objects. Happens for calls to drm_mode_destroy_dumb_ioctl() that
clears the dma_buf field in drm_gem_object_exported_dma_buf_free().
v3:
- only test for import_attach (Boris)
v2:
- use import_attach.dmabuf instead of dma_buf (Christian)
Signed-off-by: Thomas Zimmermann <tzimmermann(a)suse.de>
Fixes: b57aa47d39e9 ("drm/gem: Test for imported GEM buffers with helper")
Reported-by: Andy Yan <andyshrk(a)163.com>
Closes: https://lore.kernel.org/dri-devel/38d09d34.4354.196379aa560.Coremail.andysh…
Tested-by: Andy Yan <andyshrk(a)163.com>
Cc: Thomas Zimmermann <tzimmermann(a)suse.de>
Cc: Anusha Srivatsa <asrivats(a)redhat.com>
Cc: Christian König <christian.koenig(a)amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Maxime Ripard <mripard(a)kernel.org>
Cc: David Airlie <airlied(a)gmail.com>
Cc: Simona Vetter <simona(a)ffwll.ch>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: dri-devel(a)lists.freedesktop.org
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Reviewed-by: Boris Brezillon <boris.brezillon(a)collabora.com>
Reviewed-by: Simona Vetter <simona.vetter(a)ffwll.ch>
Link: https://lore.kernel.org/r/20250416065820.26076-1-tzimmermann@suse.de
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/drm/drm_gem.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -567,8 +567,7 @@ static inline bool drm_gem_object_is_sha
*/
static inline bool drm_gem_is_imported(const struct drm_gem_object *obj)
{
- /* The dma-buf's priv field points to the original GEM object. */
- return obj->dma_buf && (obj->dma_buf->priv != obj);
+ return !!obj->import_attach;
}
#ifdef CONFIG_LOCKDEP
Patches currently in stable-queue which might be from tzimmermann(a)suse.de are
queue-6.6/drm-gem-internally-test-import_attach-for-imported-objects.patch
queue-6.6/drm-ast-find-vbios-mode-from-regular-display-size.patch
queue-6.6/drm-gem-test-for-imported-gem-buffers-with-helper.patch
queue-6.6/drm-atomic-clarify-the-rules-around-drm_atomic_state.patch
This is a note to let you know that I've just added the patch titled
drm/gem: Internally test import_attach for imported objects
to the 6.12-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
drm-gem-internally-test-import_attach-for-imported-objects.patch
and it can be found in the queue-6.12 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 8260731ccad0451207b45844bb66eb161a209218 Mon Sep 17 00:00:00 2001
From: Thomas Zimmermann <tzimmermann(a)suse.de>
Date: Wed, 16 Apr 2025 08:57:45 +0200
Subject: drm/gem: Internally test import_attach for imported objects
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Thomas Zimmermann <tzimmermann(a)suse.de>
commit 8260731ccad0451207b45844bb66eb161a209218 upstream.
Test struct drm_gem_object.import_attach to detect imported objects.
During object cleanup, the dma_buf field might be NULL. Testing it in
an object's free callback then incorrectly does a cleanup as for native
objects. Happens for calls to drm_mode_destroy_dumb_ioctl() that
clears the dma_buf field in drm_gem_object_exported_dma_buf_free().
v3:
- only test for import_attach (Boris)
v2:
- use import_attach.dmabuf instead of dma_buf (Christian)
Signed-off-by: Thomas Zimmermann <tzimmermann(a)suse.de>
Fixes: b57aa47d39e9 ("drm/gem: Test for imported GEM buffers with helper")
Reported-by: Andy Yan <andyshrk(a)163.com>
Closes: https://lore.kernel.org/dri-devel/38d09d34.4354.196379aa560.Coremail.andysh…
Tested-by: Andy Yan <andyshrk(a)163.com>
Cc: Thomas Zimmermann <tzimmermann(a)suse.de>
Cc: Anusha Srivatsa <asrivats(a)redhat.com>
Cc: Christian König <christian.koenig(a)amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Maxime Ripard <mripard(a)kernel.org>
Cc: David Airlie <airlied(a)gmail.com>
Cc: Simona Vetter <simona(a)ffwll.ch>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: dri-devel(a)lists.freedesktop.org
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Reviewed-by: Boris Brezillon <boris.brezillon(a)collabora.com>
Reviewed-by: Simona Vetter <simona.vetter(a)ffwll.ch>
Link: https://lore.kernel.org/r/20250416065820.26076-1-tzimmermann@suse.de
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/drm/drm_gem.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -580,8 +580,7 @@ static inline bool drm_gem_object_is_sha
*/
static inline bool drm_gem_is_imported(const struct drm_gem_object *obj)
{
- /* The dma-buf's priv field points to the original GEM object. */
- return obj->dma_buf && (obj->dma_buf->priv != obj);
+ return !!obj->import_attach;
}
#ifdef CONFIG_LOCKDEP
Patches currently in stable-queue which might be from tzimmermann(a)suse.de are
queue-6.12/drm-gem-internally-test-import_attach-for-imported-objects.patch
queue-6.12/drm-ast-find-vbios-mode-from-regular-display-size.patch
queue-6.12/drm-gem-test-for-imported-gem-buffers-with-helper.patch
queue-6.12/drm-atomic-clarify-the-rules-around-drm_atomic_state.patch
On 5/27/25 16:35, wangtao wrote:
>> -----Original Message-----
>> From: Christian König <christian.koenig(a)amd.com>
>> Sent: Thursday, May 22, 2025 7:58 PM
>> To: wangtao <tao.wangtao(a)honor.com>; T.J. Mercier
>> <tjmercier(a)google.com>
>> Cc: sumit.semwal(a)linaro.org; benjamin.gaignard(a)collabora.com;
>> Brian.Starkey(a)arm.com; jstultz(a)google.com; linux-media(a)vger.kernel.org;
>> dri-devel(a)lists.freedesktop.org; linaro-mm-sig(a)lists.linaro.org; linux-
>> kernel(a)vger.kernel.org; wangbintian(BintianWang)
>> <bintian.wang(a)honor.com>; yipengxiang <yipengxiang(a)honor.com>; liulu
>> 00013167 <liulu.liu(a)honor.com>; hanfeng 00012985 <feng.han(a)honor.com>;
>> amir73il(a)gmail.com
>> Subject: Re: [PATCH 2/2] dmabuf/heaps: implement
>> DMA_BUF_IOCTL_RW_FILE for system_heap
>>
>> On 5/22/25 10:02, wangtao wrote:
>>>> -----Original Message-----
>>>> From: Christian König <christian.koenig(a)amd.com>
>>>> Sent: Wednesday, May 21, 2025 7:57 PM
>>>> To: wangtao <tao.wangtao(a)honor.com>; T.J. Mercier
>>>> <tjmercier(a)google.com>
>>>> Cc: sumit.semwal(a)linaro.org; benjamin.gaignard(a)collabora.com;
>>>> Brian.Starkey(a)arm.com; jstultz(a)google.com;
>>>> linux-media(a)vger.kernel.org; dri-devel(a)lists.freedesktop.org;
>>>> linaro-mm-sig(a)lists.linaro.org; linux- kernel(a)vger.kernel.org;
>>>> wangbintian(BintianWang) <bintian.wang(a)honor.com>; yipengxiang
>>>> <yipengxiang(a)honor.com>; liulu
>>>> 00013167 <liulu.liu(a)honor.com>; hanfeng 00012985
>>>> <feng.han(a)honor.com>; amir73il(a)gmail.com
>>>> Subject: Re: [PATCH 2/2] dmabuf/heaps: implement
>>>> DMA_BUF_IOCTL_RW_FILE for system_heap
>>>>
>>>> On 5/21/25 12:25, wangtao wrote:
>>>>> [wangtao] I previously explained that
>>>>> read/sendfile/splice/copy_file_range
>>>>> syscalls can't achieve dmabuf direct IO zero-copy.
>>>>
>>>> And why can't you work on improving those syscalls instead of
>>>> creating a new IOCTL?
>>>>
>>> [wangtao] As I mentioned in previous emails, these syscalls cannot
>>> achieve dmabuf zero-copy due to technical constraints.
>>
>> Yeah, and why can't you work on removing those technical constraints?
>>
>> What is blocking you from improving the sendfile system call or proposing a
>> patch to remove the copy_file_range restrictions?
> [wangtao] Since sendfile/splice can't eliminate CPU copies, I skipped cross-FS checks
> in copy_file_range when copying memory/disk files.
It will probably be a longer discussion, but I think that having the FS people take a look as well is clearly mandatory.
If Linus or anybody else of those maintainers then says that this isn't going to fly either, we can still look into alternatives.
Thanks,
Christian.
> Will send new patches after completing shmem/udmabuf callback.
> Thank you for your attention to this issue.
>
> UFS 4.0 device @4GB/s, Arm64 CPU @1GHz:
> | Metrics |Creat(us)|Close(us)| I/O(us) |I/O(MB/s)| Vs.%
> |--------------------------|---------|---------|---------|---------|-------
> | 0) dmabuf buffer read | 46898 | 4804 | 1173661 | 914 | 100%
> | 1) udmabuf buffer read | 593844 | 337111 | 2144681 | 500 | 54%
> | 2) memfd buffer read | 1029 | 305322 | 2215859 | 484 | 52%
> | 3) memfd direct read | 562 | 295239 | 1019913 | 1052 | 115%
> | 4) memfd buffer sendfile | 785 | 299026 | 1431304 | 750 | 82%
> | 5) memfd direct sendfile | 718 | 296307 | 2622270 | 409 | 44%
> | 6) memfd buffer splice | 981 | 299694 | 1573710 | 682 | 74%
> | 7) memfd direct splice | 890 | 302509 | 1269757 | 845 | 92%
> | 8) memfd buffer c_f_r | 33 | 4432 | N/A | N/A | N/A
> | 9) memfd direct c_f_r | 27 | 4421 | N/A | N/A | N/A
> |10) memfd buffer sendfile | 595797 | 423105 | 1242494 | 864 | 94%
> |11) memfd direct sendfile | 593758 | 357921 | 2344001 | 458 | 50%
> |12) memfd buffer splice | 623221 | 356212 | 1117507 | 960 | 105%
> |13) memfd direct splice | 587059 | 345484 | 857103 | 1252 | 136%
> |14) udmabuf buffer c_f_r | 22725 | 10248 | N/A | N/A | N/A
> |15) udmabuf direct c_f_r | 20120 | 9952 | N/A | N/A | N/A
> |16) dmabuf buffer c_f_r | 46517 | 4708 | 857587 | 1252 | 136%
> |17) dmabuf direct c_f_r | 47339 | 4661 | 284023 | 3780 | 413%
>
>>
>> Regards,
>> Christian.
>>
>>> Could you
>>> specify the technical points, code, or principles that need
>>> optimization?
>>>
>>> Let me explain again why these syscalls can't work:
>>> 1. read() syscall
>>> - dmabuf fops lacks read callback implementation. Even if implemented,
>>> file_fd info cannot be transferred
>>> - read(file_fd, dmabuf_ptr, len) with remap_pfn_range-based mmap
>>> cannot access dmabuf_buf pages, forcing buffer-mode reads
>>>
>>> 2. sendfile() syscall
>>> - Requires CPU copy from page cache to memory file(tmpfs/shmem):
>>> [DISK] --DMA--> [page cache] --CPU copy--> [MEMORY file]
>>> - CPU overhead (both buffer/direct modes involve copies):
>>> 55.08% do_sendfile
>>> |- 55.08% do_splice_direct
>>> |-|- 55.08% splice_direct_to_actor
>>> |-|-|- 22.51% copy_splice_read
>>> |-|-|-|- 16.57% f2fs_file_read_iter
>>> |-|-|-|-|- 15.12% __iomap_dio_rw
>>> |-|-|- 32.33% direct_splice_actor
>>> |-|-|-|- 32.11% iter_file_splice_write
>>> |-|-|-|-|- 28.42% vfs_iter_write
>>> |-|-|-|-|-|- 28.42% do_iter_write
>>> |-|-|-|-|-|-|- 28.39% shmem_file_write_iter
>>> |-|-|-|-|-|-|-|- 24.62% generic_perform_write
>>> |-|-|-|-|-|-|-|-|- 18.75% __pi_memmove
>>>
>>> 3. splice() requires one end to be a pipe, incompatible with regular
>>> files or dmabuf.
>>>
>>> 4. copy_file_range()
>>> - Blocked by cross-FS restrictions (Amir's commit 868f9f2f8e00); a
>>> small userspace demo of this follows after this list
>>> - Even without this restriction, implementing the copy_file_range
>>> callback in dmabuf fops would only allow dmabuf reads from regular
>>> files. This is because copy_file_range relies on
>>> file_out->f_op->copy_file_range, which cannot support dmabuf writes
>>> to regular files.
>>>
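>>> For illustration, a minimal userspace sketch of the first blocker in
>>> point 4 (path and size are made up): on kernels with the cross-FS
>>> restriction in place, copy_file_range() between a disk file and a
>>> tmpfs-backed memfd is expected to fail with EXDEV before reaching any
>>> driver callback.
>>>
>>> #define _GNU_SOURCE
>>> #include <errno.h>
>>> #include <fcntl.h>
>>> #include <stdio.h>
>>> #include <string.h>
>>> #include <sys/mman.h>
>>> #include <sys/types.h>
>>> #include <unistd.h>
>>>
>>> int main(void)
>>> {
>>> 	/* Disk-backed source file (e.g. on ext4/f2fs). */
>>> 	int in = open("/data/test.bin", O_RDONLY);
>>> 	/* tmpfs-backed destination, standing in for the memory file. */
>>> 	int out = memfd_create("dst", 0);
>>> 	loff_t in_off = 0, out_off = 0;
>>>
>>> 	if (in < 0 || out < 0)
>>> 		return 1;
>>>
>>> 	if (copy_file_range(in, &in_off, out, &out_off, 1 << 20, 0) < 0)
>>> 		fprintf(stderr, "copy_file_range: %s\n", strerror(errno));
>>> 	return 0;
>>> }
>>>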
>>> Test results confirm these limitations:
>>> T.J. Mercier's 1G from ext4 on 6.12.20 | read/sendfile (ms) w/ 3 > drop_caches
>>> ------------------------|-------------------
>>> udmabuf buffer read | 1210
>>> udmabuf direct read | 671
>>> udmabuf buffer sendfile | 1096
>>> udmabuf direct sendfile | 2340
>>>
>>> My 3GHz CPU tests (cache cleared):
>>> Method | alloc | read | vs. (%)
>>> -----------------------------------------------
>>> udmabuf buffer read | 135 | 546 | 180%
>>> udmabuf direct read | 159 | 300 | 99%
>>> udmabuf buffer sendfile | 134 | 303 | 100%
>>> udmabuf direct sendfile | 141 | 912 | 301%
>>> dmabuf buffer read | 22 | 362 | 119%
>>> my patch direct read | 29 | 265 | 87%
>>>
>>> My 1GHz CPU tests (cache cleared):
>>> Method | alloc | read | vs. (%)
>>> -----------------------------------------------
>>> udmabuf buffer read | 552 | 2067 | 198%
>>> udmabuf direct read | 540 | 627 | 60%
>>> udmabuf buffer sendfile | 497 | 1045 | 100%
>>> udmabuf direct sendfile | 527 | 2330 | 223%
>>> dmabuf buffer read | 40 | 1111 | 106%
>>> patch direct read | 44 | 310 | 30%
>>>
>>> Test observations align with expectations:
>>> 1. dmabuf buffer read requires slow CPU copies
>>> 2. udmabuf direct read achieves zero-copy but has page retrieval
>>> latency from vaddr
>>> 3. udmabuf buffer sendfile suffers CPU copy overhead
>>> 4. udmabuf direct sendfile combines CPU copies with frequent DMA
>>> operations due to small pipe buffers
>>> 5. dmabuf buffer read also requires CPU copies
>>> 6. My direct read patch enables zero-copy with better performance
>>> on low-power CPUs
>>> 7. udmabuf creation time remains problematic (as you’ve noted).
>>>
>>>>> My focus is enabling dmabuf direct I/O for [regular file] <--DMA-->
>>>>> [dmabuf] zero-copy.
>>>>
>>>> Yeah, and that focus is wrong. You need to work on a general solution
>>>> to the issue, not one specific to your problem.
>>>>
>>>>> Any API achieving this would work. Are there other uAPIs you think
>>>>> could help? Could you recommend experts who might offer suggestions?
>>>>
>>>> Well, once more: either work on sendfile or copy_file_range or
>>>> eventually splice to make them do what you want.
>>>>
>>>> When that is done we can discuss with the VFS people if that approach
>>>> is feasible.
>>>>
>>>> But just bypassing the VFS review by implementing a DMA-buf specific
>>>> IOCTL is a NO-GO. That is clearly not something you can do in any way.
>>> [wangtao] The issue is that only dmabuf lacks Direct I/O zero-copy
>>> support. Tmpfs/shmem already work with Direct I/O zero-copy. As
>>> explained, existing syscalls or generic methods can't enable dmabuf
>>> direct I/O zero-copy, which is why I propose adding an IOCTL command.
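>>>
>>> Purely to make the shape concrete, here is a hypothetical sketch of
>>> what such a uAPI could look like (struct name, fields, and ioctl
>>> number are all illustrative, not the actual patch):
>>>
>>> #include <linux/ioctl.h>
>>> #include <linux/types.h>
>>>
>>> /* Hypothetical sketch only -- see the patch under review for the
>>>  * real DMA_BUF_IOCTL_RW_FILE proposal. */
>>> struct dma_buf_rw_file {
>>> 	__u32 fd;          /* regular file opened with O_DIRECT */
>>> 	__u32 flags;       /* direction: read into / write from dmabuf */
>>> 	__u64 file_offset; /* offset within the regular file */
>>> 	__u64 buf_offset;  /* offset within the dmabuf */
>>> 	__u64 len;         /* bytes to transfer */
>>> };
>>>
>>> #define DMA_BUF_IOCTL_RW_FILE \
>>> 	_IOWR(DMA_BUF_BASE, 4, struct dma_buf_rw_file)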
>>>
>>> I respect your perspective. Could you clarify specific technical
>>> aspects, code requirements, or implementation principles for modifying
>>> sendfile() or copy_file_range()? This would help advance our discussion.
>>>
>>> Thank you for engaging in this dialogue.
>>>
>>>>
>>>> Regards,
>>>> Christian.
>
On Mon, May 26, 2025 at 10:09 AM Sumit Garg <sumit.garg(a)kernel.org> wrote:
>
> On Tue, May 20, 2025 at 05:16:51PM +0200, Jens Wiklander wrote:
> > Add support in the OP-TEE backend driver for dynamic protected memory
> > allocation with FF-A.
> >
> > The protected memory pools for dynamically allocated protected memory
> > are instantiated when requested by user-space. This instantiation can
> > fail if OP-TEE doesn't support the requested use-case of protected
> > memory.
> >
> > Restricted memory pools based on a static carveout or dynamic allocation
> > can coexist for different use-cases. We use only dynamic allocation with
> > FF-A.
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander(a)linaro.org>
> > ---
[...]
> > +static int optee_ffa_protmem_pool_init(struct optee *optee, u32 sec_caps)
> > +{
> > + enum tee_dma_heap_id id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY;
> > + struct tee_protmem_pool *pool;
> > + int rc = 0;
> > +
> > + if (sec_caps & OPTEE_FFA_SEC_CAP_PROTMEM) {
> > + pool = optee_protmem_alloc_dyn_pool(optee, id);
> > + if (IS_ERR(pool))
> > + return PTR_ERR(pool);
> > +
> > + rc = tee_device_register_dma_heap(optee->teedev, id, pool);
> > + if (rc)
> > + pool->ops->destroy_pool(pool);
> > + }
> > +
> > + return rc;
> > +}
> > +
> > static int optee_ffa_probe(struct ffa_device *ffa_dev)
> > {
> > const struct ffa_notifier_ops *notif_ops;
> > @@ -918,7 +1057,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
> > optee);
> > if (IS_ERR(teedev)) {
> > rc = PTR_ERR(teedev);
> > - goto err_free_pool;
> > + goto err_free_shm_pool;
> > }
> > optee->teedev = teedev;
> >
> > @@ -965,6 +1104,9 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
> > rc);
> > }
> >
> > + if (optee_ffa_protmem_pool_init(optee, sec_caps))
>
> Let's add a Kconfig check for DMABUF heaps support here as well.
I prefer complaining in the log if there's something wrong with the
configuration.
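For reference, a sketch of how that guard could look while keeping the
log message (assuming CONFIG_DMABUF_HEAPS is the right symbol to key off):

static int optee_ffa_protmem_pool_init(struct optee *optee, u32 sec_caps)
{
	enum tee_dma_heap_id id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY;
	struct tee_protmem_pool *pool;
	int rc = 0;

	/* Nothing to register when DMABUF heaps are compiled out. */
	if (!IS_ENABLED(CONFIG_DMABUF_HEAPS))
		return -EOPNOTSUPP;

	if (sec_caps & OPTEE_FFA_SEC_CAP_PROTMEM) {
		pool = optee_protmem_alloc_dyn_pool(optee, id);
		if (IS_ERR(pool))
			return PTR_ERR(pool);

		rc = tee_device_register_dma_heap(optee->teedev, id, pool);
		if (rc)
			pool->ops->destroy_pool(pool);
	}

	return rc;
}

That way the probe path still prints "Protected memory service not
available" when the config is off.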
>
> > + pr_info("Protected memory service not available\n");
> > +
[...]
> > +static int init_dyn_protmem(struct optee_protmem_dyn_pool *rp)
> > +{
> > + int rc;
> > +
> > + rp->protmem = tee_shm_alloc_dma_mem(rp->optee->ctx, rp->page_count);
> > + if (IS_ERR(rp->protmem)) {
> > + rc = PTR_ERR(rp->protmem);
> > + goto err_null_protmem;
> > + }
> > +
> > + /*
> > + * TODO unmap the memory range since the physical memory will
> > + * become inaccessible after the lend_protmem() call.
>
> Let's elaborate this comment to also say that the unmap isn't strictly
> needed here when a platform has a hypervisor in EL2 that can perform
> the unmapping as part of lending the memory to the secure world, as
> that avoids any cache prefetch of the lent memory.
>
> With that, I can live with this as a TODO in the kernel which can be
> implemented once we see platforms requiring this change.
OK, I'll add something.
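Perhaps along these lines (wording is only a suggestion):

	/*
	 * TODO: unmap the memory range since the physical memory will
	 * become inaccessible after the lend_protmem() call.
	 *
	 * The unmap isn't strictly needed on platforms with a
	 * hypervisor at EL2 that unmaps the range as part of lending
	 * the memory to the secure world, since that already avoids
	 * cache prefetches of the lent memory. Implement it once a
	 * platform that needs it shows up.
	 */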
[...]
> > +
> > +struct tee_protmem_pool *optee_protmem_alloc_dyn_pool(struct optee *optee,
> > + enum tee_dma_heap_id id)
> > +{
> > + struct optee_protmem_dyn_pool *rp;
> > + u32 use_case = id;
>
> Here we can get rid of the redundant local variable with s/id/use_case/.
OK, I'll update.
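Presumably ending up as (sketch):

struct tee_protmem_pool *optee_protmem_alloc_dyn_pool(struct optee *optee,
						      u32 use_case);

with the heap id passed directly as the use case.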
Cheers,
Jens