On Thu, Nov 13, 2025 at 11:37:12AM -0700, Alex Williamson wrote:
> > The latest series for interconnect negotiation to exchange a phys_addr is:
> > https://lore.kernel.org/r/20251027044712.1676175-1-vivek.kasireddy@intel.com
>
> If this is in development, why are we pursuing a vfio specific
> temporary "private interconnect" here rather than building on that
> work? What are the gaps/barriers/timeline?
I broadly don't expect to see an agreement on the above for probably
half a year, and I see no reason to hold this up for it. Many people
are asking for this P2P support to be completed in iommufd.
Further, I think the above will be easier to work on when we have this
merged as an example that can consume it in a different way. Right now
it is too theoretical, IMHO.
> I don't see any uAPI changes here, is there any visibility to userspace
> whether IOMMUFD supports this feature or is it simply a try and fail
> approach?
So far we haven't done anything for discoverability beyond try and fail.
I'd be happy if the userspace folks doing libvirt or whatever came up
with some requests/patches for discoverability. It is not just this
feature, but also things like nesting, IOMMU driver support and so on.
> The latter makes it difficult for management tools to select
> whether to choose a VM configuration based on IOMMUFD or legacy vfio if
> p2p DMA is a requirement. Thanks,
In a lot of cases it isn't really a choice, as you need iommufd to do an
accelerated vIOMMU.
But yes, it would be nice to eventually automatically use iommufd
whenever possible.
Thanks,
Jason
On 11/13/25 17:23, Philipp Stanner wrote:
> On Thu, 2025-11-13 at 15:51 +0100, Christian König wrote:
>> Using the inline lock is now the recommended way for dma_fence implementations.
>>
>> So use this approach for the scheduler fences as well, just in case
>> anybody uses this as a blueprint for their own implementation.
>>
>> Also saves the roughly 4 bytes needed for the external spinlock.
>
> So you changed your mind and want to keep this patch?
Actually it was you who changed my mind.
If we want to document that using the internal lock is now the norm and that all implementations should switch to it where possible, then we should also push for using it in the common driver code.
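For illustration, here is a minimal sketch of what the switch means for a driver fence. The inline-lock init helper named below is only a placeholder; the actual helper added by the series may be named differently:

/*
 * Sketch only: contrast between the classic external fence lock and the
 * inline lock.  dma_fence_init_inline() is a hypothetical name.
 */
#include <linux/dma-fence.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct my_fence {
	struct dma_fence base;
	spinlock_t lock;	/* the external lock the inline variant makes unnecessary */
};

static const char *my_fence_driver_name(struct dma_fence *f)
{
	return "my-driver";
}

static const char *my_fence_timeline_name(struct dma_fence *f)
{
	return "my-timeline";
}

static const struct dma_fence_ops my_fence_ops = {
	.get_driver_name = my_fence_driver_name,
	.get_timeline_name = my_fence_timeline_name,
};

static struct dma_fence *my_fence_create(u64 context, u64 seqno)
{
	struct my_fence *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;

	/* Classic pattern: the implementation supplies the spinlock. */
	spin_lock_init(&f->lock);
	dma_fence_init(&f->base, &my_fence_ops, &f->lock, context, seqno);

	/*
	 * With the inline lock the spinlock member and the two lines above
	 * would go away, e.g. (hypothetical helper name):
	 *
	 *	dma_fence_init_inline(&f->base, &my_fence_ops, context, seqno);
	 */
	return &f->base;
}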
Regards,
Christian.
>
> P.
>
On 11/13/25 17:20, Philipp Stanner wrote:
> On Thu, 2025-11-13 at 15:51 +0100, Christian König wrote:
>> Hi everyone,
>>
>> dma_fences have always lived under the tyranny dictated by the module
>> lifetime of their issuer, leading to crashes should anybody still hold
>> a reference to a dma_fence when the module of the issuer is unloaded.
>>
>> The basic problem is that when buffers are shared between drivers,
>> dma_fence objects can leak into external drivers and stay there even
>> after they are signaled. The dma_resv object, for example, only lazily
>> releases dma_fences.
>>
>> So what happens is that when the module which originally created the
>> dma_fence is unloaded, the dma_fence_ops function table becomes
>> unavailable as well, and any attempt to release the fence crashes the
>> system.
>>
>> Previously, various approaches have been discussed, including changing
>> the locking semantics of the dma_fence callbacks (by me) as well as
>> using the drm scheduler as an intermediate layer (by Sima) to disconnect
>> dma_fences from their actual users, but none of them actually solves all
>> the problems.
>>
>> Tvrtko did some really nice prerequisite work by protecting the strings
>> returned by the dma_fence_ops with RCU. This way dma_fence creators were
>> able to just wait for an RCU grace period after fence signaling before
>> it was safe to free those data structures.
>>
>> Now this patch set goes a step further and protects the whole
>> dma_fence_ops structure with RCU, so that after the fence signals the
>> pointer to the dma_fence_ops is set to NULL when there is neither a
>> wait nor a release callback given. All functionality which uses the
>> dma_fence_ops reference is put inside an RCU critical section, except
>> for the deprecated issuer specific wait and of course the optional
>> release callback.
>>
>> In addition to the RCU changes, the lock protecting the dma_fence state
>> previously had to be allocated externally. This set now makes that
>> external lock optional and allows dma_fences to use an inline lock and
>> be self-contained.
>>
>> This patch set addresses all previous code review comments and is based
>> on drm-tip; it includes my changes for amdgpu as well as Mathew's
>> patches for XE.
>>
>> Going to push the core DMA-buf changes to drm-misc-next as soon as I get
>> the appropriate rb. The driver specific changes can go upstream through
>> the driver channels as necessary.
>
> No changelog? :(
On the cover letter? For dma-buf patches we usually do that on the individual patches.
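For anyone skimming the cover letter above, here is a minimal sketch of the read side pattern it describes. This is illustrative only and not the actual dma-buf core code; it assumes fence->ops becomes an RCU protected pointer which is set to NULL once the fence is signaled and no wait/release callback is required:

#include <linux/dma-fence.h>
#include <linux/printk.h>
#include <linux/rcupdate.h>

static void example_print_fence_issuer(struct dma_fence *fence)
{
	const struct dma_fence_ops *ops;

	/* Reader: only touch the ops table inside an RCU critical section. */
	rcu_read_lock();
	ops = rcu_dereference(fence->ops);
	if (ops)
		pr_info("fence issued by %s\n", ops->get_driver_name(fence));
	else
		pr_info("fence signaled, ops already released\n");
	rcu_read_unlock();

	/*
	 * Conceptually, the writer side clears the ops pointer after
	 * signaling and the issuer waits for an RCU grace period, e.g.
	 * via synchronize_rcu(), before its module may be unloaded.
	 */
}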
Christian.
>
> P.
>
>>
>> Please review and comment,
>> Christian.
>>
>>
>