On 6/9/25 11:32, wangtao wrote:
>> -----Original Message-----
>> From: Christoph Hellwig <hch(a)infradead.org>
>> Sent: Monday, June 9, 2025 12:35 PM
>> To: Christian König <christian.koenig(a)amd.com>
>> Cc: wangtao <tao.wangtao(a)honor.com>; Christoph Hellwig
>> <hch(a)infradead.org>; sumit.semwal(a)linaro.org; kraxel(a)redhat.com;
>> vivek.kasireddy(a)intel.com; viro(a)zeniv.linux.org.uk; brauner(a)kernel.org;
>> hughd(a)google.com; akpm(a)linux-foundation.org; amir73il(a)gmail.com;
>> benjamin.gaignard(a)collabora.com; Brian.Starkey(a)arm.com;
>> jstultz(a)google.com; tjmercier(a)google.com; jack(a)suse.cz;
>> baolin.wang(a)linux.alibaba.com; linux-media(a)vger.kernel.org; dri-
>> devel(a)lists.freedesktop.org; linaro-mm-sig(a)lists.linaro.org; linux-
>> kernel(a)vger.kernel.org; linux-fsdevel(a)vger.kernel.org; linux-
>> mm(a)kvack.org; wangbintian(BintianWang) <bintian.wang(a)honor.com>;
>> yipengxiang <yipengxiang(a)honor.com>; liulu 00013167
>> <liulu.liu(a)honor.com>; hanfeng 00012985 <feng.han(a)honor.com>
>> Subject: Re: [PATCH v4 0/4] Implement dmabuf direct I/O via
>> copy_file_range
>>
>> On Fri, Jun 06, 2025 at 01:20:48PM +0200, Christian König wrote:
>>>> dmabuf acts as a driver and shouldn't be handled by VFS, so I made
>>>> dmabuf implement copy_file_range callbacks to support direct I/O
>>>> zero-copy. I'm open to both approaches. What's the preference of VFS
>>>> experts?
>>>
>>> That would probably be illegal. Using the sg_table in the DMA-buf
>>> implementation turned out to be a mistake.
>>
>> Two things here that should not be directly conflated. Using the sg_table was
>> a huge mistake, and we should try to move dmabuf to switch that to a pure
> I'm a bit confused: don't dmabuf importers need to traverse sg_table to
> access folios or dma_addr/len? Do you mean restricting sg_table access
> (e.g., only via iov_iter) or proposing alternative approaches?
No, accessing the pages/folios inside the sg_table of a DMA-buf is strictly forbidden.
We have removed most use cases of that over the years and push back on generating new ones.
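To illustrate, an importer is expected to consume only the DMA addresses of a mapped attachment, never the backing pages (a minimal sketch; program_device_dma() is a hypothetical driver hook):

	struct sg_table *sgt;
	struct scatterlist *sg;
	int i;

	sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);

	for_each_sgtable_dma_sg(sgt, sg, i)
		program_device_dma(sg_dma_address(sg), sg_dma_len(sg));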
>
>> dma_addr_t/len array now that the new DMA API supporting that has been
>> merged. Is there any chance the dma-buf maintainers could start to kick this
>> off? I'm of course happy to assist.
Work on that is already underway for some time.
Most GPU drivers already do the sg_table -> DMA array conversion; I need to push on the remaining ones to clean this up.
But there are also tons of other users of dma_buf_map_attachment() which need to be converted.
>> But that notwithstanding, dma-buf is THE buffer sharing mechanism in the
>> kernel, and we should promote it instead of reinventing it badly.
>> And there is a use case for having a fully DMA mapped buffer in the block
>> layer and I/O path, especially on systems with an IOMMU.
>> So having an iov_iter backed by a dma-buf would be extremely helpful.
>> That's mostly lib/iov_iter.c code, not VFS, though.
> Are you suggesting adding an ITER_DMABUF type to iov_iter, or
> implementing dmabuf-to-iov_bvec conversion within iov_iter?
That would be rather nice to have, yeah.
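For comparison, what can be built today wraps a page-backed buffer in an ITER_BVEC (a sketch, assuming the bvec array was filled from the udmabuf folios); a dedicated ITER_DMABUF would avoid needing struct page at all:

	struct iov_iter iter;

	iov_iter_bvec(&iter, ITER_SOURCE, bvec, nr_segs, total_len);
	/* iter can now be fed to the in-kernel direct I/O paths */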
>
>>
>>> The question Christoph raised was rather why is your CPU so slow that
>>> walking the page tables has a significant overhead compared to the
>>> actual I/O?
>>
>> Yes, that's really puzzling and should be addressed first.
> With a fast CPU (e.g., 3 GHz), the GUP (get_user_pages) overhead is
> relatively low, as the 3 GHz test results below show.
Even on a low-end CPU, walking the page tables and grabbing references shouldn't be that much of an overhead.
There must be some reason why you see so much CPU overhead, e.g. compound pages being broken up or similar, which should not happen in the first place.
Regards,
Christian.
> | 32x32MB Read 1024MB |Creat-ms|Close-ms| I/O-ms|I/O-MB/s| I/O%
> |---------------------------|--------|--------|--------|--------|-----
> | 1) memfd direct R/W| 1 | 118 | 312 | 3448 | 100%
> | 2) u+memfd direct R/W| 196 | 123 | 295 | 3651 | 105%
> | 3) u+memfd direct sendfile| 175 | 102 | 976 | 1100 | 31%
> | 4) u+memfd direct splice| 173 | 103 | 443 | 2428 | 70%
> | 5) udmabuf buffer R/W| 183 | 100 | 453 | 2375 | 68%
> | 6) dmabuf buffer R/W| 34 | 4 | 427 | 2519 | 73%
> | 7) udmabuf direct c_f_r| 200 | 102 | 278 | 3874 | 112%
> | 8) dmabuf direct c_f_r| 36 | 5 | 269 | 4002 | 116%
>
> With a slower CPU (e.g., 1 GHz), the GUP overhead becomes more
> significant, as the 1 GHz test results below show.
> | 32x32MB Read 1024MB |Creat-ms|Close-ms| I/O-ms|I/O-MB/s| I/O%
> |---------------------------|--------|--------|--------|--------|-----
> | 1) memfd direct R/W| 2 | 393 | 969 | 1109 | 100%
> | 2) u+memfd direct R/W| 592 | 424 | 570 | 1884 | 169%
> | 3) u+memfd direct sendfile| 587 | 356 | 2229 | 481 | 43%
> | 4) u+memfd direct splice| 568 | 352 | 795 | 1350 | 121%
> | 5) udmabuf buffer R/W| 597 | 343 | 1238 | 867 | 78%
> | 6) dmabuf buffer R/W| 69 | 13 | 1128 | 952 | 85%
> | 7) udmabuf direct c_f_r| 595 | 345 | 372 | 2889 | 260%
> | 8) dmabuf direct c_f_r| 80 | 13 | 274 | 3929 | 354%
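For reference, rows 7) and 8) above correspond to a plain userspace copy_file_range() from the backing file into the dmabuf fd, roughly like this (a sketch; file_fd is assumed to be opened with O_DIRECT):

	off_t off_in = 0, off_out = 0;
	ssize_t n;

	n = copy_file_range(file_fd, &off_in, dmabuf_fd, &off_out, len, 0);
	if (n < 0)
		perror("copy_file_range");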
>
> Regards,
> Wangtao.
On 6/9/25 13:03, Tvrtko Ursulin wrote:
> Dma-fence objects currently suffer from a potential use after free problem
> where fences exported to userspace and other drivers can outlive the
> exporting driver, or the associated data structures.
>
> The discussion on how to address this concluded that adding reference
> counting to all the involved objects is not desirable, since it would need
> to be very wide reaching and could cause unloadable drivers if another
> entity would be holding onto a signaled fence reference potentially
> indefinitely.
>
> This patch enables safe access by introducing and documenting a
> contract between fence exporters and users. It documents a set of
> constraints and adds helpers which a) drivers with potential to suffer from
> the use after free must use and b) users of the dma-fence API must use as
> well.
>
> The premise of the design has multiple sides:
>
> 1. Drivers (fence exporters) MUST ensure a RCU grace period between
> signalling a fence and freeing the driver private data associated with it.
>
> The grace period does not have to follow the signalling immediately but
> HAS to happen before data is freed.
>
> 2. Users of the dma-fence API marked with such requirement MUST contain
> the complete access to the data within a single code block guarded by the
> new dma_fence_access_begin() and dma_fence_access_end() helpers.
>
> The combination of the two ensures that whoever sees the
> DMA_FENCE_FLAG_SIGNALED_BIT not set is guaranteed to have access to a
> valid fence->lock and valid data potentially accessed by the fence->ops
> virtual functions, until the call to dma_fence_access_end().
>
> 3. Module unload (fence->ops) disappearing is for now explicitly not
> handled. That would require a more complex protection, possibly needing
> SRCU instead of RCU to handle callers such as dma_fence_release() and
> dma_fence_wait_timeout(), where a race between
> dma_fence_enable_sw_signaling, signalling, and dereference of
> fence->ops->wait() would need a sleeping SRCU context.
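Put concretely, both sides of the contract look roughly like this (a minimal sketch; f->priv and its rcu head are hypothetical driver-private names):

	/* Exporter side: an RCU grace period between signal and free. */
	dma_fence_signal(&f->base);
	kfree_rcu(f->priv, rcu);	/* data used by f->base.ops callbacks */

	/* User side: the complete access within one guarded block. */
	dma_fence_access_begin();
	pr_info("%s %s\n", dma_fence_driver_name(fence),
		dma_fence_timeline_name(fence));
	dma_fence_access_end();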
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
> ---
> drivers/dma-buf/dma-fence.c | 82 +++++++++++++++++++++++++++++++-
> include/linux/dma-fence.h | 32 +++++++++----
> include/trace/events/dma_fence.h | 38 +++++++++++++--
> 3 files changed, 138 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> index 74f9e4b665e3..36604d68bdc8 100644
> --- a/drivers/dma-buf/dma-fence.c
> +++ b/drivers/dma-buf/dma-fence.c
> @@ -511,12 +511,20 @@ dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
>
> dma_fence_enable_sw_signaling(fence);
>
> - trace_dma_fence_wait_start(fence);
> + if (trace_dma_fence_wait_start_enabled()) {
> + dma_fence_access_begin();
> + trace_dma_fence_wait_start(fence);
> + dma_fence_access_end();
> + }
> if (fence->ops->wait)
> ret = fence->ops->wait(fence, intr, timeout);
> else
> ret = dma_fence_default_wait(fence, intr, timeout);
> - trace_dma_fence_wait_end(fence);
> + if (trace_dma_fence_wait_end_enabled()) {
> + dma_fence_access_begin();
> + trace_dma_fence_wait_end(fence);
> + dma_fence_access_end();
> + }
> return ret;
> }
> EXPORT_SYMBOL(dma_fence_wait_timeout);
> @@ -533,6 +541,7 @@ void dma_fence_release(struct kref *kref)
> struct dma_fence *fence =
> container_of(kref, struct dma_fence, refcount);
>
> + dma_fence_access_begin();
> trace_dma_fence_destroy(fence);
>
> if (WARN(!list_empty(&fence->cb_list) &&
> @@ -556,10 +565,13 @@ void dma_fence_release(struct kref *kref)
> spin_unlock_irqrestore(fence->lock, flags);
> }
>
> + dma_fence_access_end();
> +
> if (fence->ops->release)
> fence->ops->release(fence);
> else
> dma_fence_free(fence);
> +
> }
> EXPORT_SYMBOL(dma_fence_release);
>
> @@ -982,11 +994,13 @@ EXPORT_SYMBOL(dma_fence_set_deadline);
> */
> void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq)
> {
> + dma_fence_access_begin();
> seq_printf(seq, "%s %s seq %llu %ssignalled\n",
> dma_fence_driver_name(fence),
> dma_fence_timeline_name(fence),
> fence->seqno,
> dma_fence_is_signaled(fence) ? "" : "un");
> + dma_fence_access_end();
> }
> EXPORT_SYMBOL(dma_fence_describe);
>
> @@ -1055,3 +1069,67 @@ dma_fence_init64(struct dma_fence *fence, const struct dma_fence_ops *ops,
> BIT(DMA_FENCE_FLAG_SEQNO64_BIT));
> }
> EXPORT_SYMBOL(dma_fence_init64);
> +
> +/**
> + * dma_fence_driver_name - Access the driver name
> + * @fence: the fence to query
> + *
> + * Returns a driver name backing the dma-fence implementation.
> + *
> + * IMPORTANT CONSIDERATION:
> + * Dma-fence contract stipulates that access to driver provided data (data not
> + * directly embedded into the object itself), such as the &dma_fence.lock and
> + * memory potentially accessed by the &dma_fence.ops functions, is forbidden
> + * after the fence has been signalled. Drivers are allowed to free that data,
> + * and some do.
> + *
> + * To allow safe access, drivers are mandated to guarantee an RCU grace period
> + * between signalling the fence and freeing said data.
> + *
> + * As such, access to the driver name is only valid inside an RCU locked
> + * section. The pointer MUST be both queried and USED ONLY WITHIN a SINGLE
> + * block guarded by the &dma_fence_access_begin and &dma_fence_access_end pair.
> + */
> +const char *dma_fence_driver_name(struct dma_fence *fence)
> +{
> + RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
> + "dma_fence_access_begin/end() are required for safe access to returned string");
> +
> + if (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> + return fence->ops->get_driver_name(fence);
> + else
> + return "detached-driver";
> +}
> +EXPORT_SYMBOL(dma_fence_driver_name);
> +
> +/**
> + * dma_fence_timeline_name - Access the timeline name
> + * @fence: the fence to query
> + *
> + * Returns a timeline name provided by the dma-fence implementation.
> + *
> + * IMPORTANT CONSIDERATION:
> + * Dma-fence contract stipulates that access to driver provided data (data not
> + * directly embedded into the object itself), such as the &dma_fence.lock and
> + * memory potentially accessed by the &dma_fence.ops functions, is forbidden
> + * after the fence has been signalled. Drivers are allowed to free that data,
> + * and some do.
> + *
> + * To allow safe access, drivers are mandated to guarantee an RCU grace period
> + * between signalling the fence and freeing said data.
> + *
> + * As such, access to the timeline name is only valid inside an RCU locked
> + * section. The pointer MUST be both queried and USED ONLY WITHIN a SINGLE
> + * block guarded by the &dma_fence_access_begin and &dma_fence_access_end pair.
> + */
> +const char *dma_fence_timeline_name(struct dma_fence *fence)
> +{
> + RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
> + "dma_fence_access_begin/end() are required for safe access to returned string");
> +
> + if (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
> + return fence->ops->get_timeline_name(fence);
> + else
> + return "signaled-timeline";
> +}
> +EXPORT_SYMBOL(dma_fence_timeline_name);
> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
> index 10a849cb2d3f..366da956fb85 100644
> --- a/include/linux/dma-fence.h
> +++ b/include/linux/dma-fence.h
> @@ -378,15 +378,31 @@ bool dma_fence_remove_callback(struct dma_fence *fence,
> struct dma_fence_cb *cb);
> void dma_fence_enable_sw_signaling(struct dma_fence *fence);
>
> -static inline const char *dma_fence_driver_name(struct dma_fence *fence)
> -{
> - return fence->ops->get_driver_name(fence);
> -}
> +/**
> + * DOC: Safe external access to driver provided object members
> + *
> + * All data not stored directly in the dma-fence object, such as the
> + * &dma_fence.lock and memory potentially accessed by functions in the
> + * &dma_fence.ops table, MUST NOT be accessed after the fence has been signalled
> + * because after that point drivers are allowed to free it.
> + *
> + * All code accessing that data via the dma-fence API (or directly, which is
> + * discouraged), MUST make sure to contain the complete access within a
> + * &dma_fence_access_begin and &dma_fence_access_end pair.
> + *
> + * Some dma-fence APIs handle this automatically, while others, for example
> + * &dma_fence_driver_name and &dma_fence_timeline_name, leave that
> + * responsibility to the caller.
> + *
> + * To enable this scheme to work, drivers MUST ensure an RCU grace period
> + * elapses between signalling the fence and freeing said data.
> + *
> + */
> +#define dma_fence_access_begin rcu_read_lock
> +#define dma_fence_access_end rcu_read_unlock
Please drop those in favor of another change and use rcu_read_lock/unlock in the code directly.
But see below.
>
> -static inline const char *dma_fence_timeline_name(struct dma_fence *fence)
> -{
> - return fence->ops->get_timeline_name(fence);
> -}
> +const char *dma_fence_driver_name(struct dma_fence *fence);
> +const char *dma_fence_timeline_name(struct dma_fence *fence);
Please declare those as:
const char __rcu *dma_fence_driver_name(struct dma_fence *fence);
const char __rcu *dma_fence_timeline_name(struct dma_fence *fence);
And then use rcu_dereference() on the return value when calling those functions.
This allows the automated checker to complain when the functions/return value isn't used correctly.
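So a caller would then look roughly like this (a sketch of the suggested convention, seq_printf context assumed):

	const char __rcu *name;

	rcu_read_lock();
	name = dma_fence_timeline_name(fence);
	seq_printf(seq, "%s\n", rcu_dereference(name));
	rcu_read_unlock();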
Regards,
Christian.
>
> /**
> * dma_fence_is_signaled_locked - Return an indication if the fence
> diff --git a/include/trace/events/dma_fence.h b/include/trace/events/dma_fence.h
> index 84c83074ee81..4814a65b68dc 100644
> --- a/include/trace/events/dma_fence.h
> +++ b/include/trace/events/dma_fence.h
> @@ -34,14 +34,44 @@ DECLARE_EVENT_CLASS(dma_fence,
> __entry->seqno)
> );
>
> -DEFINE_EVENT(dma_fence, dma_fence_emit,
> +/*
> + * Safe only for call sites which are guaranteed not to race with fence
> + * signaling: holding the fence->lock and having checked for not signaled,
> + * or the signaling path itself.
> + */
> +DECLARE_EVENT_CLASS(dma_fence_unsignaled,
> +
> + TP_PROTO(struct dma_fence *fence),
> +
> + TP_ARGS(fence),
> +
> + TP_STRUCT__entry(
> + __string(driver, fence->ops->get_driver_name(fence))
> + __string(timeline, fence->ops->get_timeline_name(fence))
> + __field(unsigned int, context)
> + __field(unsigned int, seqno)
> + ),
> +
> + TP_fast_assign(
> + __assign_str(driver);
> + __assign_str(timeline);
> + __entry->context = fence->context;
> + __entry->seqno = fence->seqno;
> + ),
> +
> + TP_printk("driver=%s timeline=%s context=%u seqno=%u",
> + __get_str(driver), __get_str(timeline), __entry->context,
> + __entry->seqno)
> +);
> +
> +DEFINE_EVENT(dma_fence_unsignaled, dma_fence_emit,
>
> TP_PROTO(struct dma_fence *fence),
>
> TP_ARGS(fence)
> );
>
> -DEFINE_EVENT(dma_fence, dma_fence_init,
> +DEFINE_EVENT(dma_fence_unsignaled, dma_fence_init,
>
> TP_PROTO(struct dma_fence *fence),
>
> @@ -55,14 +85,14 @@ DEFINE_EVENT(dma_fence, dma_fence_destroy,
> TP_ARGS(fence)
> );
>
> -DEFINE_EVENT(dma_fence, dma_fence_enable_signal,
> +DEFINE_EVENT(dma_fence_unsignaled, dma_fence_enable_signal,
>
> TP_PROTO(struct dma_fence *fence),
>
> TP_ARGS(fence)
> );
>
> -DEFINE_EVENT(dma_fence, dma_fence_signaled,
> +DEFINE_EVENT(dma_fence_unsignaled, dma_fence_signaled,
>
> TP_PROTO(struct dma_fence *fence),
>
Hi,
This patch set allocates the protected DMA-bufs from a DMA-heap
instantiated from the TEE subsystem.
The TEE subsystem handles the DMA-buf allocations since it is the TEE
(OP-TEE, AMD-TEE, TS-TEE, or perhaps a future QTEE) which sets up the
protection for the memory used for the DMA-bufs.
The DMA-heap uses a protected memory pool provided by the backend TEE
driver, allowing it to choose how to allocate the protected physical
memory.
The allocated DMA-bufs must be imported with a new TEE_IOC_SHM_REGISTER_FD
before they can be passed as arguments when requesting services from the
secure world.
Three use-cases (Secure Video Playback, Trusted UI, and Secure Video
Recording) have been identified so far to serve as examples of what can be
expected. The use-cases have predefined DMA-heap names,
"protected,secure-video", "protected,trusted-ui", and
"protected,secure-video-record". The backend driver registers protected
memory pools for the use-cases it supports.
Each use-case has its own protected memory pool since different use-cases
require isolation from different parts of the system. A protected memory
pool can be based on a static carveout instantiated while probing the TEE
backend driver, or dynamically allocated from CMA (dma_alloc_pages()) and
made protected as needed by the TEE.
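For the dynamic case, a minimal sketch (assuming a suitable struct device *dev and pool_size; the TEE call itself is elided):

	dma_addr_t dma;
	struct page *page;

	page = dma_alloc_pages(dev, pool_size, &dma, DMA_BIDIRECTIONAL,
			       GFP_KERNEL);
	if (!page)
		return -ENOMEM;
	/* hand the [page, pool_size] range to the TEE to be made protected */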
This can be tested on a RockPi 4B+ with the following steps:
repo init -u https://github.com/jenswi-linaro/manifest.git -m rockpi4.xml \
-b prototype/sdp-v10
repo sync -j8
cd build
make toolchains -j$(nproc)
make all -j$(nproc)
# Copy ../out/rockpi4.img to an SD card and boot the RockPi from that
# Connect a monitor to the RockPi
# login and at the prompt:
gst-launch-1.0 videotestsrc ! \
aesenc key=1f9423681beb9a79215820f6bda73d0f \
iv=e9aa8e834d8d70b7e0d254ff670dd718 serialize-iv=true ! \
aesdec key=1f9423681beb9a79215820f6bda73d0f ! \
kmssink
The aesdec module has been hacked to use an OP-TEE TA to decrypt the stream
into protected DMA-bufs which are consumed by the kmssink.
The primitive QEMU tests from previous patch sets can be run on RockPi
in the same way using:
xtest --sdp-basic
The primitive tests can be run on QEMU with the following steps:
repo init -u https://github.com/jenswi-linaro/manifest.git -m qemu_v8.xml \
-b prototype/sdp-v10
repo sync -j8
cd build
make toolchains -j$(nproc)
make SPMC_AT_EL=1 all -j$(nproc)
make SPMC_AT_EL=1 run-only
# login and at the prompt:
xtest --sdp-basic
The SPMC_AT_EL=1 parameter configures the build with FF-A and an SPMC at
S-EL1 inside OP-TEE. The parameter can be changed to SPMC_AT_EL=n to test
without FF-A using the original SMC ABI instead. Please remember to do
%make arm-tf-clean
for TF-A to be rebuilt properly using the new configuration.
https://optee.readthedocs.io/en/latest/building/prerequisites.html
lists the dependencies required to build the above.
The primitive tests are pretty basic, mostly checking that a Trusted
Application in the secure world can access and manipulate the memory. There
are also some negative tests for out of bounds buffers, etc.
Thanks,
Jens
Changes since V9:
* Adding Sumit's R-B to "optee: sync secure world ABI headers"
* Update commit message as requested for "dma-buf: dma-heap: export
declared functions".
* In "tee: implement protected DMA-heap":
- add the hidden config option TEE_DMABUF_HEAPS to tell if the TEE
subsystem can support DMA heaps
- add a pfn_valid() to check that the passed physical address can be
used by __pfn_to_page() and friends
- remove the memremap() call; the caller should do that instead if
needed
* In "tee: add tee_shm_alloc_dma_mem()" guard the calls to
dma_alloc_pages() and dma_free_pages() with TEE_DMABUF_HEAPS to avoid
linking errors in some configurations
* In "optee: support protected memory allocation":
- add the hidden config option OPTEE_STATIC_PROTMEM_POOL to tell if the
driver can support a static protected memory pool
- optee_protmem_pool_init() is slightly refactored to make the patches
that follow easier
- Call devm_memremap() before calling tee_protmem_static_pool_alloc()
Changes since V8:
* Using dma_alloc_pages() instead of cma_alloc() so the direct dependency on
CMA can be removed together with the patches
"cma: export cma_alloc() and cma_release()" and
"dma-contiguous: export dma_contiguous_default_area". The patch
* Renaming the patch "tee: add tee_shm_alloc_cma_phys_mem()" to
"tee: add tee_shm_alloc_dma_mem()"
* Setting DMA mask for the OP-TEE TEE device based on input from the secure
world instead of relying on the parent device so following patches are
removed: "tee: tee_device_alloc(): copy dma_mask from parent device" and
"optee: pass parent device to tee_device_alloc()".
* Adding Sumit Garg's R-B to "tee: refactor params_from_user()"
* In the patch "tee: implement protected DMA-heap", map the physical memory
passed to tee_protmem_static_pool_alloc().
Changes since V7:
* Adding "dma-buf: dma-heap: export declared functions",
"cma: export cma_alloc() and cma_release()", and
"dma-contiguous: export dma_contiguous_default_area" to export the symbols
needed to keep the TEE subsystem as a load module.
* Removing CONFIG_TEE_DMABUF_HEAP and CONFIG_TEE_CMA since they aren't
needed any longer.
* Addressing review comments in "optee: sync secure world ABI headers"
* Better align protected memory pool initialization between the smc-abi and
ffa-abi parts of the optee driver.
* Removing the patch "optee: account for direction while converting parameters"
Changes since V6:
* Restricted memory is now known as protected memory, to use the same
term as https://docs.vulkan.org/guide/latest/protected.html. Update all
patches to consistently use protected memory.
* In "tee: implement protected DMA-heap" add the hidden config option
TEE_DMABUF_HEAP to tell if the DMABUF_HEAPS functions are available
for the TEE subsystem
* Adding "tee: refactor params_from_user()", broken out from the patch
"tee: new ioctl to a register tee_shm from a dmabuf file descriptor"
* For "tee: new ioctl to a register tee_shm from a dmabuf file descriptor":
- Update commit message to mention protected memory
- Remove and open code tee_shm_get_parent_shm() in param_from_user_memref()
* In "tee: add tee_shm_alloc_cma_phys_mem" add the hidden config option
TEE_CMA to tell if the CMA functions are available for the TEE subsystem
* For "tee: tee_device_alloc(): copy dma_mask from parent device" and
"optee: pass parent device to tee_device_alloc", added
Reviewed-by: Sumit Garg <sumit.garg(a)kernel.org>
Changes since V5:
* Removing "tee: add restricted memory allocation" and
"tee: add TEE_IOC_RSTMEM_FD_INFO"
* Adding "tee: implement restricted DMA-heap",
"tee: new ioctl to a register tee_shm from a dmabuf file descriptor",
"tee: add tee_shm_alloc_cma_phys_mem()",
"optee: pass parent device to tee_device_alloc()", and
"tee: tee_device_alloc(): copy dma_mask from parent device"
* The two TEE driver OPs "rstmem_alloc()" and "rstmem_free()" are replaced
with a struct tee_rstmem_pool abstraction.
* Replaced the TEE_IOC_RSTMEM_ALLOC user space API with the DMA-heap API
Changes since V4:
* Adding the patch "tee: add TEE_IOC_RSTMEM_FD_INFO" needed by the
GStreamer demo
* Removing the dummy CPU access and mmap functions from the dma_buf_ops
* Fixing a compile error in "optee: FF-A: dynamic restricted memory allocation"
reported by kernel test robot <lkp(a)intel.com>
Changes since V3:
* Make the use_case and flags field in struct tee_shm u32's instead of
u16's
* Add more description for TEE_IOC_RSTMEM_ALLOC in the header file
* Import namespace DMA_BUF in module tee, reported by lkp(a)intel.com
* Added a note in the commit message for "optee: account for direction
while converting parameters" why it's needed
* Factor out dynamic restricted memory allocation from
"optee: support restricted memory allocation" into two new commits
"optee: FF-A: dynamic restricted memory allocation" and
"optee: smc abi: dynamic restricted memory allocation"
* Guard CMA usage with #ifdef CONFIG_CMA, effectively disabling dynamic
restricted memory allocate if CMA isn't configured
Changes since the V2 RFC:
* Based on v6.12
* Replaced the flags for SVP and Trusted UID memory with a u32 field with
unique id for each use case
* Added dynamic allocation of restricted memory pools
* Added OP-TEE ABI both with and without FF-A for dynamic restricted memory
* Added support for FF-A with FFA_LEND
Changes since the V1 RFC:
* Based on v6.11
* Complete rewrite, replacing the restricted heap with TEE_IOC_RSTMEM_ALLOC
Changes since Olivier's post [2]:
* Based on Yong Wu's post [1] where much of dma-buf handling is done in
the generic restricted heap
* Simplifications and cleanup
* New commit message for "dma-buf: heaps: add Linaro restricted dmabuf heap
support"
* Replaced the word "secure" with "restricted" where applicable
Etienne Carriere (1):
tee: new ioctl to a register tee_shm from a dmabuf file descriptor
Jens Wiklander (8):
optee: sync secure world ABI headers
dma-buf: dma-heap: export declared functions
tee: implement protected DMA-heap
tee: refactor params_from_user()
tee: add tee_shm_alloc_dma_mem()
optee: support protected memory allocation
optee: FF-A: dynamic protected memory allocation
optee: smc abi: dynamic protected memory allocation
drivers/dma-buf/dma-heap.c | 3 +
drivers/tee/Kconfig | 5 +
drivers/tee/Makefile | 1 +
drivers/tee/optee/Kconfig | 5 +
drivers/tee/optee/Makefile | 1 +
drivers/tee/optee/core.c | 10 +
drivers/tee/optee/ffa_abi.c | 147 +++++++++-
drivers/tee/optee/optee_ffa.h | 27 +-
drivers/tee/optee/optee_msg.h | 84 +++++-
drivers/tee/optee/optee_private.h | 15 +-
drivers/tee/optee/optee_smc.h | 37 ++-
drivers/tee/optee/protmem.c | 335 +++++++++++++++++++++
drivers/tee/optee/smc_abi.c | 142 ++++++++-
drivers/tee/tee_core.c | 155 +++++++---
drivers/tee/tee_heap.c | 472 ++++++++++++++++++++++++++++++
drivers/tee/tee_private.h | 16 +
drivers/tee/tee_shm.c | 189 +++++++++++-
include/linux/tee_core.h | 71 +++++
include/linux/tee_drv.h | 10 +
include/uapi/linux/tee.h | 31 ++
20 files changed, 1689 insertions(+), 67 deletions(-)
create mode 100644 drivers/tee/optee/protmem.c
create mode 100644 drivers/tee/tee_heap.c
base-commit: 0ff41df1cb268fc69e703a08a57ee14ae967d0ca
--
2.43.0
On 6/6/25 11:52, wangtao wrote:
>
>
>> -----Original Message-----
>> From: Christoph Hellwig <hch(a)infradead.org>
>> Sent: Tuesday, June 3, 2025 9:20 PM
>> To: Christian König <christian.koenig(a)amd.com>
>> Cc: Christoph Hellwig <hch(a)infradead.org>; wangtao
>> <tao.wangtao(a)honor.com>; sumit.semwal(a)linaro.org; kraxel(a)redhat.com;
>> vivek.kasireddy(a)intel.com; viro(a)zeniv.linux.org.uk; brauner(a)kernel.org;
>> hughd(a)google.com; akpm(a)linux-foundation.org; amir73il(a)gmail.com;
>> benjamin.gaignard(a)collabora.com; Brian.Starkey(a)arm.com;
>> jstultz(a)google.com; tjmercier(a)google.com; jack(a)suse.cz;
>> baolin.wang(a)linux.alibaba.com; linux-media(a)vger.kernel.org; dri-
>> devel(a)lists.freedesktop.org; linaro-mm-sig(a)lists.linaro.org; linux-
>> kernel(a)vger.kernel.org; linux-fsdevel(a)vger.kernel.org; linux-
>> mm(a)kvack.org; wangbintian(BintianWang) <bintian.wang(a)honor.com>;
>> yipengxiang <yipengxiang(a)honor.com>; liulu 00013167
>> <liulu.liu(a)honor.com>; hanfeng 00012985 <feng.han(a)honor.com>
>> Subject: Re: [PATCH v4 0/4] Implement dmabuf direct I/O via
>> copy_file_range
>>
>> On Tue, Jun 03, 2025 at 03:14:20PM +0200, Christian König wrote:
>>> On 6/3/25 15:00, Christoph Hellwig wrote:
> >>>> This is a really weird interface. No one has yet explained why
>>>> dmabuf is so special that we can't support direct I/O to it when we
>>>> can support it to otherwise exotic mappings like PCI P2P ones.
>>>
>>> With udmabuf you can do direct I/O, it's just inefficient to walk the
>>> page tables for it when you already have an array of all the folios.
>>
>> Does it matter compared to the I/O in this case?
>>
>> Either way there has been talk (in case of networking implementations) that
>> use a dmabuf as a first class container for lower level I/O.
>> I'd much rather do that than adding odd side interfaces. I.e. have a version
>> of splice that doesn't bother with the pipe, but instead just uses in-kernel
>> direct I/O on one side and dmabuf-provided folios on the other.
> If the VFS layer recognizes the dmabuf type and acquires its sg_table
> and folios, zero-copy could also be achieved. I initially thought
> dmabuf acts as a driver and shouldn't be handled by VFS, so I made
> dmabuf implement copy_file_range callbacks to support direct I/O
> zero-copy. I'm open to both approaches. What's the preference of
> VFS experts?
That would probably be illegal. Using the sg_table in the DMA-buf implementation turned out to be a mistake.
The question Christoph raised was rather: why is your CPU so slow that walking the page tables has a significant overhead compared to the actual I/O?
Regards,
Christian.
>
> Regards,
> Wangtao.
>
On Fri, Jun 06, 2025 at 03:02:49PM +0530, Aneesh Kumar K.V wrote:
> Jason Gunthorpe <jgg(a)nvidia.com> writes:
>
> > On Thu, Jun 05, 2025 at 09:47:01PM +0530, Aneesh Kumar K.V wrote:
> >> Jason Gunthorpe <jgg(a)nvidia.com> writes:
> >>
> >> > On Thu, Jun 05, 2025 at 05:33:52PM +0530, Aneesh Kumar K.V wrote:
> >> >
> >> >> > +
> >> >> > + /* To ensure no host side MMIO access is possible */
> >> >> > + ret = pci_request_regions_exclusive(pdev, "vfio-pci-tsm");
> >> >> > + if (ret)
> >> >> > + goto out_unlock;
> >> >> > +
> >> >> >
> >> >>
> >> >> I am hitting failures here with similar changes. Can you share the Qemu
> >> >> changes needed to make this pci_request_regions_exclusive successful.
> >> >> Also after the TDI is unbound, we want the region ownership back to
> >> >> "vfio-pci" so that things continue to work as non-secure device. I don't
> >> >> see we doing that. I could add a pci_bar_deactivate/pci_bar_activate in
> >> >> userspace which will result in vfio_unmap()/vfio_map(). But that doesn't
> >> >> release the region ownership.
> >> >
> >> > Again, IMHO, we should not be doing this dynamically. VFIO should do
> >> > pci_request_regions_exclusive() once at the very start and it should
> >> > stay that way.
> >> >
> >> > There is no reason to change it dynamically.
> >> >
> >> > The only decision to make is if all vfio should switch to exclusive
> >> > mode or if we need to make it optional for userspace.
> >>
> >> We only need the exclusive mode when the device is operating in secure
> >> mode, correct? That suggests we’ll need to dynamically toggle this
> >> setting based on the device’s security state.
> >
> > No, if the decision is that VFIO should allow this to be controlled by
> > userspace then userspace will tell iommufd to run in regions_exclusive
> > mode prior to opening the vfio cdev and VFIO will still do it once at
> > open time and never change it.
>
> So this will be handled by setting
> vdevice::flags = IOMMUFD_PCI_REGION_EXCLUSIVE in
Not like that.. I would suggest a global vfio sysfs or module parameter, or
maybe an iommufd ictx global option:
IOMMU_OPTION(IOMMU_OPTION_OP_SET, IOMMU_OPTION_EXCLUSIVE_RANGES)
You want something simple here, not tied to vdevice or very dynamic.
The use cases for non-exclusive ranges are very narrow, IMHO
> and vfio_pci_core_mmap() will do
>
> if (!vdev->barmap[index]) {
>
> if (core_vdev->iommufd_device &&
> iommufd_vdevice_region_exclusive(core_vdev->iommufd_device))
> ret = pci_request_selected_regions_exclusive(pdev,
> 1 << index, "vfio-pci");
> else
> ret = pci_request_selected_regions(pdev,
> 1 << index, "vfio-pci");
And IMHO, these should be moved to probe time or at least FD open
time, not at mmap time...
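I.e. something like this shape, decided once (a hypothetical sketch, the option plumbing elided):

	/* at vfio-pci probe or FD open time, never changed afterwards */
	if (exclusive_ranges)
		ret = pci_request_regions_exclusive(pdev, "vfio-pci");
	else
		ret = pci_request_regions(pdev, "vfio-pci");
	if (ret)
		return ret;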
Jason
On Thu, Jun 05, 2025 at 09:47:01PM +0530, Aneesh Kumar K.V wrote:
> Jason Gunthorpe <jgg(a)nvidia.com> writes:
>
> > On Thu, Jun 05, 2025 at 05:33:52PM +0530, Aneesh Kumar K.V wrote:
> >
> >> > +
> >> > + /* To ensure no host side MMIO access is possible */
> >> > + ret = pci_request_regions_exclusive(pdev, "vfio-pci-tsm");
> >> > + if (ret)
> >> > + goto out_unlock;
> >> > +
> >> >
> >>
> >> I am hitting failures here with similar changes. Can you share the Qemu
> >> changes needed to make this pci_request_regions_exclusive successful.
> >> Also after the TDI is unbound, we want the region ownership back to
> >> "vfio-pci" so that things continue to work as non-secure device. I don't
> >> see we doing that. I could add a pci_bar_deactivate/pci_bar_activate in
> >> userspace which will result in vfio_unmap()/vfio_map(). But that doesn't
> >> release the region ownership.
> >
> > Again, IMHO, we should not be doing this dynamically. VFIO should do
> > pci_request_regions_exclusive() once at the very start and it should
> > stay that way.
> >
> > There is no reason to change it dynamically.
> >
> > The only decision to make is if all vfio should switch to exclusive
> > mode or if we need to make it optional for userspace.
>
> We only need the exclusive mode when the device is operating in secure
> mode, correct? That suggests we’ll need to dynamically toggle this
> setting based on the device’s security state.
No, if the decision is that VFIO should allow this to be controlled by
userspace then userspace will tell iommufd to run in regions_exclusive
mode prior to opening the vfio cdev and VFIO will still do it once at
open time and never change it.
The only thing request_regions does is block other drivers outside
vfio from using this memory space. There is no reason at all to change
this dynamically. A CC VMM using VFIO will never use a driver outside
VFIO to touch the VFIO controlled memory.
Jason
On Thu, Jun 05, 2025 at 05:33:52PM +0530, Aneesh Kumar K.V wrote:
> > +
> > + /* To ensure no host side MMIO access is possible */
> > + ret = pci_request_regions_exclusive(pdev, "vfio-pci-tsm");
> > + if (ret)
> > + goto out_unlock;
> > +
> >
>
> I am hitting failures here with similar changes. Can you share the Qemu
> changes needed to make this pci_request_regions_exclusive successful.
> Also after the TDI is unbound, we want the region ownership back to
> "vfio-pci" so that things continue to work as non-secure device. I don't
> see we doing that. I could add a pci_bar_deactivate/pci_bar_activate in
> userspace which will result in vfio_unmap()/vfio_map(). But that doesn't
> release the region ownership.
Again, IMHO, we should not be doing this dynamically. VFIO should do
pci_request_regions_exclusive() once at the very start and it should
stay that way.
There is no reason to change it dynamically.
The only decision to make is if all vfio should switch to exclusive
mode or if we need to make it optional for userspace.
Jason
On Thu, Jun 05, 2025 at 05:41:17PM +0800, Xu Yilun wrote:
> No, this is not a device-side TDISP requirement. It is a host-side
> requirement to fix the silent DMA drop issue. TDX enforces that the CPU S2 PT
> is shared with the IOMMU S2 PT (does ARM do the same?), so unmapping the CPU
> S2 PT in KVM equals unmapping the IOMMU S2 PT.
>
> If we allow the IOMMU S2 PT to be unmapped while the TDI is running, the
> host could fool the guest by just unmapping some PT entry and suppressing
> the fault event. The guest would think a DMA write succeeded when it did
> not, which may cause data integrity issues.
So, TDX prevents *any* unmap, even of normal memory, from the S2 while
a guest is running? Seems extreme?
MMIO isn't special, if you have a rule like that for such a security
reason it should cover all of the S2.
> This is not a TDX-specific problem, but different vendors have different
> mechanisms for this. For TDX, firmware fails the MMIO unmap for S2. For
> AMD, will trigger some HW protection called "ASID fence" [1]. Not sure
> how ARM handles this?
This seems even more extreme: if the guest gets a bad DMA address into
the device then the entire device gets killed? No chance to debug it?
Jason
[ Since Daniel made me look... ]
On 2025-06-04 8:57 am, Tomeu Vizoso wrote:
[...]
> diff --git a/drivers/accel/rocket/Kconfig b/drivers/accel/rocket/Kconfig
> new file mode 100644
> index 0000000000000000000000000000000000000000..9a59c6c61bf4d6460d8008b16331f001c97de67d
> --- /dev/null
> +++ b/drivers/accel/rocket/Kconfig
> @@ -0,0 +1,25 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +
> +config DRM_ACCEL_ROCKET
> + tristate "Rocket (support for Rockchip NPUs)"
> + depends on DRM
> + depends on ARM64 || COMPILE_TEST
Best make that "(ARCH_ROCKCHIP && ARM64) || COMPILE_TEST" now before
someone else inevitably does. Or perhaps just a pre-emptive
"ARCH_ROCKCHIP || COMPILE_TEST" if this is the same NPU that's in RV1126
etc.
> + depends on MMU
> + select DRM_SCHED
> + select IOMMU_SUPPORT
Selecting user-visible symbols is often considered bad form, but this
one isn't even functional - all you're doing here is forcing the
top-level availability of all the IOMMU driver/API options.
If you really want to nanny the user and dissuade them from building a
config which is unlikely to be useful in practice, then at best maybe
"depends on ROCKCHIP_IOMMU || COMPILE_TEST", but TBH I wouldn't even
bother with that. Even if you want to rely on using the IOMMU client API
unconditionally, it'll fail decisively enough at runtime if there's no
IOMMU present (or the API is stubbed out entirely).
> + select IOMMU_IO_PGTABLE_LPAE
And I have no idea what this might think it's here for :/
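Putting those suggestions together, the entry might look something like this (one possible shape, not a mandate):

config DRM_ACCEL_ROCKET
	tristate "Rocket (support for Rockchip NPUs)"
	depends on DRM
	depends on (ARCH_ROCKCHIP && ARM64) || COMPILE_TEST
	depends on MMU
	select DRM_SCHED
	select DRM_GEM_SHMEM_HELPER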
Thanks,
Robin.
> + select DRM_GEM_SHMEM_HELPER
> + help
> + Choose this option if you have a Rockchip SoC that contains a
> + compatible Neural Processing Unit (NPU), such as the RK3588. Called by
> + Rockchip either RKNN or RKNPU, it accelerates inference of neural
> + networks.
> +
> + The interface exposed to userspace is described in
> + include/uapi/drm/rocket_accel.h and is used by the Rocket userspace
> + driver in Mesa3D.
> +
> + If unsure, say N.
> +
> + To compile this driver as a module, choose M here: the
> + module will be called rocket.
On Wed, Jun 04, 2025 at 02:10:43PM +0530, Aneesh Kumar K.V wrote:
> Jason Gunthorpe <jgg(a)nvidia.com> writes:
>
> > On Tue, Jun 03, 2025 at 02:20:51PM +0800, Xu Yilun wrote:
> >> > Wouldn’t it be simpler to skip the reference count increment altogether
> >> > and just call tsm_unbind in the virtual device’s destroy callback?
> >> > (iommufd_vdevice_destroy())
> >>
> >> The vdevice refcount is the main concern, there is also an IOMMU_DESTROY
> >> ioctl. The user could just free the vdevice instance if there is no
> >> refcount, while VFIO is still in the bound state. That does not seem like
> >> the correct free order.
> >
> > Freeing the vdevice should automatically unbind it..
> >
>
> One challenge I ran into during implementation was the dependency of
> vfio on iommufd_device. When vfio needs to perform a tsm_unbind,
> it only has access to an iommufd_device.
VFIO should never do that except by destroying the idevice..
> However, TSM operations like binding and unbinding are handled at the
> iommufd_vdevice level. The issue? There’s no direct link from
> iommufd_device back to iommufd_vdevice.
Yes.
> To address this, I modified the following structures:
>
> modified drivers/iommu/iommufd/iommufd_private.h
> @@ -428,6 +428,7 @@ struct iommufd_device {
> /* protect iopf_enabled counter */
> struct mutex iopf_lock;
> unsigned int iopf_enabled;
> + struct iommufd_vdevice *vdev;
> };
Locking will be painful:
> Updating vdevice->idev requires holding vdev->mutex (vdev_lock).
> Updating device->vdev requires idev->igroup->lock (idev_lock).
I wonder if that can work on the destroy paths..
You also have to prevent more than one vdevice from being created for
an idevice; I don't think we do that today.
> tsm_unbind in vdevice_destroy:
>
> vdevice_destroy() ends up calling tsm_unbind() while holding only the
> vdev_lock. At first glance, this seems unsafe. But in practice, it's
> fine because the corresponding iommufd_device has already been destroyed
> when the VFIO device file descriptor was closed, triggering
> vfio_df_iommufd_unbind().
This needs some kind of fixing: the idevice should destroy the vdevices
during idevice destruction so we don't get this out-of-order teardown where
the idevice is destroyed before the vdevice.
This should be a separate patch as it is an immediate bug fix..
Jason
On Tue, May 20, 2025 at 12:27:00PM +0200, Tomeu Vizoso wrote:
> Using the DRM GPU scheduler infrastructure, with a scheduler for each
> core.
>
> Userspace can decide for a series of tasks to be executed sequentially
> in the same core, so SRAM locality can be taken advantage of.
>
> The job submission code was initially based on Panfrost.
>
> v2:
> - Remove hardcoded number of cores
> - Misc. style fixes (Jeffrey Hugo)
> - Repack IOCTL struct (Jeffrey Hugo)
>
> v3:
> - Adapt to a split of the register block in the DT bindings (Nicolas
> Frattaroli)
> - Make use of GPL-2.0-only for the copyright notice (Jeff Hugo)
> - Use drm_* logging functions (Thomas Zimmermann)
> - Rename reg i/o macros (Thomas Zimmermann)
> - Add padding to ioctls and check for zero (Jeff Hugo)
> - Improve error handling (Nicolas Frattaroli)
>
> Signed-off-by: Tomeu Vizoso <tomeu(a)tomeuvizoso.net>
> diff --git a/drivers/accel/rocket/rocket_job.c b/drivers/accel/rocket/rocket_job.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..aee6ebdb2bd227439449fdfcab3ce7d1e39cd4c4
> --- /dev/null
> +++ b/drivers/accel/rocket/rocket_job.c
> @@ -0,0 +1,723 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright 2019 Linaro, Ltd, Rob Herring <robh(a)kernel.org> */
> +/* Copyright 2019 Collabora ltd. */
> +/* Copyright 2024-2025 Tomeu Vizoso <tomeu(a)tomeuvizoso.net> */
> +
> +#include <drm/drm_print.h>
> +#include <drm/drm_file.h>
> +#include <drm/drm_gem.h>
> +#include <drm/rocket_accel.h>
> +#include <linux/interrupt.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "rocket_core.h"
> +#include "rocket_device.h"
> +#include "rocket_drv.h"
> +#include "rocket_job.h"
> +#include "rocket_registers.h"
> +
> +#define JOB_TIMEOUT_MS 500
> +
> +static struct rocket_job *
> +to_rocket_job(struct drm_sched_job *sched_job)
> +{
> + return container_of(sched_job, struct rocket_job, base);
> +}
> +
> +struct rocket_fence {
> + struct dma_fence base;
> + struct drm_device *dev;
> + /* rocket seqno for signaled() test */
> + u64 seqno;
> + int queue;
AFAICT, you are not using any of the elements here. So you can just drop
rocket_fence and use dma_fence.
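I.e. something like this (a sketch; the core->fence_lock/fence_context/emit_seqno names are assumptions, following Panfrost):

	struct dma_fence *fence = kzalloc(sizeof(*fence), GFP_KERNEL);

	if (!fence)
		return ERR_PTR(-ENOMEM);

	dma_fence_init(fence, &rocket_fence_ops, &core->fence_lock,
		       core->fence_context, ++core->emit_seqno);

	return fence;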
Rob
On 6/3/25 17:00, Tvrtko Ursulin wrote:
>
> On 03/06/2025 14:13, Maxime Ripard wrote:
>> Hi,
>>
>> On Mon, Jun 02, 2025 at 04:42:27PM +0200, Christian König wrote:
>>> On 6/2/25 15:05, Tvrtko Ursulin wrote:
>>>> On 15/05/2025 14:15, Christian König wrote:
>>>>> Hey drm-misc maintainers,
>>>>>
>>>>> can you guys please backmerge drm-next into drm-misc-next?
>>>>>
>>>>> I want to push this patch here but it depends on changes which are partially in drm-next and partially in drm-misc-next.
>>>>
>>>> Looks like the backmerge is still pending?
>>>
>>> Yes, @Maarten, @Maxime and @Thomas ping on this.
>>
>> It's done
>
> Thanks Maxime!
>
> Christian, I can merge 2-5 to take some load off you if you want?
Sure, go ahead.
Then I can call it a day for today :)
Cheers,
Christian.
>
> Regards,
>
> Tvrtko
>
This is a really weird interface. No one has yet explained why
is so special that we can't support direct I/O to it when we can support
it to otherwise exotic mappings like PCI P2P ones.
On 6/3/25 14:48, Tvrtko Ursulin wrote:
>
> On 03/06/2025 13:40, Christian König wrote:
>> On 6/3/25 13:30, Tvrtko Ursulin wrote:
>>>
>>> On 02/06/2025 19:00, Christian König wrote:
>>>> On 6/2/25 17:25, Tvrtko Ursulin wrote:
>>>>>
>>>>> On 02/06/2025 15:42, Christian König wrote:
>>>>>> On 6/2/25 15:05, Tvrtko Ursulin wrote:
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> On 15/05/2025 14:15, Christian König wrote:
>>>>>>>> Hey drm-misc maintainers,
>>>>>>>>
>>>>>>>> can you guys please backmerge drm-next into drm-misc-next?
>>>>>>>>
>>>>>>>> I want to push this patch here but it depends on changes which are partially in drm-next and partially in drm-misc-next.
>>>>>>>
>>>>>>> Looks like the backmerge is still pending?
>>>>>>
>>>>>> Yes, @Maarten, @Maxime and @Thomas ping on this.
>>>>>>
>>>>>>> In the meantime, Christian, any chance you will have some bandwith to think about the tail end of the series? Specifically patch 6 and how that is used onward.
>>>>>>
>>>>>> Well the RCU grace period is quite a nifty hack. I wanted to go over it again after merging the first patches from this series.
>>>>>>
>>>>>> In general looks like a good idea to me, I just don't like that we explicitely need to expose dma_fence_access_begin() and dma_fence_access_end().
>>>>>>
>>>>>> Especially we can't do that while calling fence->ops->release.
>>>>>
>>>>> Hm why not? You think something will take offence of the rcu_read_lock()?
>>>>
>>>> Yes, in particular it is perfectly legitimate to call synchronize_rcu() or lock semaphores/mutexes from that callback.
>>>>
>>>> Either keep the RCU critical section only for the trace or even better come up with some different approach, e.g. copying the string under the RCU lock or something like that.
>>>
>>> Hmm but the kerneldoc explicity says callback can be called from irq context:
>>>
>>> /**
>>> * @release:
>>> *
>>> * Called on destruction of fence to release additional resources.
>>> * Can be called from irq context. This callback is optional. If it is
>>> * NULL, then dma_fence_free() is instead called as the default
>>> * implementation.
>>> */
>>> void (*release)(struct dma_fence *fence);
>>
>> Ah, right. I mixed that up with the dma-buf object.
>>
>> Yeah in that case that is probably harmless. We delegate the final free to a work item if necessary anyway.
>>
>> But I would still like to avoid having the RCU cover the release as well. Or is there any reason why we would explicitly want to do this?
>
> I can't remember a particular reason. Obviously for the driver/timeline name vfunc access I needed a dma_fence_access_begin/end() block, so maybe I was just sloppy and put the end at the end of the function instead of at the end of the block which can dereference them.
Yeah that's the next topic I would rather like to improve. We are kind of hiding that the returned strings are using RCU protection.
In other words it would be nicer if we could add an __rcu tag to the get_driver_name/get_timeline_name callbacks and let the automated tools complain if somebody isn't doing the proper RCU handling.
The problem is that as far as I know that is not supported by the automated tools (would be cool if somebody could double check that).
We would need to convert the get_driver_name/get_timeline_name callbacks to something like func(struct dma_fence *fence, const char __rcu **out) to make that work.
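I.e. roughly (hypothetical shape):

	void (*get_timeline_name)(struct dma_fence *fence,
				  const char __rcu **out);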
Regards,
Christian.
>
> I will pull it earlier for the next respin, assuming no gotchas get discovered in the process.
>
> Regards,
>
> Tvrtko
>
>>
>> Regards,
>> Christian.
>>
>>>
>>>
>>> Regards,
>>>
>>> Tvrtko
>>>
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>>
>>>>> Regards,
>>>>>
>>>>> Tvrtko
>>>>>
>>>>>>>> On 5/15/25 11:49, Tvrtko Ursulin wrote:
>>>>>>>>> With the goal of reducing the need for drivers to touch (and dereference)
>>>>>>>>> fence->ops, we move the 64-bit seqnos flag from struct dma_fence_ops to
>>>>>>>>> the fence->flags.
>>>>>>>>>
>>>>>>>>> Drivers which were setting this flag are changed to use new
>>>>>>>>> dma_fence_init64() instead of dma_fence_init().
>>>>>>>>>
>>>>>>>>> v2:
>>>>>>>>> * Streamlined init and added kerneldoc.
>>>>>>>>> * Rebase for amdgpu userq which landed since.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
>>>>>>>>> Reviewed-by: Christian König <christian.koenig(a)amd.com> # v1
>>>>>>>>> ---
>>>>>>>>> drivers/dma-buf/dma-fence-chain.c | 5 +-
>>>>>>>>> drivers/dma-buf/dma-fence.c | 69 ++++++++++++++-----
>>>>>>>>> .../drm/amd/amdgpu/amdgpu_eviction_fence.c | 7 +-
>>>>>>>>> .../gpu/drm/amd/amdgpu/amdgpu_userq_fence.c | 5 +-
>>>>>>>>> .../gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c | 5 +-
>>>>>>>>> include/linux/dma-fence.h | 14 ++--
>>>>>>>>> 6 files changed, 64 insertions(+), 41 deletions(-)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
>>>>>>>>> index 90424f23fd73..a8a90acf4f34 100644
>>>>>>>>> --- a/drivers/dma-buf/dma-fence-chain.c
>>>>>>>>> +++ b/drivers/dma-buf/dma-fence-chain.c
>>>>>>>>> @@ -218,7 +218,6 @@ static void dma_fence_chain_set_deadline(struct dma_fence *fence,
>>>>>>>>> }
>>>>>>>>> const struct dma_fence_ops dma_fence_chain_ops = {
>>>>>>>>> - .use_64bit_seqno = true,
>>>>>>>>> .get_driver_name = dma_fence_chain_get_driver_name,
>>>>>>>>> .get_timeline_name = dma_fence_chain_get_timeline_name,
>>>>>>>>> .enable_signaling = dma_fence_chain_enable_signaling,
>>>>>>>>> @@ -262,8 +261,8 @@ void dma_fence_chain_init(struct dma_fence_chain *chain,
>>>>>>>>> seqno = max(prev->seqno, seqno);
>>>>>>>>> }
>>>>>>>>> - dma_fence_init(&chain->base, &dma_fence_chain_ops,
>>>>>>>>> - &chain->lock, context, seqno);
>>>>>>>>> + dma_fence_init64(&chain->base, &dma_fence_chain_ops, &chain->lock,
>>>>>>>>> + context, seqno);
>>>>>>>>> /*
>>>>>>>>> * Chaining dma_fence_chain container together is only allowed through
>>>>>>>>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>>>>>>>>> index f0cdd3e99d36..705b59787731 100644
>>>>>>>>> --- a/drivers/dma-buf/dma-fence.c
>>>>>>>>> +++ b/drivers/dma-buf/dma-fence.c
>>>>>>>>> @@ -989,24 +989,9 @@ void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq)
>>>>>>>>> }
>>>>>>>>> EXPORT_SYMBOL(dma_fence_describe);
>>>>>>>>> -/**
>>>>>>>>> - * dma_fence_init - Initialize a custom fence.
>>>>>>>>> - * @fence: the fence to initialize
>>>>>>>>> - * @ops: the dma_fence_ops for operations on this fence
>>>>>>>>> - * @lock: the irqsafe spinlock to use for locking this fence
>>>>>>>>> - * @context: the execution context this fence is run on
>>>>>>>>> - * @seqno: a linear increasing sequence number for this context
>>>>>>>>> - *
>>>>>>>>> - * Initializes an allocated fence, the caller doesn't have to keep its
>>>>>>>>> - * refcount after committing with this fence, but it will need to hold a
>>>>>>>>> - * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>>>>>> - *
>>>>>>>>> - * context and seqno are used for easy comparison between fences, allowing
>>>>>>>>> - * to check which fence is later by simply using dma_fence_later().
>>>>>>>>> - */
>>>>>>>>> -void
>>>>>>>>> -dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>>>> - spinlock_t *lock, u64 context, u64 seqno)
>>>>>>>>> +static void
>>>>>>>>> +__dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>>>> + spinlock_t *lock, u64 context, u64 seqno, unsigned long flags)
>>>>>>>>> {
>>>>>>>>> BUG_ON(!lock);
>>>>>>>>> BUG_ON(!ops || !ops->get_driver_name || !ops->get_timeline_name);
>>>>>>>>> @@ -1017,9 +1002,55 @@ dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>>>> fence->lock = lock;
>>>>>>>>> fence->context = context;
>>>>>>>>> fence->seqno = seqno;
>>>>>>>>> - fence->flags = 0UL;
>>>>>>>>> + fence->flags = flags;
>>>>>>>>> fence->error = 0;
>>>>>>>>> trace_dma_fence_init(fence);
>>>>>>>>> }
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * dma_fence_init - Initialize a custom fence.
>>>>>>>>> + * @fence: the fence to initialize
>>>>>>>>> + * @ops: the dma_fence_ops for operations on this fence
>>>>>>>>> + * @lock: the irqsafe spinlock to use for locking this fence
>>>>>>>>> + * @context: the execution context this fence is run on
>>>>>>>>> + * @seqno: a linear increasing sequence number for this context
>>>>>>>>> + *
>>>>>>>>> + * Initializes an allocated fence, the caller doesn't have to keep its
>>>>>>>>> + * refcount after committing with this fence, but it will need to hold a
>>>>>>>>> + * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>>>>>> + *
>>>>>>>>> + * context and seqno are used for easy comparison between fences, allowing
>>>>>>>>> + * to check which fence is later by simply using dma_fence_later().
>>>>>>>>> + */
>>>>>>>>> +void
>>>>>>>>> +dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>>>> + spinlock_t *lock, u64 context, u64 seqno)
>>>>>>>>> +{
>>>>>>>>> + __dma_fence_init(fence, ops, lock, context, seqno, 0UL);
>>>>>>>>> +}
>>>>>>>>> EXPORT_SYMBOL(dma_fence_init);
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * dma_fence_init64 - Initialize a custom fence with 64-bit seqno support.
>>>>>>>>> + * @fence: the fence to initialize
>>>>>>>>> + * @ops: the dma_fence_ops for operations on this fence
>>>>>>>>> + * @lock: the irqsafe spinlock to use for locking this fence
>>>>>>>>> + * @context: the execution context this fence is run on
>>>>>>>>> + * @seqno: a linear increasing sequence number for this context
>>>>>>>>> + *
>>>>>>>>> + * Initializes an allocated fence, the caller doesn't have to keep its
>>>>>>>>> + * refcount after committing with this fence, but it will need to hold a
>>>>>>>>> + * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>>>>>> + *
>>>>>>>>> + * Context and seqno are used for easy comparison between fences, allowing
>>>>>>>>> + * to check which fence is later by simply using dma_fence_later().
>>>>>>>>> + */
>>>>>>>>> +void
>>>>>>>>> +dma_fence_init64(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>>>> + spinlock_t *lock, u64 context, u64 seqno)
>>>>>>>>> +{
>>>>>>>>> + __dma_fence_init(fence, ops, lock, context, seqno,
>>>>>>>>> + BIT(DMA_FENCE_FLAG_SEQNO64_BIT));
>>>>>>>>> +}
>>>>>>>>> +EXPORT_SYMBOL(dma_fence_init64);
>>>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>>>>>> index 1a7469543db5..79713421bffe 100644
>>>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>>>>>> @@ -134,7 +134,6 @@ static bool amdgpu_eviction_fence_enable_signaling(struct dma_fence *f)
>>>>>>>>> }
>>>>>>>>> static const struct dma_fence_ops amdgpu_eviction_fence_ops = {
>>>>>>>>> - .use_64bit_seqno = true,
>>>>>>>>> .get_driver_name = amdgpu_eviction_fence_get_driver_name,
>>>>>>>>> .get_timeline_name = amdgpu_eviction_fence_get_timeline_name,
>>>>>>>>> .enable_signaling = amdgpu_eviction_fence_enable_signaling,
>>>>>>>>> @@ -160,9 +159,9 @@ amdgpu_eviction_fence_create(struct amdgpu_eviction_fence_mgr *evf_mgr)
>>>>>>>>> ev_fence->evf_mgr = evf_mgr;
>>>>>>>>> get_task_comm(ev_fence->timeline_name, current);
>>>>>>>>> spin_lock_init(&ev_fence->lock);
>>>>>>>>> - dma_fence_init(&ev_fence->base, &amdgpu_eviction_fence_ops,
>>>>>>>>> - &ev_fence->lock, evf_mgr->ev_fence_ctx,
>>>>>>>>> - atomic_inc_return(&evf_mgr->ev_fence_seq));
>>>>>>>>> + dma_fence_init64(&ev_fence->base, &amdgpu_eviction_fence_ops,
>>>>>>>>> + &ev_fence->lock, evf_mgr->ev_fence_ctx,
>>>>>>>>> + atomic_inc_return(&evf_mgr->ev_fence_seq));
>>>>>>>>> return ev_fence;
>>>>>>>>> }
>>>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>>>>>> index 029cb24c28b3..5e92d00a591f 100644
>>>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>>>>>> @@ -239,8 +239,8 @@ static int amdgpu_userq_fence_create(struct amdgpu_usermode_queue *userq,
>>>>>>>>> fence = &userq_fence->base;
>>>>>>>>> userq_fence->fence_drv = fence_drv;
>>>>>>>>> - dma_fence_init(fence, &amdgpu_userq_fence_ops, &userq_fence->lock,
>>>>>>>>> - fence_drv->context, seq);
>>>>>>>>> + dma_fence_init64(fence, &amdgpu_userq_fence_ops, &userq_fence->lock,
>>>>>>>>> + fence_drv->context, seq);
>>>>>>>>> amdgpu_userq_fence_driver_get(fence_drv);
>>>>>>>>> dma_fence_get(fence);
>>>>>>>>> @@ -334,7 +334,6 @@ static void amdgpu_userq_fence_release(struct dma_fence *f)
>>>>>>>>> }
>>>>>>>>> static const struct dma_fence_ops amdgpu_userq_fence_ops = {
>>>>>>>>> - .use_64bit_seqno = true,
>>>>>>>>> .get_driver_name = amdgpu_userq_fence_get_driver_name,
>>>>>>>>> .get_timeline_name = amdgpu_userq_fence_get_timeline_name,
>>>>>>>>> .signaled = amdgpu_userq_fence_signaled,
>>>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>>>>>> index 51cddfa3f1e8..5d26797356a3 100644
>>>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>>>>>> @@ -71,7 +71,6 @@ static void amdgpu_tlb_fence_work(struct work_struct *work)
>>>>>>>>> }
>>>>>>>>> static const struct dma_fence_ops amdgpu_tlb_fence_ops = {
>>>>>>>>> - .use_64bit_seqno = true,
>>>>>>>>> .get_driver_name = amdgpu_tlb_fence_get_driver_name,
>>>>>>>>> .get_timeline_name = amdgpu_tlb_fence_get_timeline_name
>>>>>>>>> };
>>>>>>>>> @@ -101,8 +100,8 @@ void amdgpu_vm_tlb_fence_create(struct amdgpu_device *adev, struct amdgpu_vm *vm
>>>>>>>>> INIT_WORK(&f->work, amdgpu_tlb_fence_work);
>>>>>>>>> spin_lock_init(&f->lock);
>>>>>>>>> - dma_fence_init(&f->base, &amdgpu_tlb_fence_ops, &f->lock,
>>>>>>>>> - vm->tlb_fence_context, atomic64_read(&vm->tlb_seq));
>>>>>>>>> + dma_fence_init64(&f->base, &amdgpu_tlb_fence_ops, &f->lock,
>>>>>>>>> + vm->tlb_fence_context, atomic64_read(&vm->tlb_seq));
>>>>>>>>> /* TODO: We probably need a separate wq here */
>>>>>>>>> dma_fence_get(&f->base);
>>>>>>>>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
>>>>>>>>> index 48b5202c531d..a34a0dcdc446 100644
>>>>>>>>> --- a/include/linux/dma-fence.h
>>>>>>>>> +++ b/include/linux/dma-fence.h
>>>>>>>>> @@ -97,6 +97,7 @@ struct dma_fence {
>>>>>>>>> };
>>>>>>>>> enum dma_fence_flag_bits {
>>>>>>>>> + DMA_FENCE_FLAG_SEQNO64_BIT,
>>>>>>>>> DMA_FENCE_FLAG_SIGNALED_BIT,
>>>>>>>>> DMA_FENCE_FLAG_TIMESTAMP_BIT,
>>>>>>>>> DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
>>>>>>>>> @@ -124,14 +125,6 @@ struct dma_fence_cb {
>>>>>>>>> *
>>>>>>>>> */
>>>>>>>>> struct dma_fence_ops {
>>>>>>>>> - /**
>>>>>>>>> - * @use_64bit_seqno:
>>>>>>>>> - *
>>>>>>>>> - * True if this dma_fence implementation uses 64bit seqno, false
>>>>>>>>> - * otherwise.
>>>>>>>>> - */
>>>>>>>>> - bool use_64bit_seqno;
>>>>>>>>> -
>>>>>>>>> /**
>>>>>>>>> * @get_driver_name:
>>>>>>>>> *
>>>>>>>>> @@ -262,6 +255,9 @@ struct dma_fence_ops {
>>>>>>>>> void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>>>> spinlock_t *lock, u64 context, u64 seqno);
>>>>>>>>> +void dma_fence_init64(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>>>> + spinlock_t *lock, u64 context, u64 seqno);
>>>>>>>>> +
>>>>>>>>> void dma_fence_release(struct kref *kref);
>>>>>>>>> void dma_fence_free(struct dma_fence *fence);
>>>>>>>>> void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq);
>>>>>>>>> @@ -454,7 +450,7 @@ static inline bool __dma_fence_is_later(struct dma_fence *fence, u64 f1, u64 f2)
>>>>>>>>> * 32bit sequence numbers. Use a 64bit compare when the driver says to
>>>>>>>>> * do so.
>>>>>>>>> */
>>>>>>>>> - if (fence->ops->use_64bit_seqno)
>>>>>>>>> + if (test_bit(DMA_FENCE_FLAG_SEQNO64_BIT, &fence->flags))
>>>>>>>>> return f1 > f2;
>>>>>>>>> return (int)(lower_32_bits(f1) - lower_32_bits(f2)) > 0;
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
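For reference, the __dma_fence_is_later() hunk above replaces the ops->use_64bit_seqno dereference with a fence-flags test while keeping the comparison semantics: the wraparound-tolerant 32-bit compare stays the default and the plain 64-bit compare is gated on the new DMA_FENCE_FLAG_SEQNO64_BIT. A minimal user-space model of those semantics (a sketch in plain C with the kernel helpers re-implemented, not the kernel code itself):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SEQNO64 (1UL << 0)	/* models DMA_FENCE_FLAG_SEQNO64_BIT */

static bool is_later(unsigned long flags, uint64_t f1, uint64_t f2)
{
	if (flags & SEQNO64)
		return f1 > f2;	/* full 64-bit compare, no wraparound handling */

	/*
	 * 32-bit path: a signed delta of the lower 32 bits is positive
	 * when f1 is "after" f2, even across a 32-bit wraparound.
	 */
	return (int32_t)((uint32_t)f1 - (uint32_t)f2) > 0;
}

int main(void)
{
	/* seqno 1 counts as later than 0xffffffff once the 32-bit counter wraps */
	printf("%d\n", is_later(0, 1, 0xffffffff));		/* prints 1 */
	/* with 64-bit seqnos the raw compare is wanted instead */
	printf("%d\n", is_later(SEQNO64, 1, 0xffffffff));	/* prints 0 */
	return 0;
}

The first call shows why the signed 32-bit delta is kept: after a wrap, the small seqno must still be treated as the later fence.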
On 6/3/25 13:30, Tvrtko Ursulin wrote:
>
> On 02/06/2025 19:00, Christian König wrote:
>> On 6/2/25 17:25, Tvrtko Ursulin wrote:
>>>
>>> On 02/06/2025 15:42, Christian König wrote:
>>>> On 6/2/25 15:05, Tvrtko Ursulin wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> On 15/05/2025 14:15, Christian König wrote:
>>>>>> Hey drm-misc maintainers,
>>>>>>
>>>>>> can you guys please backmerge drm-next into drm-misc-next?
>>>>>>
>>>>>> I want to push this patch here but it depends on changes which are partially in drm-next and partially in drm-misc-next.
>>>>>
>>>>> Looks like the backmerge is still pending?
>>>>
>>>> Yes, @Maarten, @Maxime and @Thomas ping on this.
>>>>
>>>>> In the meantime, Christian, any chance you will have some bandwidth to think about the tail end of the series? Specifically patch 6 and how that is used onward.
>>>>
>>>> Well, the RCU grace period is quite a nifty hack. I wanted to go over it again after merging the first patches from this series.
>>>>
>>>> In general it looks like a good idea to me, I just don't like that we explicitly need to expose dma_fence_access_begin() and dma_fence_access_end().
>>>>
>>>> In particular, we can't do that while calling fence->ops->release.
>>>
>>> Hm, why not? You think something will take offence at the rcu_read_lock()?
>>
>> Yes, in particular it is perfectly legitimate to call synchronize_rcu() or to lock semaphores/mutexes from that callback.
>>
>> Either keep the RCU critical section only for the trace, or, even better, come up with a different approach, e.g. copying the string under the RCU lock or something like that.
>
> Hmm, but the kerneldoc explicitly says the callback can be called from irq context:
>
> /**
> * @release:
> *
> * Called on destruction of fence to release additional resources.
> * Can be called from irq context. This callback is optional. If it is
> * NULL, then dma_fence_free() is instead called as the default
> * implementation.
> */
> void (*release)(struct dma_fence *fence);
Ah, right. I mixed that up with the dma-buf object.
Yeah, in that case it is probably harmless. We delegate the final free to a work item if necessary anyway.
But I would still like to avoid having the RCU section cover the release as well. Or is there any reason why we would explicitly want to do this?
Regards,
Christian.
>
>
> Regards,
>
> Tvrtko
>
>>
>> Regards,
>> Christian.
>>
>>>
>>> Regards,
>>>
>>> Tvrtko
>>>
>>>>>> On 5/15/25 11:49, Tvrtko Ursulin wrote:
>>>>>>> With the goal of reducing the need for drivers to touch (and dereference)
>>>>>>> fence->ops, we move the 64-bit seqnos flag from struct dma_fence_ops to
>>>>>>> the fence->flags.
>>>>>>>
>>>>>>> Drivers which were setting this flag are changed to use new
>>>>>>> dma_fence_init64() instead of dma_fence_init().
>>>>>>>
>>>>>>> v2:
>>>>>>> * Streamlined init and added kerneldoc.
>>>>>>> * Rebase for amdgpu userq which landed since.
>>>>>>>
>>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
>>>>>>> Reviewed-by: Christian König <christian.koenig(a)amd.com> # v1
>>>>>>> ---
>>>>>>> drivers/dma-buf/dma-fence-chain.c | 5 +-
>>>>>>> drivers/dma-buf/dma-fence.c | 69 ++++++++++++++-----
>>>>>>> .../drm/amd/amdgpu/amdgpu_eviction_fence.c | 7 +-
>>>>>>> .../gpu/drm/amd/amdgpu/amdgpu_userq_fence.c | 5 +-
>>>>>>> .../gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c | 5 +-
>>>>>>> include/linux/dma-fence.h | 14 ++--
>>>>>>> 6 files changed, 64 insertions(+), 41 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
>>>>>>> index 90424f23fd73..a8a90acf4f34 100644
>>>>>>> --- a/drivers/dma-buf/dma-fence-chain.c
>>>>>>> +++ b/drivers/dma-buf/dma-fence-chain.c
>>>>>>> @@ -218,7 +218,6 @@ static void dma_fence_chain_set_deadline(struct dma_fence *fence,
>>>>>>> }
>>>>>>> const struct dma_fence_ops dma_fence_chain_ops = {
>>>>>>> - .use_64bit_seqno = true,
>>>>>>> .get_driver_name = dma_fence_chain_get_driver_name,
>>>>>>> .get_timeline_name = dma_fence_chain_get_timeline_name,
>>>>>>> .enable_signaling = dma_fence_chain_enable_signaling,
>>>>>>> @@ -262,8 +261,8 @@ void dma_fence_chain_init(struct dma_fence_chain *chain,
>>>>>>> seqno = max(prev->seqno, seqno);
>>>>>>> }
>>>>>>> - dma_fence_init(&chain->base, &dma_fence_chain_ops,
>>>>>>> - &chain->lock, context, seqno);
>>>>>>> + dma_fence_init64(&chain->base, &dma_fence_chain_ops, &chain->lock,
>>>>>>> + context, seqno);
>>>>>>> /*
>>>>>>> * Chaining dma_fence_chain container together is only allowed through
>>>>>>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>>>>>>> index f0cdd3e99d36..705b59787731 100644
>>>>>>> --- a/drivers/dma-buf/dma-fence.c
>>>>>>> +++ b/drivers/dma-buf/dma-fence.c
>>>>>>> @@ -989,24 +989,9 @@ void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq)
>>>>>>> }
>>>>>>> EXPORT_SYMBOL(dma_fence_describe);
>>>>>>> -/**
>>>>>>> - * dma_fence_init - Initialize a custom fence.
>>>>>>> - * @fence: the fence to initialize
>>>>>>> - * @ops: the dma_fence_ops for operations on this fence
>>>>>>> - * @lock: the irqsafe spinlock to use for locking this fence
>>>>>>> - * @context: the execution context this fence is run on
>>>>>>> - * @seqno: a linear increasing sequence number for this context
>>>>>>> - *
>>>>>>> - * Initializes an allocated fence, the caller doesn't have to keep its
>>>>>>> - * refcount after committing with this fence, but it will need to hold a
>>>>>>> - * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>>>> - *
>>>>>>> - * context and seqno are used for easy comparison between fences, allowing
>>>>>>> - * to check which fence is later by simply using dma_fence_later().
>>>>>>> - */
>>>>>>> -void
>>>>>>> -dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> - spinlock_t *lock, u64 context, u64 seqno)
>>>>>>> +static void
>>>>>>> +__dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> + spinlock_t *lock, u64 context, u64 seqno, unsigned long flags)
>>>>>>> {
>>>>>>> BUG_ON(!lock);
>>>>>>> BUG_ON(!ops || !ops->get_driver_name || !ops->get_timeline_name);
>>>>>>> @@ -1017,9 +1002,55 @@ dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> fence->lock = lock;
>>>>>>> fence->context = context;
>>>>>>> fence->seqno = seqno;
>>>>>>> - fence->flags = 0UL;
>>>>>>> + fence->flags = flags;
>>>>>>> fence->error = 0;
>>>>>>> trace_dma_fence_init(fence);
>>>>>>> }
>>>>>>> +
>>>>>>> +/**
>>>>>>> + * dma_fence_init - Initialize a custom fence.
>>>>>>> + * @fence: the fence to initialize
>>>>>>> + * @ops: the dma_fence_ops for operations on this fence
>>>>>>> + * @lock: the irqsafe spinlock to use for locking this fence
>>>>>>> + * @context: the execution context this fence is run on
>>>>>>> + * @seqno: a linear increasing sequence number for this context
>>>>>>> + *
>>>>>>> + * Initializes an allocated fence, the caller doesn't have to keep its
>>>>>>> + * refcount after committing with this fence, but it will need to hold a
>>>>>>> + * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>>>> + *
>>>>>>> + * context and seqno are used for easy comparison between fences, allowing
>>>>>>> + * to check which fence is later by simply using dma_fence_later().
>>>>>>> + */
>>>>>>> +void
>>>>>>> +dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> + spinlock_t *lock, u64 context, u64 seqno)
>>>>>>> +{
>>>>>>> + __dma_fence_init(fence, ops, lock, context, seqno, 0UL);
>>>>>>> +}
>>>>>>> EXPORT_SYMBOL(dma_fence_init);
>>>>>>> +
>>>>>>> +/**
>>>>>>> + * dma_fence_init64 - Initialize a custom fence with 64-bit seqno support.
>>>>>>> + * @fence: the fence to initialize
>>>>>>> + * @ops: the dma_fence_ops for operations on this fence
>>>>>>> + * @lock: the irqsafe spinlock to use for locking this fence
>>>>>>> + * @context: the execution context this fence is run on
>>>>>>> + * @seqno: a linear increasing sequence number for this context
>>>>>>> + *
>>>>>>> + * Initializes an allocated fence, the caller doesn't have to keep its
>>>>>>> + * refcount after committing with this fence, but it will need to hold a
>>>>>>> + * refcount again if &dma_fence_ops.enable_signaling gets called.
>>>>>>> + *
>>>>>>> + * Context and seqno are used for easy comparison between fences, allowing
>>>>>>> + * to check which fence is later by simply using dma_fence_later().
>>>>>>> + */
>>>>>>> +void
>>>>>>> +dma_fence_init64(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> + spinlock_t *lock, u64 context, u64 seqno)
>>>>>>> +{
>>>>>>> + __dma_fence_init(fence, ops, lock, context, seqno,
>>>>>>> + BIT(DMA_FENCE_FLAG_SEQNO64_BIT));
>>>>>>> +}
>>>>>>> +EXPORT_SYMBOL(dma_fence_init64);
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>>>> index 1a7469543db5..79713421bffe 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_eviction_fence.c
>>>>>>> @@ -134,7 +134,6 @@ static bool amdgpu_eviction_fence_enable_signaling(struct dma_fence *f)
>>>>>>> }
>>>>>>> static const struct dma_fence_ops amdgpu_eviction_fence_ops = {
>>>>>>> - .use_64bit_seqno = true,
>>>>>>> .get_driver_name = amdgpu_eviction_fence_get_driver_name,
>>>>>>> .get_timeline_name = amdgpu_eviction_fence_get_timeline_name,
>>>>>>> .enable_signaling = amdgpu_eviction_fence_enable_signaling,
>>>>>>> @@ -160,9 +159,9 @@ amdgpu_eviction_fence_create(struct amdgpu_eviction_fence_mgr *evf_mgr)
>>>>>>> ev_fence->evf_mgr = evf_mgr;
>>>>>>> get_task_comm(ev_fence->timeline_name, current);
>>>>>>> spin_lock_init(&ev_fence->lock);
>>>>>>> - dma_fence_init(&ev_fence->base, &amdgpu_eviction_fence_ops,
>>>>>>> - &ev_fence->lock, evf_mgr->ev_fence_ctx,
>>>>>>> - atomic_inc_return(&evf_mgr->ev_fence_seq));
>>>>>>> + dma_fence_init64(&ev_fence->base, &amdgpu_eviction_fence_ops,
>>>>>>> + &ev_fence->lock, evf_mgr->ev_fence_ctx,
>>>>>>> + atomic_inc_return(&evf_mgr->ev_fence_seq));
>>>>>>> return ev_fence;
>>>>>>> }
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>>>> index 029cb24c28b3..5e92d00a591f 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
>>>>>>> @@ -239,8 +239,8 @@ static int amdgpu_userq_fence_create(struct amdgpu_usermode_queue *userq,
>>>>>>> fence = &userq_fence->base;
>>>>>>> userq_fence->fence_drv = fence_drv;
>>>>>>> - dma_fence_init(fence, &amdgpu_userq_fence_ops, &userq_fence->lock,
>>>>>>> - fence_drv->context, seq);
>>>>>>> + dma_fence_init64(fence, &amdgpu_userq_fence_ops, &userq_fence->lock,
>>>>>>> + fence_drv->context, seq);
>>>>>>> amdgpu_userq_fence_driver_get(fence_drv);
>>>>>>> dma_fence_get(fence);
>>>>>>> @@ -334,7 +334,6 @@ static void amdgpu_userq_fence_release(struct dma_fence *f)
>>>>>>> }
>>>>>>> static const struct dma_fence_ops amdgpu_userq_fence_ops = {
>>>>>>> - .use_64bit_seqno = true,
>>>>>>> .get_driver_name = amdgpu_userq_fence_get_driver_name,
>>>>>>> .get_timeline_name = amdgpu_userq_fence_get_timeline_name,
>>>>>>> .signaled = amdgpu_userq_fence_signaled,
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>>>> index 51cddfa3f1e8..5d26797356a3 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_tlb_fence.c
>>>>>>> @@ -71,7 +71,6 @@ static void amdgpu_tlb_fence_work(struct work_struct *work)
>>>>>>> }
>>>>>>> static const struct dma_fence_ops amdgpu_tlb_fence_ops = {
>>>>>>> - .use_64bit_seqno = true,
>>>>>>> .get_driver_name = amdgpu_tlb_fence_get_driver_name,
>>>>>>> .get_timeline_name = amdgpu_tlb_fence_get_timeline_name
>>>>>>> };
>>>>>>> @@ -101,8 +100,8 @@ void amdgpu_vm_tlb_fence_create(struct amdgpu_device *adev, struct amdgpu_vm *vm
>>>>>>> INIT_WORK(&f->work, amdgpu_tlb_fence_work);
>>>>>>> spin_lock_init(&f->lock);
>>>>>>> - dma_fence_init(&f->base, &amdgpu_tlb_fence_ops, &f->lock,
>>>>>>> - vm->tlb_fence_context, atomic64_read(&vm->tlb_seq));
>>>>>>> + dma_fence_init64(&f->base, &amdgpu_tlb_fence_ops, &f->lock,
>>>>>>> + vm->tlb_fence_context, atomic64_read(&vm->tlb_seq));
>>>>>>> /* TODO: We probably need a separate wq here */
>>>>>>> dma_fence_get(&f->base);
>>>>>>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
>>>>>>> index 48b5202c531d..a34a0dcdc446 100644
>>>>>>> --- a/include/linux/dma-fence.h
>>>>>>> +++ b/include/linux/dma-fence.h
>>>>>>> @@ -97,6 +97,7 @@ struct dma_fence {
>>>>>>> };
>>>>>>> enum dma_fence_flag_bits {
>>>>>>> + DMA_FENCE_FLAG_SEQNO64_BIT,
>>>>>>> DMA_FENCE_FLAG_SIGNALED_BIT,
>>>>>>> DMA_FENCE_FLAG_TIMESTAMP_BIT,
>>>>>>> DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
>>>>>>> @@ -124,14 +125,6 @@ struct dma_fence_cb {
>>>>>>> *
>>>>>>> */
>>>>>>> struct dma_fence_ops {
>>>>>>> - /**
>>>>>>> - * @use_64bit_seqno:
>>>>>>> - *
>>>>>>> - * True if this dma_fence implementation uses 64bit seqno, false
>>>>>>> - * otherwise.
>>>>>>> - */
>>>>>>> - bool use_64bit_seqno;
>>>>>>> -
>>>>>>> /**
>>>>>>> * @get_driver_name:
>>>>>>> *
>>>>>>> @@ -262,6 +255,9 @@ struct dma_fence_ops {
>>>>>>> void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> spinlock_t *lock, u64 context, u64 seqno);
>>>>>>> +void dma_fence_init64(struct dma_fence *fence, const struct dma_fence_ops *ops,
>>>>>>> + spinlock_t *lock, u64 context, u64 seqno);
>>>>>>> +
>>>>>>> void dma_fence_release(struct kref *kref);
>>>>>>> void dma_fence_free(struct dma_fence *fence);
>>>>>>> void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq);
>>>>>>> @@ -454,7 +450,7 @@ static inline bool __dma_fence_is_later(struct dma_fence *fence, u64 f1, u64 f2)
>>>>>>> * 32bit sequence numbers. Use a 64bit compare when the driver says to
>>>>>>> * do so.
>>>>>>> */
>>>>>>> - if (fence->ops->use_64bit_seqno)
>>>>>>> + if (test_bit(DMA_FENCE_FLAG_SEQNO64_BIT, &fence->flags))
>>>>>>> return f1 > f2;
>>>>>>> return (int)(lower_32_bits(f1) - lower_32_bits(f2)) > 0;
>>>>>>
>>>>>
>>>>
>>>
>>
>
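To make the string-copy suggestion above concrete, a hypothetical sketch of what the destruction path could look like (the function name, the buffer sizes and the string-based trace event are illustrative assumptions, not code from the series):

#include <linux/dma-fence.h>
#include <linux/rcupdate.h>
#include <linux/string.h>

static void dma_fence_trace_and_release(struct dma_fence *fence)
{
	char driver[32], timeline[32];

	/* Copy the names the tracepoint needs under the RCU read lock. */
	rcu_read_lock();
	strscpy(driver, fence->ops->get_driver_name(fence), sizeof(driver));
	strscpy(timeline, fence->ops->get_timeline_name(fence),
		sizeof(timeline));
	rcu_read_unlock();

	trace_dma_fence_release_str(driver, timeline);	/* hypothetical event */

	/*
	 * ops->release now runs outside the RCU read section and stays
	 * free to take mutexes or call synchronize_rcu().
	 */
	if (fence->ops->release)
		fence->ops->release(fence);
	else
		dma_fence_free(fence);
}

That would confine the RCU critical section to the name lookup itself instead of having it cover the whole release path.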
On Tue, Jun 03, 2025 at 02:20:51PM +0800, Xu Yilun wrote:
> > Wouldn’t it be simpler to skip the reference count increment altogether
> > and just call tsm_unbind in the virtual device’s destroy callback?
> > (iommufd_vdevice_destroy())
>
> The vdevice refcount is the main concern; there is also an IOMMU_DESTROY
> ioctl. The user could just free the vdevice instance if there is no refcount,
> while VFIO is still in the bound state. That does not seem like the correct
> free order.
Freeing the vdevice should automatically unbind it.
Jason
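A sketch of that ordering, assuming a bound-state flag and an unbind helper along the lines discussed above (vdev->tsm_bound and tsm_unbind() are assumptions about the series, not verified code):

static void iommufd_vdevice_destroy(struct iommufd_object *obj)
{
	struct iommufd_vdevice *vdev =
		container_of(obj, struct iommufd_vdevice, obj);

	/*
	 * Destroying the object implies leaving the bound state, so no
	 * extra refcount is needed to pin the vdevice while bound.
	 */
	if (vdev->tsm_bound)
		tsm_unbind(vdev);

	/* ... existing vdevice teardown continues here ... */
}

With the unbind folded into destruction, an IOMMU_DESTROY on a bound vdevice would unbind first rather than freeing out of order.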
On 6/2/25 17:25, Tvrtko Ursulin wrote:
>
> On 02/06/2025 15:42, Christian König wrote:
>> On 6/2/25 15:05, Tvrtko Ursulin wrote:
>>>
>>> Hi,
>>>
>>> On 15/05/2025 14:15, Christian König wrote:
>>>> Hey drm-misc maintainers,
>>>>
>>>> can you guys please backmerge drm-next into drm-misc-next?
>>>>
>>>> I want to push this patch here but it depends on changes which are partially in drm-next and partially in drm-misc-next.
>>>
>>> Looks like the backmerge is still pending?
>>
>> Yes, @Maarten, @Maxime and @Thomas ping on this.
>>
>>> In the meantime, Christian, any chance you will have some bandwidth to think about the tail end of the series? Specifically patch 6 and how that is used onward.
>>
>> Well, the RCU grace period is quite a nifty hack. I wanted to go over it again after merging the first patches from this series.
>>
>> In general it looks like a good idea to me, I just don't like that we explicitly need to expose dma_fence_access_begin() and dma_fence_access_end().
>>
>> In particular, we can't do that while calling fence->ops->release.
>
> Hm, why not? You think something will take offence at the rcu_read_lock()?
Yes, in particular it is perfectly legitimate to call synchronize_rcu() or to lock semaphores/mutexes from that callback.
Either keep the RCU critical section only for the trace, or, even better, come up with a different approach, e.g. copying the string under the RCU lock or something like that.
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
>>>> On 5/15/25 11:49, Tvrtko Ursulin wrote:
>>>>> [snip]
>>>
>>
>