On 11/27/25 09:23, Viresh Kumar wrote:
> On 27-11-25, 09:07, Christian König wrote:
>> On 11/27/25 08:40, Viresh Kumar wrote:
>>> Move several dma-buf function declarations under
>>> CONFIG_DMA_SHARED_BUFFER and provide static inline no-op implementations
>>> for the disabled case to allow the callers to build when the feature is
>>> not compiled in.
>>
>> Good point, but which driver actually needs that?
>
> This broke some WIP stuff [1] which isn't posted upstream yet. That's why I
> didn't mention anything in the commit log, though I could have added a comment
> about that in the non-commit-log part.
Well then better send that out with the full patch set.
>> In other words there should be a concrete example of what breaks in the commit message.
>
> Those changes will take time to be posted, and I'm not sure if they will be
> accepted or not. But either way, this change makes sense in general, so I
> thought there was nothing wrong with getting it upstream right away.
Yeah, when it is unused in the meantime that is usually a no-go, even if I agree that it makes sense.
>>> +static inline struct dma_buf *dma_buf_get(int fd)
>>> +{
>>> + return NULL;
>>
>> And here ERR_PTR(-EINVAL).
>
> I am not really sure if this should be EINVAL in this case. EOPNOTSUPP still
> makes sense as the API isn't supported.
When the API isn't compiled in, the fd can't be valid (because you can't create a dma_buf object in the first place).
So returning -EINVAL still makes a lot of sense.
Regards,
Christian.
>
>>> +static inline struct dma_buf *dma_buf_iter_begin(void)
>>> +{
>>> + return NULL;
>>> +}
>>> +
>>> +static inline struct dma_buf *dma_buf_iter_next(struct dma_buf *dmbuf)
>>> +{
>>> + return NULL;
>>> +}
>>
>> Those two are only for BPF and not driver use.
>
> Will drop them.
>
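For reference, a minimal sketch of what the disabled-case stub could look like
with Christian's suggestion applied (assuming it sits in the
!CONFIG_DMA_SHARED_BUFFER branch of include/linux/dma-buf.h; this is not the
exact hunk from Viresh's patch):

	/*
	 * Without the dma-buf core no dma_buf object can ever be created,
	 * so any fd passed in here is by definition invalid.
	 */
	static inline struct dma_buf *dma_buf_get(int fd)
	{
		return ERR_PTR(-EINVAL);
	}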
On 11/27/25 10:16, Philipp Stanner wrote:
> On Thu, 2025-11-27 at 09:11 +0100, Christian König wrote:
>> On 11/26/25 17:55, Matthew Brost wrote:
>>> On Wed, Nov 26, 2025 at 08:41:27AM -0800, Matthew Brost wrote:
>>>> On Wed, Nov 26, 2025 at 02:19:10PM +0100, Philipp Stanner wrote:
>>>>> The dma_fence framework checks at many places whether the signaled flag
>>>>> of a fence is already set. The code can be simplified and made more
>>>>> readable by providing a helper function for that.
>>>>>
>>>>> Add dma_fence_test_signaled_flag(), which only checks whether a fence is
>>>>> signaled. Use it internally.
>>>>>
>>>>> Suggested-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
>>>>> Signed-off-by: Philipp Stanner <phasta(a)kernel.org>
>>>>
>>>> This is a nice cleanup:
>>>> Reviewed-by: Matthew Brost <matthew.brost(a)intel.com>
>>>>
>>>>> ---
>>>>> drivers/dma-buf/dma-fence.c | 19 +++++++++----------
>>>>> include/linux/dma-fence.h | 24 ++++++++++++++++++++++--
>>>>> 2 files changed, 31 insertions(+), 12 deletions(-)
>>>>>
>>>>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>>>>> index 39e6f93dc310..25117a906846 100644
>>>>> --- a/drivers/dma-buf/dma-fence.c
>>>>> +++ b/drivers/dma-buf/dma-fence.c
>>>>> @@ -372,8 +372,7 @@ int dma_fence_signal_timestamp_locked(struct dma_fence *fence,
>>>>>
>>>>> lockdep_assert_held(fence->lock);
>>>>>
>>>>> - if (unlikely(test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
>>>>> - &fence->flags)))
>>>
>>> I need to read a little better, I think this change isn't quite right.
>>> The original code is test and set, the updated code is test only (i.e.,
>>> you are missing the set step). So maybe just leave this line as is.
>>
>> Oh, good point! I've totally missed that as well.
>
> Oh dear; I also just saw it when opening the mail client ._.
>
>>
>> But that means that this patch set hasn't even been smoke tested.
>
> I've built it and did some basic testing with my Nouveau system. Any
> suggestions? Do you have a CI that one can trigger?
DMA-buf has CONFIG_DMABUF_SELFTESTS which should be able to catch things like that.
But even running Nouveau should have found this, since basically no fence at all would signal any more.
Regards,
Christian.
>
> Thx
> P.
On 11/26/25 17:55, Matthew Brost wrote:
> On Wed, Nov 26, 2025 at 08:41:27AM -0800, Matthew Brost wrote:
>> On Wed, Nov 26, 2025 at 02:19:10PM +0100, Philipp Stanner wrote:
>>> The dma_fence framework checks at many places whether the signaled flag
>>> of a fence is already set. The code can be simplified and made more
>>> readable by providing a helper function for that.
>>>
>>> Add dma_fence_test_signaled_flag(), which only checks whether a fence is
>>> signaled. Use it internally.
>>>
>>> Suggested-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
>>> Signed-off-by: Philipp Stanner <phasta(a)kernel.org>
>>
>> This is a nice cleanup:
>> Reviewed-by: Matthew Brost <matthew.brost(a)intel.com>
>>
>>> ---
>>> drivers/dma-buf/dma-fence.c | 19 +++++++++----------
>>> include/linux/dma-fence.h | 24 ++++++++++++++++++++++--
>>> 2 files changed, 31 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>>> index 39e6f93dc310..25117a906846 100644
>>> --- a/drivers/dma-buf/dma-fence.c
>>> +++ b/drivers/dma-buf/dma-fence.c
>>> @@ -372,8 +372,7 @@ int dma_fence_signal_timestamp_locked(struct dma_fence *fence,
>>>
>>> lockdep_assert_held(fence->lock);
>>>
>>> - if (unlikely(test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
>>> - &fence->flags)))
>
> I need to read a little better, I think this change isn't quite right.
> The original code is test and set, the updated code is test only (i.e.,
> you are missing the set step). So maybe just leave this line as is.
Oh, good point! I've totally missed that as well.
But that means that this patch set hasn't even been smoke tested.
Regards,
Christian.
>
> Matt
>
>>> + if (unlikely(dma_fence_test_signaled_flag(fence)))
>>> return -EINVAL;
>>>
>>> /* Stash the cb_list before replacing it with the timestamp */
>>> @@ -545,7 +544,7 @@ void dma_fence_release(struct kref *kref)
>>> trace_dma_fence_destroy(fence);
>>>
>>> if (!list_empty(&fence->cb_list) &&
>>> - !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
>>> + !dma_fence_test_signaled_flag(fence)) {
>>> const char __rcu *timeline;
>>> const char __rcu *driver;
>>> unsigned long flags;
>>> @@ -602,7 +601,7 @@ static bool __dma_fence_enable_signaling(struct dma_fence *fence)
>>> was_set = test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
>>> &fence->flags);
>>>
>>> - if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>>> + if (dma_fence_test_signaled_flag(fence))
>>> return false;
>>>
>>> if (!was_set && fence->ops->enable_signaling) {
>>> @@ -666,7 +665,7 @@ int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
>>> if (WARN_ON(!fence || !func))
>>> return -EINVAL;
>>>
>>> - if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
>>> + if (dma_fence_test_signaled_flag(fence)) {
>>> INIT_LIST_HEAD(&cb->node);
>>> return -ENOENT;
>>> }
>>> @@ -783,7 +782,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
>>>
>>> spin_lock_irqsave(fence->lock, flags);
>>>
>>> - if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>>> + if (dma_fence_test_signaled_flag(fence))
>>> goto out;
>>>
>>> if (intr && signal_pending(current)) {
>>> @@ -800,7 +799,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
>>> cb.task = current;
>>> list_add(&cb.base.node, &fence->cb_list);
>>>
>>> - while (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
>>> + while (!dma_fence_test_signaled_flag(fence) && ret > 0) {
>>> if (intr)
>>> __set_current_state(TASK_INTERRUPTIBLE);
>>> else
>>> @@ -832,7 +831,7 @@ dma_fence_test_signaled_any(struct dma_fence **fences, uint32_t count,
>>>
>>> for (i = 0; i < count; ++i) {
>>> struct dma_fence *fence = fences[i];
>>> - if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
>>> + if (dma_fence_test_signaled_flag(fence)) {
>>> if (idx)
>>> *idx = i;
>>> return true;
>>> @@ -1108,7 +1107,7 @@ const char __rcu *dma_fence_driver_name(struct dma_fence *fence)
>>> RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
>>> "RCU protection is required for safe access to returned string");
>>>
>>> - if (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>>> + if (!dma_fence_test_signaled_flag(fence))
>>> return fence->ops->get_driver_name(fence);
>>> else
>>> return "detached-driver";
>>> @@ -1140,7 +1139,7 @@ const char __rcu *dma_fence_timeline_name(struct dma_fence *fence)
>>> RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
>>> "RCU protection is required for safe access to returned string");
>>>
>>> - if (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>>> + if (!dma_fence_test_signaled_flag(fence))
>>> return fence->ops->get_timeline_name(fence);
>>> else
>>> return "signaled-timeline";
>>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
>>> index 64639e104110..19972f5d176f 100644
>>> --- a/include/linux/dma-fence.h
>>> +++ b/include/linux/dma-fence.h
>>> @@ -401,6 +401,26 @@ void dma_fence_enable_sw_signaling(struct dma_fence *fence);
>>> const char __rcu *dma_fence_driver_name(struct dma_fence *fence);
>>> const char __rcu *dma_fence_timeline_name(struct dma_fence *fence);
>>>
>>> +/*
>>> + * dma_fence_test_signaled_flag - Only check whether a fence is signaled yet.
>>> + * @fence: the fence to check
>>> + *
>>> + * This function just checks whether @fence is signaled, without interacting
>>> + * with the fence in any way. The user must, therefore, ensure through other
>>> + * means that fences get signaled eventually.
>>> + *
>>> + * This function uses test_bit(), which is thread-safe. Naturally, this function
>>> + * should be used opportunistically; a fence could get signaled at any moment
>>> + * after the check is done.
>>> + *
>>> + * Return: true if signaled, false otherwise.
>>> + */
>>> +static inline bool
>>> +dma_fence_test_signaled_flag(struct dma_fence *fence)
>>> +{
>>> + return test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags);
>>> +}
>>> +
>>> /**
>>> * dma_fence_is_signaled_locked - Return an indication if the fence
>>> * is signaled yet.
>>> @@ -418,7 +438,7 @@ const char __rcu *dma_fence_timeline_name(struct dma_fence *fence);
>>> static inline bool
>>> dma_fence_is_signaled_locked(struct dma_fence *fence)
>>> {
>>> - if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>>> + if (dma_fence_test_signaled_flag(fence))
>>> return true;
>>>
>>> if (fence->ops->signaled && fence->ops->signaled(fence)) {
>>> @@ -448,7 +468,7 @@ dma_fence_is_signaled_locked(struct dma_fence *fence)
>>> static inline bool
>>> dma_fence_is_signaled(struct dma_fence *fence)
>>> {
>>> - if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>>> + if (dma_fence_test_signaled_flag(fence))
>>> return true;
>>>
>>> if (fence->ops->signaled && fence->ops->signaled(fence)) {
>>> --
>>> 2.49.0
>>>
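To make Matthew's point concrete: dma_fence_test_signaled_flag() only reads the
bit, so the hunk in dma_fence_signal_timestamp_locked() has to keep the atomic
test-and-set instead of using the helper. That call site would simply stay as
it was (a sketch, essentially the original code left untouched):

	/* Must atomically set the bit while testing it; the new helper
	 * cannot be used here because it never sets the flag. */
	if (unlikely(test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
				      &fence->flags)))
		return -EINVAL;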
On 2025-11-26 08:19, Philipp Stanner wrote:
> The return code of dma_fence_signal() is not really useful as there is
> nothing reasonable to do if a fence was already signaled. That return
> code shall be removed from the kernel.
>
> Ignore dma_fence_signal()'s return code.
I think this is not correct. Looking at the comment in
evict_process_worker, we use the return value to resolve a race
condition where multiple threads are trying to signal the eviction
fence. Only one of them should schedule the restore work, and the other
ones need to increment the reference count to keep evictions balanced.
Regards,
Felix
>
> Suggested-by: Christian König <christian.koenig(a)amd.com>
> Signed-off-by: Philipp Stanner <phasta(a)kernel.org>
> ---
> drivers/gpu/drm/amd/amdkfd/kfd_process.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
> index ddfe30c13e9d..950fafa4b3c3 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
> @@ -1986,7 +1986,6 @@ kfd_process_gpuid_from_node(struct kfd_process *p, struct kfd_node *node,
> static int signal_eviction_fence(struct kfd_process *p)
> {
> struct dma_fence *ef;
> - int ret;
>
> rcu_read_lock();
> ef = dma_fence_get_rcu_safe(&p->ef);
> @@ -1994,10 +1993,10 @@ static int signal_eviction_fence(struct kfd_process *p)
> if (!ef)
> return -EINVAL;
>
> - ret = dma_fence_signal(ef);
> + dma_fence_signal(ef);
> dma_fence_put(ef);
>
> - return ret;
> + return 0;
> }
>
> static void evict_process_worker(struct work_struct *work)
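A rough sketch of the pattern Felix describes, i.e. why the caller still needs
dma_fence_signal()'s return code. The helper names below
(schedule_restore_work(), take_eviction_reference()) are illustrative
placeholders, not the actual amdkfd code:

	ret = signal_eviction_fence(p);
	if (!ret) {
		/* We are the thread that actually signaled the fence,
		 * so only we schedule the restore work. */
		schedule_restore_work(p);
	} else {
		/* Someone else won the race and already signaled it;
		 * take an extra reference to keep evictions balanced. */
		take_eviction_reference(p);
	}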
This series is the start of adding full DMABUF support to
iommufd. Currently it is limited to only work with VFIO's DMABUF exporter.
It sits on top of Leon's series to add a DMABUF exporter to VFIO:
https://lore.kernel.org/all/20251120-dmabuf-vfio-v9-0-d7f71607f371@nvidia.c…
The existing IOMMU_IOAS_MAP_FILE is enhanced to detect DMABUF fd's, but
otherwise works the same as it does today for a memfd. The user can select
a slice of the FD to map into the ioas and if the underlying alignment
requirements are met it will be placed in the iommu_domain.
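As a rough illustration of that flow from userspace, something like the below
maps a slice of a DMABUF fd into an IOAS. The struct layout and flag names
follow the existing memfd path of IOMMU_IOAS_MAP_FILE as I understand it, so
treat them as assumptions rather than as part of this series:

	#include <sys/ioctl.h>
	#include <linux/iommufd.h>

	static int map_dmabuf_slice(int iommufd, __u32 ioas_id, int dmabuf_fd,
				    __u64 offset, __u64 length, __u64 *iova)
	{
		struct iommu_ioas_map_file cmd = {
			.size = sizeof(cmd),
			.flags = IOMMU_IOAS_MAP_READABLE |
				 IOMMU_IOAS_MAP_WRITEABLE,
			.ioas_id = ioas_id,
			.fd = dmabuf_fd,
			.start = offset,	/* slice of the FD, must meet alignment */
			.length = length,
		};
		int rc = ioctl(iommufd, IOMMU_IOAS_MAP_FILE, &cmd);

		if (rc)
			return rc;
		*iova = cmd.iova;	/* kernel-chosen IOVA without FIXED_IOVA */
		return 0;
	}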
Though limited, it is enough to allow a VMM like QEMU to connect MMIO BAR
memory from VFIO to an iommu_domain controlled by iommufd. This is used
for PCI Peer to Peer support in VMs, and is the last feature that the VFIO
type 1 container has that iommufd couldn't do.
The VFIO type1 version extracts raw PFNs from VMAs, which has no lifetime
control and is a use-after-free security problem.
Instead iommufd relies on revocable DMABUFs. Whenever VFIO thinks there
should be no access to the MMIO it can shoot down the mapping in iommufd
which will unmap it from the iommu_domain. There is no automatic remap,
this is a safety protocol so the kernel doesn't get stuck. Userspace is
expected to know it is doing something that will revoke the dmabuf and
map/unmap it around the activity. E.g. when QEMU goes to issue an FLR it should
do the map/unmap in iommufd.
Since DMABUF is missing some key general features for this use case it
relies on a "private interconnect" between VFIO and iommufd via the
vfio_pci_dma_buf_iommufd_map() call.
The call confirms the DMABUF has revoke semantics and delivers a phys_addr
for the memory suitable for use with iommu_map().
Medium term there is a desire to expand the supported DMABUFs to include
GPU drivers to support DPDK/SPDK type use cases, so future series will work
to add a general concept of revoke and a general negotiation of
interconnect to remove vfio_pci_dma_buf_iommufd_map().
I also plan another series to modify iommufd's vfio_compat to
transparently pull a dmabuf out of a VFIO VMA to emulate more of the uAPI
of type1.
The latest series for interconnect negotiation to exchange a phys_addr is:
https://lore.kernel.org/r/20251027044712.1676175-1-vivek.kasireddy@intel.com
And the discussion for design of revoke is here:
https://lore.kernel.org/dri-devel/20250114173103.GE5556@nvidia.com/
This is on github: https://github.com/jgunthorpe/linux/commits/iommufd_dmabuf
v2:
- Rebase on Leon's v9
- Fix mislocking in an iopt_fill_domain() error path
- Revise the comments around how the sub page offset works
- Remove a useless WARN_ON in iopt_pages_rw_access()
- Fixed missed memory free in the selftest
v1: https://patch.msgid.link/r/0-v1-64bed2430cdb+31b-iommufd_dmabuf_jgg@nvidia.…
Jason Gunthorpe (9):
vfio/pci: Add vfio_pci_dma_buf_iommufd_map()
iommufd: Add DMABUF to iopt_pages
iommufd: Do not map/unmap revoked DMABUFs
iommufd: Allow a DMABUF to be revoked
iommufd: Allow MMIO pages in a batch
iommufd: Have pfn_reader process DMABUF iopt_pages
iommufd: Have iopt_map_file_pages convert the fd to a file
iommufd: Accept a DMABUF through IOMMU_IOAS_MAP_FILE
iommufd/selftest: Add some tests for the dmabuf flow
drivers/iommu/iommufd/io_pagetable.c | 78 +++-
drivers/iommu/iommufd/io_pagetable.h | 54 ++-
drivers/iommu/iommufd/ioas.c | 8 +-
drivers/iommu/iommufd/iommufd_private.h | 14 +-
drivers/iommu/iommufd/iommufd_test.h | 10 +
drivers/iommu/iommufd/main.c | 10 +
drivers/iommu/iommufd/pages.c | 414 ++++++++++++++++--
drivers/iommu/iommufd/selftest.c | 143 ++++++
drivers/vfio/pci/vfio_pci_dmabuf.c | 34 ++
include/linux/vfio_pci_core.h | 4 +
tools/testing/selftests/iommu/iommufd.c | 43 ++
tools/testing/selftests/iommu/iommufd_utils.h | 44 ++
12 files changed, 786 insertions(+), 70 deletions(-)
base-commit: f836737ed56db9e2d5b047c56a31e05af0f3f116
--
2.43.0
On Tue, Nov 25, 2025 at 05:11:18PM -0800, Alex Mastro wrote:
> fill_sg_entry() splits large DMA buffers into multiple scatter-gather
> entries, each holding up to UINT_MAX bytes. When calculating the DMA
> address for entries beyond the second one, the expression (i * UINT_MAX)
> causes integer overflow due to 32-bit arithmetic.
>
> This manifests when the input arg length >= 8 GiB, which results in the
> loop reaching i >= 2.
>
> Fix by casting i to dma_addr_t before multiplication.
>
> Fixes: 3aa31a8bb11e ("dma-buf: provide phys_vec to scatter-gather mapping routine")
> Signed-off-by: Alex Mastro <amastro(a)fb.com>
> ---
> More color about how I discovered this in [1] for the commit at [2]:
>
> [1] https://lore.kernel.org/all/aSZHO6otK0Heh+Qj@devgpu015.cco6.facebook.com
> [2] https://lore.kernel.org/all/20251120-dmabuf-vfio-v9-6-d7f71607f371@nvidia.c…
> ---
> drivers/dma-buf/dma-buf-mapping.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Jason Gunthorpe <jgg(a)nvidia.com>
AlexW, can you pick this up?
Thanks,
Jason
On Wed, Nov 26, 2025 at 08:08:24AM -0800, Alex Mastro wrote:
> On Wed, Nov 26, 2025 at 01:12:40PM +0000, Pranjal Shrivastava wrote:
> > On Tue, Nov 25, 2025 at 04:18:03PM -0800, Alex Mastro wrote:
> > > On Thu, Nov 20, 2025 at 11:28:25AM +0200, Leon Romanovsky wrote:
> > > > +static struct scatterlist *fill_sg_entry(struct scatterlist *sgl, size_t length,
> > > > + dma_addr_t addr)
> > > > +{
> > > > + unsigned int len, nents;
> > > > + int i;
> > > > +
> > > > + nents = DIV_ROUND_UP(length, UINT_MAX);
> > > > + for (i = 0; i < nents; i++) {
> > > > + len = min_t(size_t, length, UINT_MAX);
> > > > + length -= len;
> > > > + /*
> > > > + * DMABUF abuses scatterlist to create a scatterlist
> > > > + * that does not have any CPU list, only the DMA list.
> > > > + * Always set the page related values to NULL to ensure
> > > > + * importers can't use it. The phys_addr based DMA API
> > > > + * does not require the CPU list for mapping or unmapping.
> > > > + */
> > > > + sg_set_page(sgl, NULL, 0, 0);
> > > > + sg_dma_address(sgl) = addr + i * UINT_MAX;
> > >
> > > (i * UINT_MAX) happens in 32-bit before being promoted to dma_addr_t for
> > > addition with addr. Overflows for i >=2 when length >= 8 GiB. Needs a cast:
> > >
> > > sg_dma_address(sgl) = addr + (dma_addr_t)i * UINT_MAX;
Yeah, and i should not be signed.
> > > Discovered this while debugging why dma-buf import was failing for
> > > an 8 GiB dma-buf using my earlier toy program [1]. It was surfaced by
> > > ib_umem_find_best_pgsz() returning 0 due to a malformed scatterlist, which bubbles
> > > up as an EINVAL.
> > >
> >
> > Thanks a lot for testing & reporting this!
> >
> > However, I believe the casting approach is a little fragile (and
> > potentially prone to issues depending on how dma_addr_t is sized on
> > different platforms). Thus, approaching this with accumulation seems
> > better as it avoids the multiplication logic entirely, maybe something
> > like the following (untested) diff ?
>
> If the function input range is well-formed, then all values in
> [addr..addr+length) must be expressible by dma_addr_t, so I don't think overflow
> after casting is possible as long as nents is valid.
It is probably not perfect, but validate_dmabuf_input() limits length
to a valid size_t.
The signature is:
bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
phys_addr_t phys, size_t size)
And that function should fail if size is too large. I think it mostly
does, but it looks like there are a few little misses:
iova_align(iovad, size + iova_off),
return ALIGN(size, iovad->granule);
etc are all unchecked math that could overflow.
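Something along these lines is presumably what is needed there; the placement
and variable names just follow the two quoted snippets, so this is only a
sketch of the idea:

	/* Bail out before the rounding math below can wrap. */
	if (size > SIZE_MAX - iova_off ||
	    ALIGN(size + iova_off, iovad->granule) < size + iova_off)
		return false;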
> That said, `nents = DIV_ROUND_UP(length, UINT_MAX)` is simply broken on any
> system where size_t is 32b. I don't know if that's a practical consideration for
> these code paths though.
Yeah, that's a good point.
Casting to u64 will trigger 64 bit divide errors on 32 bit too.
// DIV_ROUND_UP that is safe at the type limits
nents = size / UINT_MAX;
if (size % UINT_MAX)
nents++;
Compiler should turn the % into bit math.
Jason
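Putting the two fixes from this thread together (the dma_addr_t cast before the
multiply and a division that cannot overflow at the type limits),
fill_sg_entry() would end up roughly like this. The tail of the loop is
reconstructed from context, so this is illustrative rather than the exact
merged patch:

	static struct scatterlist *fill_sg_entry(struct scatterlist *sgl,
						 size_t length, dma_addr_t addr)
	{
		unsigned int len, nents, i;

		/* DIV_ROUND_UP(length, UINT_MAX) without risking overflow */
		nents = length / UINT_MAX;
		if (length % UINT_MAX)
			nents++;

		for (i = 0; i < nents; i++) {
			len = min_t(size_t, length, UINT_MAX);
			length -= len;
			/* DMA-only scatterlist, no CPU page list */
			sg_set_page(sgl, NULL, 0, 0);
			sg_dma_address(sgl) = addr + (dma_addr_t)i * UINT_MAX;
			sg_dma_len(sgl) = len;
			sgl = sg_next(sgl);
		}
		return sgl;
	}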
On Wednesday, 2025-11-26 at 16:44 +0100, Philipp Stanner wrote:
> On Wed, 2025-11-26 at 16:03 +0100, Christian König wrote:
> >
> >
> > On 11/26/25 13:37, Philipp Stanner wrote:
> > > On Wed, 2025-11-26 at 13:31 +0100, Christian König wrote:
> > > >
>
> […]
>
> > > > Well the question is how do you detect *reliable* that there is
> > > > still forward progress?
> > >
> > > My understanding is that that's impossible since the internals of
> > > command submissions are only really understood by userspace, which
> > > submits them.
> >
> > Right, but we can still try to do our best in the kernel to mitigate
> > the situation.
> >
> > I think for now amdgpu will implement something like checking if the
> > HW still makes progress after a timeout but only a limited number of
> > re-tries until we say that's it and reset anyway.
>
> Oh oh, isn't that our dear hang_limit? :)
Not really. The hang limit is the limit on how many times a hanging
submit might be retried.
Limiting the number of timeout extensions is more of a safety net
against workloads which might appear to make progress to the kernel
driver but in reality are stuck. After all, the kernel driver can only
have limited knowledge of the GPU state and any progress check will
have limited precision with false positives/negatives being a part of
reality we have to deal with.
>
> We agree that you can never really know whether userspace just submitted
> a while(true) job, don't we? Even if some GPU register still indicates
> "progress".
Yeah, this really depends on the hardware and on what you can read at
runtime.
For etnaviv we define "progress" as the command frontend moving towards
the end of the command buffer. As a single draw call in valid workloads
can blow through our timeout we also use debug registers to look at the
current primitive ID within a draw call.
If userspace submits a workload that requires more than 500ms per
primitive to finish we consider this an invalid workload and go through
the reset/recovery motions.
Regards,
Lucas
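For illustration, the kind of check Lucas describes could be shaped roughly
like this in a job-timeout handler. The register accessors and the hangcheck_*
bookkeeping are made-up placeholders, not the real etnaviv code:

	struct example_gpu {
		u32 hangcheck_fe_addr;	/* frontend DMA address at last timeout */
		u32 hangcheck_prim_id;	/* primitive ID at last timeout */
		/* ... */
	};

	static bool example_gpu_made_progress(struct example_gpu *gpu)
	{
		u32 fe_addr = example_read_fe_dma_address(gpu);	/* hypothetical */
		u32 prim_id = example_read_primitive_id(gpu);	/* hypothetical */
		bool progress = fe_addr != gpu->hangcheck_fe_addr ||
				prim_id != gpu->hangcheck_prim_id;

		gpu->hangcheck_fe_addr = fe_addr;
		gpu->hangcheck_prim_id = prim_id;
		return progress;
	}

The timeout handler would keep extending the timeout while this returns true
and only go through reset/recovery once both the command frontend and the
current primitive have stopped moving for a full timeout period.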