Hi guys,
we are currently working on Freesync and direct scan out from system
memory on AMD APUs in A+A laptops.
One problem we stumbled over is that our display hardware needs to scan
out from uncached system memory and we currently don't have a way to
communicate that through DMA-buf.
For our specific use case at hand we are going to implement something
driver-specific, but the question is: should we have something more
generic for this?
After all, the system memory access pattern is a PCIe extension and as
such something generic.
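To make the question concrete, here is a purely hypothetical sketch of
what a generic interface could look like (the enum and function below are
invented for illustration, nothing like this exists today):

	/* Hypothetical: let the importer tell the exporter at attach time
	 * which access pattern its hardware needs for the backing pages. */
	enum dma_buf_access_pattern {
		DMA_BUF_ACCESS_CACHED,		/* default */
		DMA_BUF_ACCESS_UNCACHED,	/* scanout from system memory */
	};

	int dma_buf_attach_set_access(struct dma_buf_attachment *attach,
				      enum dma_buf_access_pattern pattern);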
Regards,
Christian.
Hi Daniel,
just a gentle ping that you wanted to take a look at this.
Not much changed compared to the last version, only a minor bugfix in
the dma_resv_get_singleton error handling.
Regards,
Christian.
On Tue, Dec 07, 2021 at 01:34:05PM +0100, Christian König wrote:
> This change adds the dma_resv_usage enum and allows us to specify why a
> dma_resv object is queried for its containing fences.
>
> In addition to that, a dma_resv_usage_rw() helper function is added to aid
> retrieving the fences for a read or write userspace submission.
>
> This is then deployed to the different query functions of the dma_resv
> object and all of their users. Where the write parameter was previously
> true we now use DMA_RESV_USAGE_WRITE, and DMA_RESV_USAGE_READ otherwise.
>
> v2: add KERNEL/OTHER in separate patch
> v3: some kerneldoc suggestions by Daniel
>
> Signed-off-by: Christian König <christian.koenig(a)amd.com>
Just commenting on the kerneldoc here.
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index 40ac9d486f8f..d96d8ca9af56 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -49,6 +49,49 @@ extern struct ww_class reservation_ww_class;
>
> struct dma_resv_list;
>
> +/**
> + * enum dma_resv_usage - how the fences from a dma_resv obj are used
> + *
> + * This enum describes the different use cases for a dma_resv object and
> + * controls which fences are returned when queried.
We need to link from here to &dma_buf.resv and from there back to here.
Also we had a fair amount of text in the old dma_resv fields which should
probably be included here.
> + */
> +enum dma_resv_usage {
> + /**
> + * @DMA_RESV_USAGE_WRITE: Implicit write synchronization.
> + *
> + * This should only be used for userspace command submissions which add
> + * an implicit write dependency.
> + */
> + DMA_RESV_USAGE_WRITE,
> +
> + /**
> + * @DMA_RESV_USAGE_READ: Implicit read synchronization.
> + *
> + * This should only be used for userspace command submissions which add
> + * an implicit read dependency.
I think the above would benefit from at least a link each to &dma_buf.resv
for further discussion.
Plus the READ flag needs a huge warning that in general it does _not_
guarantee that there are no writes possible, nor that any writes can be
assumed to be mistakes and dropped (on buffer moves, e.g.).
Drivers can only make further assumptions for driver-internal dma_resv
objects (e.g. on vm/pagetables) or when the fences are all fences of the
same driver (e.g. the special sync rules amd has that take the fence
owner into account).
We have this documented in the dma_buf.resv rules, but since it came up
again in a discussion with Thomas H. somewhere, it's better to hammer this
in a few more times. Specifically, generally ignoring READ fences for
buffer moves (well, for the copy job; memory freeing still has to wait for
all of them) is a correctness bug.
Maybe include a big warning that really the difference between READ and
WRITE should only matter for implicit sync, and _not_ for anything else
the kernel does.
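To illustrate, a minimal sketch of what that means for a buffer move
(hypothetical driver code; it relies on this patch's convention that a
READ query also returns the WRITE fences, and the value would differ once
the KERNEL/OTHER levels from the follow-up patch exist):

	/* Buffer move: wait for _all_ fences, READ ones included, before
	 * the old backing store may be freed or reused. */
	long ret;

	ret = dma_resv_wait_timeout(resv, DMA_RESV_USAGE_READ, false,
				    MAX_SCHEDULE_TIMEOUT);
	if (ret < 0)
		return ret;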
I'm assuming the actual replacement is all mechanical, so I skipped that
one for now, that's for next year :-)
-Daniel
> + */
> + DMA_RESV_USAGE_READ,
> +};
> +
> +/**
> + * dma_resv_usage_rw - helper for implicit sync
> + * @write: true if we create a new implicit sync write
> + *
> + * This returns the implicit synchronization usage for write or read accesses,
> + * see enum dma_resv_usage.
> + */
> +static inline enum dma_resv_usage dma_resv_usage_rw(bool write)
> +{
> + /* This looks confusing at first sight, but is indeed correct.
> + *
> + * The rationale is that new write operations need to wait for the
> + * existing read and write operations to finish.
> + * But a new read operation only needs to wait for the existing write
> + * operations to finish.
> + */
> + return write ? DMA_RESV_USAGE_READ : DMA_RESV_USAGE_WRITE;
> +}
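As a usage sketch (hypothetical command-submission code, not part of this
patch; ret, is_write, num_deps and deps are made up for illustration):

	/* A submission that writes must depend on readers and writers,
	 * a read-only submission only on the writers. */
	ret = dma_resv_get_fences(obj, dma_resv_usage_rw(is_write),
				  &num_deps, &deps);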
> +
> /**
> * struct dma_resv - a reservation object manages fences for a buffer
> *
> @@ -147,8 +190,8 @@ struct dma_resv_iter {
> /** @obj: The dma_resv object we iterate over */
> struct dma_resv *obj;
>
> - /** @all_fences: If all fences should be returned */
> - bool all_fences;
> + /** @usage: Controls which fences are returned */
> + enum dma_resv_usage usage;
>
> /** @fence: the currently handled fence */
> struct dma_fence *fence;
> @@ -178,14 +221,14 @@ struct dma_fence *dma_resv_iter_next(struct dma_resv_iter *cursor);
> * dma_resv_iter_begin - initialize a dma_resv_iter object
> * @cursor: The dma_resv_iter object to initialize
> * @obj: The dma_resv object which we want to iterate over
> - * @all_fences: If all fences should be returned or just the exclusive one
> + * @usage: controls which fences to include, see enum dma_resv_usage.
> */
> static inline void dma_resv_iter_begin(struct dma_resv_iter *cursor,
> struct dma_resv *obj,
> - bool all_fences)
> + enum dma_resv_usage usage)
> {
> cursor->obj = obj;
> - cursor->all_fences = all_fences;
> + cursor->usage = usage;
> cursor->fence = NULL;
> }
>
> @@ -242,7 +285,7 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor)
> * dma_resv_for_each_fence - fence iterator
> * @cursor: a struct dma_resv_iter pointer
> * @obj: a dma_resv object pointer
> - * @all_fences: true if all fences should be returned
> + * @usage: controls which fences to return
> * @fence: the current fence
> *
> * Iterate over the fences in a struct dma_resv object while holding the
> @@ -251,8 +294,8 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor)
> * valid as long as the lock is held and so no extra reference to the fence is
> * taken.
> */
> -#define dma_resv_for_each_fence(cursor, obj, all_fences, fence) \
> - for (dma_resv_iter_begin(cursor, obj, all_fences), \
> +#define dma_resv_for_each_fence(cursor, obj, usage, fence) \
> + for (dma_resv_iter_begin(cursor, obj, usage), \
> fence = dma_resv_iter_first(cursor); fence; \
> fence = dma_resv_iter_next(cursor))
>
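For reference, a minimal hypothetical use of the reworked macro (the
dependency handling is made up):

	struct dma_resv_iter cursor;
	struct dma_fence *fence;

	dma_resv_assert_held(obj);
	dma_resv_for_each_fence(&cursor, obj, DMA_RESV_USAGE_READ, fence) {
		/* e.g. record each fence as a job dependency */
	}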
> @@ -419,14 +462,14 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
> void dma_resv_replace_fences(struct dma_resv *obj, uint64_t context,
> struct dma_fence *fence);
> void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
> -int dma_resv_get_fences(struct dma_resv *obj, bool write,
> +int dma_resv_get_fences(struct dma_resv *obj, enum dma_resv_usage usage,
> unsigned int *num_fences, struct dma_fence ***fences);
> -int dma_resv_get_singleton(struct dma_resv *obj, bool write,
> +int dma_resv_get_singleton(struct dma_resv *obj, enum dma_resv_usage usage,
> struct dma_fence **fence);
> int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
> -long dma_resv_wait_timeout(struct dma_resv *obj, bool wait_all, bool intr,
> - unsigned long timeout);
> -bool dma_resv_test_signaled(struct dma_resv *obj, bool test_all);
> +long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
> + bool intr, unsigned long timeout);
> +bool dma_resv_test_signaled(struct dma_resv *obj, enum dma_resv_usage usage);
> void dma_resv_describe(struct dma_resv *obj, struct seq_file *seq);
>
> #endif /* _LINUX_RESERVATION_H */
> --
> 2.25.1
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
Hammer it a bit more in that iterators can be restarted and when that
matters, plus suggest to prefer the locked version whenever possible.
Also delete the two leftover kerneldoc comments for static functions, plus
sprinkle some more links while at it.
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
---
drivers/dma-buf/dma-resv.c | 26 ++++++++++++--------------
include/linux/dma-resv.h | 13 ++++++++++++-
2 files changed, 24 insertions(+), 15 deletions(-)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 9eb2baa387d4..1453b664c405 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -323,12 +323,6 @@ void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
}
EXPORT_SYMBOL(dma_resv_add_excl_fence);
-/**
- * dma_resv_iter_restart_unlocked - restart the unlocked iterator
- * @cursor: The dma_resv_iter object to restart
- *
- * Restart the unlocked iteration by initializing the cursor object.
- */
static void dma_resv_iter_restart_unlocked(struct dma_resv_iter *cursor)
{
cursor->seq = read_seqcount_begin(&cursor->obj->seq);
@@ -344,14 +338,6 @@ static void dma_resv_iter_restart_unlocked(struct dma_resv_iter *cursor)
cursor->is_restarted = true;
}
-/**
- * dma_resv_iter_walk_unlocked - walk over fences in a dma_resv obj
- * @cursor: cursor to record the current position
- *
- * Return all the fences in the dma_resv object which are not yet signaled.
- * The returned fence has an extra local reference so will stay alive.
- * If a concurrent modify is detected the whole iteration is started over again.
- */
static void dma_resv_iter_walk_unlocked(struct dma_resv_iter *cursor)
{
struct dma_resv *obj = cursor->obj;
@@ -387,6 +373,12 @@ static void dma_resv_iter_walk_unlocked(struct dma_resv_iter *cursor)
* dma_resv_iter_first_unlocked - first fence in an unlocked dma_resv obj.
* @cursor: the cursor with the current position
*
+ * Subsequent fences are iterated with dma_resv_iter_next_unlocked().
+ *
+ * Beware that the iterator can be restarted. Code which accumulates statistics
+ * or similar needs to check for this with dma_resv_iter_is_restarted(). For
+ * this reason prefer the locked dma_resv_iter_first() whenever possible.
+ *
* Returns the first fence from an unlocked dma_resv obj.
*/
struct dma_fence *dma_resv_iter_first_unlocked(struct dma_resv_iter *cursor)
@@ -406,6 +398,10 @@ EXPORT_SYMBOL(dma_resv_iter_first_unlocked);
* dma_resv_iter_next_unlocked - next fence in an unlocked dma_resv obj.
* @cursor: the cursor with the current position
*
+ * Beware that the iterator can be restarted. Code which accumulates statistics
+ * or similar needs to check for this with dma_resv_iter_is_restarted(). For
+ * this reason prefer the locked dma_resv_iter_next() whenever possible.
+ *
* Returns the next fence from an unlocked dma_resv obj.
*/
struct dma_fence *dma_resv_iter_next_unlocked(struct dma_resv_iter *cursor)
@@ -431,6 +427,8 @@ EXPORT_SYMBOL(dma_resv_iter_next_unlocked);
* dma_resv_iter_first - first fence from a locked dma_resv object
* @cursor: cursor to record the current position
*
+ * Subsequent fences are iterated with dma_resv_iter_next().
+ *
* Return the first fence in the dma_resv object while holding the
* &dma_resv.lock.
*/
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index dbd235ab447f..ebe908592ac3 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -153,6 +153,13 @@ struct dma_resv {
* struct dma_resv_iter - current position into the dma_resv fences
*
* Don't touch this directly in the driver, use the accessor function instead.
+ *
+ * IMPORTANT
+ *
+ * When using the lockless iterators like dma_resv_iter_next_unlocked() or
+ * dma_resv_for_each_fence_unlocked() beware that the iterator can be restarted.
+ * Code which accumulates statistics or similar needs to check for this with
+ * dma_resv_iter_is_restarted().
*/
struct dma_resv_iter {
/** @obj: The dma_resv object we iterate over */
@@ -243,7 +250,11 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor)
* &dma_resv.lock and using RCU instead. The cursor needs to be initialized
* with dma_resv_iter_begin() and cleaned up with dma_resv_iter_end(). Inside
* the iterator a reference to the dma_fence is held and the RCU lock dropped.
- * When the dma_resv is modified the iteration starts over again.
+ *
+ * Beware that the iterator can be restarted when the struct dma_resv for
+ * @cursor is modified. Code which accumulates statistics or similar needs to
+ * check for this with dma_resv_iter_is_restarted(). For this reason prefer the
+ * locked iterator dma_resv_for_each_fence() whenever possible.
*/
#define dma_resv_for_each_fence_unlocked(cursor, fence) \
for (fence = dma_resv_iter_first_unlocked(cursor); \
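As a sketch of the pattern these warnings ask for (hypothetical code;
note this tree still takes the all_fences bool in dma_resv_iter_begin()):

	struct dma_resv_iter cursor;
	struct dma_fence *fence;
	unsigned int signaled = 0;

	dma_resv_iter_begin(&cursor, obj, true);
	dma_resv_for_each_fence_unlocked(&cursor, fence) {
		if (dma_resv_iter_is_restarted(&cursor))
			signaled = 0;	/* object changed, discard the count */
		if (dma_fence_is_signaled(fence))
			signaled++;
	}
	dma_resv_iter_end(&cursor);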
--
2.33.0
Syzbot has reported a GPF in sg_alloc_append_table_from_pages(). The
problem was ubuf->pages == ZERO_SIZE_PTR.
ubuf->pagecount is calculated from arguments passed from user space. If
the user creates a udmabuf with list.size == 0 then ubuf->pagecount will
also be equal to zero; this causes kmalloc_array() to return
ZERO_SIZE_PTR.
Fix it by validating ubuf->pagecount before passing it to
kmalloc_array().
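For illustration, the failure mode looks roughly like this (a sketch, not
the actual driver code):

	pages = kmalloc_array(0, sizeof(*pages), GFP_KERNEL);
	/* ZERO_SIZE_PTR is a small non-NULL constant, so the usual
	 * error check never fires ... */
	if (!pages)
		return -ENOMEM;
	/* ... and the first dereference is a GPF. */
	pages[0] = page;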
Fixes: fbb0de795078 ("Add udmabuf misc device")
Reported-and-tested-by: syzbot+2c56b725ec547fa9cb29(a)syzkaller.appspotmail.com
Signed-off-by: Pavel Skripkin <paskripkin(a)gmail.com>
---
Happy New Year and Merry Christmas! :)
With regards,
Pavel Skripkin
---
drivers/dma-buf/udmabuf.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index c57a609db75b..e7330684d3b8 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -190,6 +190,10 @@ static long udmabuf_create(struct miscdevice *device,
if (ubuf->pagecount > pglimit)
goto err;
}
+
+ if (!ubuf->pagecount)
+ goto err;
+
ubuf->pages = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->pages),
GFP_KERNEL);
if (!ubuf->pages) {
--
2.34.1
Hi Xianting,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on linus/master]
[also build test WARNING on v5.16-rc6 next-20211217]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Xianting-Tian/udmabuf-put-dmabuf-i…
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git a7904a538933c525096ca2ccde1e60d0ee62c08e
config: x86_64-randconfig-r024-20211220 (https://download.01.org/0day-ci/archive/20211220/202112202114.4rnU3YZF-lkp@…)
compiler: clang version 14.0.0 (https://github.com/llvm/llvm-project 555eacf75f21cd1dfc6363d73ad187b730349543)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/322781a4da9de4a3057afd933108d23ca7f…
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Xianting-Tian/udmabuf-put-dmabuf-in-case-of-get-fd-failed/20211220-134433
git checkout 322781a4da9de4a3057afd933108d23ca7f5282e
# save the config file to linux build tree
mkdir build_dir
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash drivers/dma-buf/
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp(a)intel.com>
All warnings (new ones prefixed by >>):
drivers/dma-buf/udmabuf.c:293:1: error: function definition is not allowed here
{
^
drivers/dma-buf/udmabuf.c:312:1: error: function definition is not allowed here
{
^
drivers/dma-buf/udmabuf.c:334:1: error: function definition is not allowed here
{
^
drivers/dma-buf/udmabuf.c:353:20: error: use of undeclared identifier 'udmabuf_ioctl'
.unlocked_ioctl = udmabuf_ioctl,
^
drivers/dma-buf/udmabuf.c:355:20: error: use of undeclared identifier 'udmabuf_ioctl'
.compat_ioctl = udmabuf_ioctl,
^
drivers/dma-buf/udmabuf.c:366:1: error: function definition is not allowed here
{
^
drivers/dma-buf/udmabuf.c:371:1: error: function definition is not allowed here
{
^
drivers/dma-buf/udmabuf.c:375:13: error: use of undeclared identifier 'udmabuf_dev_init'
module_init(udmabuf_dev_init)
^
drivers/dma-buf/udmabuf.c:375:13: error: use of undeclared identifier 'udmabuf_dev_init'
drivers/dma-buf/udmabuf.c:376:13: error: use of undeclared identifier 'udmabuf_dev_exit'
module_exit(udmabuf_dev_exit)
^
drivers/dma-buf/udmabuf.c:379:26: error: expected '}'
MODULE_LICENSE("GPL v2");
^
drivers/dma-buf/udmabuf.c:166:1: note: to match this '{'
{
^
>> drivers/dma-buf/udmabuf.c:351:37: warning: ISO C90 forbids mixing declarations and code [-Wdeclaration-after-statement]
static const struct file_operations udmabuf_fops = {
^
1 warning and 11 errors generated.
vim +351 drivers/dma-buf/udmabuf.c
fbb0de79507819 Gerd Hoffmann 2018-08-27 350
fbb0de79507819 Gerd Hoffmann 2018-08-27 @351 static const struct file_operations udmabuf_fops = {
fbb0de79507819 Gerd Hoffmann 2018-08-27 352 .owner = THIS_MODULE,
fbb0de79507819 Gerd Hoffmann 2018-08-27 353 .unlocked_ioctl = udmabuf_ioctl,
d4a197f4047e01 Kristian H. Kristensen 2020-09-03 354 #ifdef CONFIG_COMPAT
d4a197f4047e01 Kristian H. Kristensen 2020-09-03 355 .compat_ioctl = udmabuf_ioctl,
d4a197f4047e01 Kristian H. Kristensen 2020-09-03 356 #endif
fbb0de79507819 Gerd Hoffmann 2018-08-27 357 };
fbb0de79507819 Gerd Hoffmann 2018-08-27 358
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
On Tue, Dec 07, 2021 at 09:46:47PM +0100, Thomas Hellström wrote:
>
> On 12/7/21 19:08, Daniel Vetter wrote:
> > Once more an entire week behind on mails, but this looked interesting
> > enough.
> >
> > On Fri, Dec 03, 2021 at 03:18:01PM +0100, Thomas Hellström wrote:
> > > On Fri, 2021-12-03 at 14:08 +0100, Christian König wrote:
> > > > Am 01.12.21 um 13:16 schrieb Thomas Hellström (Intel):
> > > > > On 12/1/21 12:25, Christian König wrote:
> > > > > > And why do you use dma_fence_chain to generate a timeline for
> > > > > > TTM?
> > > > > > That should come naturally because all the moves must be ordered.
> > > > > Oh, in this case because we're looking at adding stuff at the end
> > > > > of
> > > > > migration (like coalescing object shared fences and / or async
> > > > > unbind
> > > > > fences), which may not complete in order.
> > > > Well that's ok as well. My question is why does this single dma_fence
> > > > then show up in the dma_fence_chain representing the whole
> > > > migration?
> > > What we'd like to happen during eviction is that we
> > >
> > > 1) await any exclusive- or moving fences, then schedule the migration
> > > blit. The blit manages its own GPU ptes. Results in a single fence.
> > > 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
> > > fences.
> > This sounds like over-optimizing for nothing. We only really care about
> > pipelining moves on dgpu, and on dgpu we only care about modern userspace
> > (because even gl moves in that direction).
> Hmm. It's not totally clear what you mean by over-optimizing for nothing,
> is it the fact that we want to start the blit before all shared fences have
> signaled or the fact that we're doing async unbinding to avoid a
> synchronization point that stops us from fully pipelining evictions?
Yup. Not least because that breaks vulkan, so you really can't do this
optimization :-)
In general I meant that unless you really, really understand everything
all the time (which frankly no one does), trying to be clever just isn't
worth it. We have access pending in the dma_resv, so we wait for it: dumb,
simple, no surprises.
> > And modern means that usually even write access is only setting a read
> > fence, because in vk/compute we only set write fences for objects which
> > need implicit sync, and _only_ when actually needed.
> >
> > So ignoring read fences for moves "because it's only reads" is actually
> > busted.
>
> I'm fine with awaiting also shared fences before we start the blit, as
> mentioned also later in the thread, but that is just a matter of when we
> coalesce the shared fences. So since difference in complexity is minimal,
> what's viewed as optimizing for nothing can conversely be viewed as
> unnecessarily waiting for nothing, blocking the migration context timeline
> from progressing with unrelated blits. (Unless there are correctness issues
> of course, see below).
>
> But not setting a write fence after write seems to conflict with dma-buf
> rules as also discussed later in the thread. Perhaps some clarity is needed
> here. How would a writer or reader that implicitly *wants* to wait for
> previous writers go about doing that?
>
> Note that what we're doing is not "moving" in the sense that we're giving up
> or modifying the old storage, but rather starting a blit assuming that the
> contents of the old storage is stable, or the writer doesn't care.
Yeah that's not how dma-buf works, and which is what Christian is trying
to rectify with his huge refactoring/doc series to give a bit clearer
meaning to what a fence in a dma_resv means.
> > I think for buffer moves we should document and enforce (in review) the
> > rule that you have to wait for all fences, otherwise boom. Same really
> > like before freeing backing storage. Otherwise there's just too many gaps
> > and surprises.
> >
> > And yes with Christian's rework of dma_resv this will change, and we'll
> > allow multiple write fences (because that's what amdgpu encoded into their
> > uapi). Still means that you cannot move a buffer without waiting for read
> > fences (or kernel fences or anything really).
>
> Sounds like some agreement is needed here what rules we actually should
> obey. As mentioned above I'm fine with either.
I think it would be good to comment on the doc patch in Christian's series
for that. But essentially read/write don't mean actual read/write to
memory, but only read/write access in terms of implicit sync. Buffers
which do not partake in implicit sync (driver internal stuff) or access
which is not implicitly synced (anything vk does) do _not_ need to set a
write fence. They will (except amdgpu, until they fix their CS uapi)
_only_ set a read fence.
Christian and I had a multi-month discussion on this, so it's a bit
tricky.
> > The other thing is this entire spinlock recursion topic for dma_fence, and
> > I'm deeply unhappy about the truckload of tricks i915 plays and hence in
> > favour of avoiding recursion in this area as much as possible.
>
> TBH I think the corresponding i915 container manages to avoid both the deep
> recursive calls and lock nesting simply by early enable_signaling() and not
> storing the fence pointers of the array fences, which to me appears to be a
> simple and clean approach. No tricks there.
>
> >
> > If we really can't avoid it then irq_work to get a new clean context gets
> > the job done. Making this messy and work is imo a feature, lock nesting of
> > same level locks is just not a good&robust engineering idea.
>
> For the dma-fence-chain and dma-fence-array there are four possibilities
> moving forward:
>
> 1) Keeping the current same-level locking nesting order of container-first
> containee later. This is fully annotated, but fragile and blows up if users
> attempt to nest containers in different orders.
>
> 2) Establishing a reverse-signaling locking order. Not annotatable. blows up
> on signal-on-any.
>
> 3) Early enable-signaling, no lock nesting, low latency but possibly
> unnecessary enable_signaling calls.
>
> 4) irq_work in enable_signaling(). High latency.
>
> The thread finally agreed the solution would be to keep 1), add early
> warnings for the pitfalls and, if possible, provide helpers to flatten
> containers to avoid recursion.
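A sketch of what such a flattening helper could look like (hypothetical
code, assuming the chain stays stable while it is walked twice):

	struct dma_fence *iter, **fences;
	struct dma_fence_array *array;
	unsigned int i = 0, count = 0;

	dma_fence_chain_for_each(iter, head)
		++count;

	fences = kmalloc_array(count, sizeof(*fences), GFP_KERNEL);
	if (!fences)
		return -ENOMEM;

	dma_fence_chain_for_each(iter, head) {
		struct dma_fence_chain *chain = to_dma_fence_chain(iter);

		/* collect the contained fences, not the chain links */
		fences[i++] = dma_fence_get(chain ? chain->fence : iter);
	}

	array = dma_fence_array_create(count, fences,
				       dma_fence_context_alloc(1), 1, false);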
Hm ok seems ok. It's definitely an area where we don't have great
solutions :-/
-Daniel
>
> /Thomas
>
>
> >
> > /me back to being completely burried
> >
> > I do hope I can find some more time to review a few more of Christian's
> > patches this week though :-/
> >
> > Cheers, Daniel
> >
> > > 3) Most but not all of the remaining resv shared fences will have been
> > > finished in 2). We can't easily tell which, so we have a couple of shared
> > > fences left.
> > > 4) Add all fences resulting from 1) 2) and 3) into the per-memory-type
> > > dma-fence-chain.
> > > 5) hand the resulting dma-fence-chain representing the end of migration
> > > over to ttm's resource manager.
> > >
> > > Now this means we have a dma-fence-chain disguised as a dma-fence out
> > > in the wild, and it could in theory reappear as a 3) fence for another
> > > migration unless a very careful audit is done, or as an input to the
> > > dma-fence-array used for that single dependency.
> > >
> > > > That somehow doesn't seem to make sense because each individual step
> > > > of
> > > > the migration needs to wait for those dependencies as well even when
> > > > it
> > > > runs in parallel.
> > > >
> > > > > But that's not really the point, the point was that an (at least to
> > > > > me) seemingly harmless usage pattern, be it real or fictitious, ends
> > > > > up
> > > > > giving you severe internal- or cross-driver headaches.
> > > > Yeah, we probably should document that better. But in general I don't
> > > > see much reason to allow mixing containers. The dma_fence_array and
> > > > dma_fence_chain objects have some distinct use cases and using them
> > > > to build up larger dependency structures sounds really questionable.
> > > Yes, I tend to agree to some extent here. Perhaps add warnings when
> > > adding a chain or array as an input to an array and when accidentally
> > > joining chains, and provide helpers for flattening if needed.
> > >
> > > /Thomas
> > >
> > >
> > > > Christian.
> > > >
> > > > > /Thomas
> > > > >
> > > > >
> > > > > > Regards,
> > > > > > Christian.
> > > > > >
> > > > > >
> > >
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch