On Wednesday, 2021-04-21 at 14:54 +0000, Robin Gong wrote:
> On 2021/04/20 22:01 Lucas Stach <l.stach(a)pengutronix.de> wrote:
> > On Tuesday, 2021-04-20 at 13:47 +0000, Robin Gong wrote:
> > > On 2021/04/19 17:46 Lucas Stach <l.stach(a)pengutronix.de> wrote:
> > > > On Monday, 2021-04-19 at 07:17 +0000, Robin Gong wrote:
> > > > > Hi Lucas,
> > > > >
> > > > > On 2021/04/14 Lucas Stach <l.stach(a)pengutronix.de> wrote:
> > > > > > Hi Robin,
> > > > > >
> > > > > > On Wednesday, 2021-04-14 at 14:33 +0000, Robin Gong wrote:
> > > > > > > On 2020/05/20 17:43 Lucas Stach <l.stach(a)pengutronix.de> wrote:
> > > > > > > > On Wednesday, 2020-05-20 at 16:20 +0800, Shengjiu Wang wrote:
> > > > > > > > > Hi
> > > > > > > > >
> > > > > > > > > On Tue, May 19, 2020 at 6:04 PM Lucas Stach <l.stach(a)pengutronix.de> wrote:
> > > > > > > > > > On Tuesday, 2020-05-19 at 17:41 +0800, Shengjiu Wang wrote:
> > > > > > > > > > > There are two reasons why we need to move the
> > > > > > > > > > > request of the dma channel from probe to open.
> > > > > > > > > >
> > > > > > > > > > How do you handle -EPROBE_DEFER return code from the
> > > > > > > > > > channel request if you don't do it in probe?
> > > > > > > > >
> > > > > > > > > I use dma_request_slave_channel() or dma_request_channel()
> > > > > > > > > instead of dmaengine_pcm_request_chan_of(), so there should
> > > > > > > > > be no -EPROBE_DEFER return code.
> > > > > > > >
> > > > > > > > This is a pretty weak argument. The dmaengine device might
> > > > > > > > probe after you try to get the channel. Using a function to
> > > > > > > > request the channel that doesn't allow you to handle probe
> > > > > > > > deferral is IMHO a bug and should be fixed, instead of
> > > > > > > > building even more assumptions on top of it.
> > > > > > > >
> > > > > > > > > > > - When the dma device is bound to power-domains, the power
> > > > > > > > > > > will be enabled when we request the dma channel. If the
> > > > > > > > > > > request of the dma channel happens at probe time, then the
> > > > > > > > > > > power-domains will always be enabled after kernel boot
> > > > > > > > > > > up, which is not good for power saving, so we need
> > > > > > > > > > > to move the request of the dma channel to .open();
> > > > > > > > > >
> > > > > > > > > > This is certainly something which could be fixed in the
> > > > > > > > > > dmaengine driver.
> > > > > > > > >
> > > > > > > > > The DMA driver always calls pm_runtime_get_sync() in
> > > > > > > > > device_alloc_chan_resources(), and
> > > > > > > > > device_alloc_chan_resources() is called when the channel is
> > > > > > > > > requested, so power is enabled on channel request.
> > > > > > > >
> > > > > > > > So why can't you fix the dmaengine driver to do that RPM
> > > > > > > > call at a later time when the channel is actually going to
> > > > > > > > be used? This will allow further power savings with other
> > > > > > > > slave devices than the audio PCM.
> > > > > > > Hi Lucas,
> > > > > > > Thanks for your suggestion. I have tried to implement
> > > > > > > runtime autosuspend in the fsl-edma driver on i.mx8qm/qxp with
> > > > > > > a delay time (2 sec) for this feature as below (or you can refer
> > > > > > > to drivers/dma/qcom/hidma.c), with pm_runtime_get_sync/
> > > > > > > pm_runtime_put_autosuspend in all dmaengine driver interfaces
> > > > > > > like device_alloc_chan_resources/device_prep_slave_sg/
> > > > > > > device_prep_dma_cyclic/device_tx_status...
> > > > > > >
> > > > > > >
> > > > > > > pm_runtime_use_autosuspend(fsl_chan->dev);
> > > > > > > pm_runtime_set_autosuspend_delay(fsl_chan->dev, 2000);
> > > > > > >
> > > > > > > That could resolve this audio case since autosuspend could
> > > > > > > enter runtime suspend 2 seconds after the last activity if there
> > > > > > > is no further dma transfer but only a channel request
> > > > > > > (device_alloc_chan_resources).
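
For reference, a minimal sketch of the autosuspend pattern described above
(the edma_sketch_* names are made up for illustration and do not reflect the
actual fsl-edma code; only the pm_runtime_* calls show the pattern under
discussion):

#include <linux/dmaengine.h>
#include <linux/pm_runtime.h>

static void edma_sketch_setup_rpm(struct device *dev)
{
	pm_runtime_set_autosuspend_delay(dev, 2000);	/* 2 s, as above */
	pm_runtime_use_autosuspend(dev);
	pm_runtime_enable(dev);
}

static int edma_sketch_alloc_chan_resources(struct dma_chan *chan)
{
	struct device *dev = chan->device->dev;
	int ret;

	ret = pm_runtime_get_sync(dev);		/* powers up the eDMA block */
	if (ret < 0) {
		pm_runtime_put_noidle(dev);
		return ret;
	}

	/* ... allocate descriptors/TCDs here ... */

	/* drop the reference; the HW suspends 2 s later if nothing else runs */
	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);
	return 0;
}

The same get_sync()/put_autosuspend() bracket would be repeated in the other
dmaengine callbacks listed above (prep_slave_sg, prep_dma_cyclic, tx_status,
...).
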
> > > > > > > But unfortunately, it causes another issue. As you know, on our
> > > > > > > i.mx8qm/qxp the power domain is handled by scfw
> > > > > > > (drivers/firmware/imx/scu-pd.c) over a mailbox:
> > > > > > > imx_sc_pd_power()->imx_scu_call_rpc()->
> > > > > > > imx_scu_ipc_write()->mbox_send_message()
> > > > > > > which means it has to wait for completion. Meanwhile, some
> > > > > > > drivers like tty call the dmaengine interfaces in atomic
> > > > > > > context, as below:
> > > > > > >
> > > > > > > static int uart_write(struct tty_struct *tty, const unsigned
> > > > > > > char *buf, int count) {
> > > > > > > .......
> > > > > > > port = uart_port_lock(state, flags);
> > > > > > > ......
> > > > > > > __uart_start(tty); // calls start_tx()->dmaengine_prep_slave_sg...
> > > > > > > uart_port_unlock(port, flags);
> > > > > > > return ret;
> > > > > > > }
> > > > > > >
> > > > > > > Thus dma runtime resume may happen in that timing window and
> > > > > > > cause a kernel alarm.
> > > > > > > I'm not sure whether there are similar limitations in other
> > > > > > > driver subsystems. But for me, it looks like the only way to
> > > > > > > resolve the contradiction between tty and scu-pd (a hardware
> > > > > > > limitation on i.mx8qm/qxp) is to give up autosuspend and keep
> > > > > > > pm_runtime_get_sync only in device_alloc_chan_resources,
> > > > > > > because requesting a channel is a safe non-atomic phase.
> > > > > > > Do you have any idea? Thanks in advance.
> > > > > >
> > > > > > If you look closely at the driver you used as an example
> > > > > > (hidma.c) it looks like there is already something in there,
> > > > > > which looks very much like what you need
> > > > > > here:
> > > > > >
> > > > > > In hidma_issue_pending() the driver tries to get the device to
> > > > > > runtime resume.
> > > > > > If this doesn't work, maybe due to the power domain code not
> > > > > > being able to be called in atomic context, the actual work of
> > > > > > waking up the dma hardware and issuing the descriptor is shunted
> > > > > > to a tasklet.
> > > > > >
> > > > > > If I'm reading this right, this is exactly what you need here to
> > > > > > be able to call the dmaengine code from atomic context: try the
> > > > > > rpm get and issue immediately when possible, otherwise shunt the
> > > > > > work to a non-atomic context where you can deal with the
> > > > > > requirements of scu-pd.
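
A rough sketch of that "try the rpm get, otherwise defer" pattern, adapted
for a power domain that may sleep (scu-pd): the sketch_edma_* names and the
extra work_struct are hypothetical, and unlike hidma the deferred path uses a
work item rather than a tasklet because scu-pd needs process context.

#include <linux/dmaengine.h>
#include <linux/pm_runtime.h>
#include <linux/workqueue.h>

struct sketch_edma_chan {
	struct dma_chan		chan;
	struct work_struct	issue_work;	/* INIT_WORK() at channel setup */
};

static void sketch_edma_hw_kick(struct sketch_edma_chan *sc)
{
	/* program the next queued descriptor into the hardware here */
}

static void sketch_edma_issue_work(struct work_struct *work)
{
	struct sketch_edma_chan *sc =
		container_of(work, struct sketch_edma_chan, issue_work);
	struct device *dev = sc->chan.device->dev;

	/*
	 * Process context: waiting for the sleeping scu-pd power-up is fine
	 * here. The rpm reference was already taken in issue_pending, so
	 * just make sure the resume has actually completed.
	 */
	pm_runtime_resume(dev);
	sketch_edma_hw_kick(sc);
}

static void sketch_edma_issue_pending(struct dma_chan *chan)
{
	struct sketch_edma_chan *sc =
		container_of(chan, struct sketch_edma_chan, chan);
	struct device *dev = chan->device->dev;

	/*
	 * Atomic-safe: pm_runtime_get() only bumps the refcount and queues
	 * an asynchronous resume. If the device is already active, touch
	 * the hardware right away, otherwise defer to process context.
	 */
	pm_runtime_get(dev);
	if (pm_runtime_active(dev))
		sketch_edma_hw_kick(sc);
	else
		schedule_work(&sc->issue_work);
}

The rpm reference taken in issue_pending would then be dropped once the
transfer actually completes, which is where the discussion below picks up.
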
> > > > > Yes, I can schedule_work to a worker to runtime resume the edma
> > > > > channel by calling scu-pd.
> > > > > But that means all dmaengine interfaces have to be taken care of,
> > > > > not only issue_pending() but also
> > > > > dmaengine_terminate_all()/dmaengine_pause()/dmaengine_resume()/
> > > > > dmaengine_tx_status(). Not sure why hidma only takes care of
> > > > > issue_pending. Maybe their use case is just memcpy/memset, so
> > > > > there is no further complicated case like ALSA or TTY.
> > > > > Besides, for autosuspend in cyclic mode, we have to add
> > > > > pm_runtime_get_sync into the interrupt handler as qcom/bam_dma.c
> > > > > does. But how could we resolve scu-pd's non-atomic limitation in
> > > > > the interrupt handler?
> > > >
> > > > Sure, this all needs some careful analysis on how those functions
> > > > are called and what to do about atomic callers, but it should be
> > > > doable. I don't see any fundamental issues here.
> > > >
> > > > I don't see why you would ever need to wake the hardware in an
> > > > interrupt handler. Surely the hardware is already awake, as it
> > > > wouldn't signal an interrupt otherwise. And for the issue with
> > > > scu-pd you only care about the state transition of
> > > > suspended->running. If the hardware is already running/awake, the
> > > > runtime pm state handling is nothing more than bumping a refcount,
> > > > which is atomic safe. Putting the HW in suspend is already handled
> > > > asynchronously in a worker, so this is also atomic safe.
> > > But with autosuspend used, in a corner case the channel may be runtime
> > > suspended before we fall into the edma interrupt handler, if the
> > > timeout of pm_runtime_set_autosuspend_delay() expires first. Thus we
> > > can't touch any edma interrupt status register unless we runtime
> > > resume the edma in the interrupt handler, while the runtime resume
> > > path based on scu-pd's power domain may block or sleep.
> > > I have a simple workaround: disable runtime suspend in the
> > > issue_pending worker by calling pm_runtime_forbid() and then re-enable
> > > runtime autosuspend in dmaengine_terminate_all, so that we can
> > > easily regard the edma channel as always runtime resumed between
> > > issue_pending and channel termination and ignore the above interrupt
> > > handler/scu-pd limitation.
> >
> > The IRQ handler is the point where you are informed by the hardware that a
> > specific operation is complete. I don't see any use-case where it would be valid
> > to drop the rpm refcount to 0 before the IRQ is handled. Surely the hardware
> > needs to stay awake until the currently queued operations are complete and if
> > the IRQ handler is the completion point the IRQ handler is the first point in
> > time where your autosuspend timer should start to run. There should never be
> > a situation where the timer expiry can get between IRQ signaling and the
> > handler code running.
> But the timer of runtime autosuspend decides when to enter runtime suspend,
> rather than the hardware, while the transfer data size and the transfer rate
> on the IP bus decide when the dma interrupt happens.
>
But it isn't the hardware that decides to drop the rpm refcount to 0
and start the autosuspend timer, it's the driver.
> Generally, we can call pm_runtime_get_sync(fsl_chan->dev)/
> pm_runtime_mark_last_busy in the interrupt handler, hoping the runtime
> autosuspend timer expires later than the interrupt arrives. But if the
> transfer data size is large in cyclic mode and the transfer rate is very
> slow, like 115200 baud or lower on uart, a fixed autosuspend timer of
> 100ms/200ms may not be enough. Hence runtime suspend may execute while the
> dma interrupt is triggered and caught by the GIC (but the interrupt handler
> is held off by spin_lock_irqsave in pm_suspend_timer_fn()), and then the
> interrupt handler starts to run after runtime suspend.
If your driver code drops the rpm refcount to 0 and starts the
autosuspend timer while a cyclic transfer is still in flight this is
clearly a bug. Autosuspend is not there to paper over driver bugs, but
to amortize cost of actually suspending and resuming the hardware. Your
driver code must still work even if the timeout is 0, i.e. the hardware
is immediately suspended after you drop the rpm refcount to 0.
If you still have transfers queued/in-flight the driver code must keep
an rpm reference.
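
Expressed as code, the rule above looks roughly like the following sketch
(the handler name and passing the struct device as dev_id are illustrative
assumptions): one rpm reference is held per issued transfer, and it is only
dropped, and the autosuspend timer only started, from the completion
interrupt.

#include <linux/interrupt.h>
#include <linux/pm_runtime.h>

static irqreturn_t edma_sketch_irq_handler(int irq, void *dev_id)
{
	struct device *dev = dev_id;	/* registered with the device as cookie */

	/*
	 * The device is necessarily awake here: the in-flight transfer still
	 * holds an rpm reference, so the interrupt status registers can be
	 * touched safely.
	 */

	/* ... ack the interrupt and complete the descriptor ... */

	/* transfer finished: only now may the autosuspend timer start */
	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);

	return IRQ_HANDLED;
}
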
Regards,
Lucas
On Wed, 21 Apr 2021 10:37:11 +0000
<Peter.Enderborg(a)sony.com> wrote:
> On 4/21/21 11:15 AM, Daniel Vetter wrote:
> > On Tue, Apr 20, 2021 at 11:37:41AM +0000, Peter.Enderborg(a)sony.com wrote:
> >> But I don't think they will. dma-buf does not have to be mapped to a process,
> >> and in the case of vram it is not covered in the current global_zone. All of them
> >> would be very nice to have in some form. But it won't change what the
> >> correct value of "Total" is.
> > We need to understand what the "correct" value is. Not in terms of kernel
> > code, but in terms of semantics. Like if userspace allocates a GL texture,
> > is this supposed to show up in your metric or not. Stuff like that.
> That is like wanting only one pointer type. You need to know what you are
> pointing at to know what it is. It might be a hardware pointer or some other pointer.
To clarify the GL texture example: a GL texture consumes "graphics
memory", whatever that is, but they are not allocated as dmabufs. So
they count for resource consumption, but they do not show up in your
counter, until they become exported. Most GL textures are never
exported at all. In fact, exporting GL textures is a path strongly
recommended against due to unsuitable EGL/GL API.
As far as I understand, dmabufs are never allocated as is. Dmabufs
always just wrap an existing memory allocation. So creating (exporting)
a dmabuf does not increase resource usage. Allocation increases
resource usage, and most allocations are never exported.
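
To make that concrete, here is a minimal sketch of an exporter (the sketch_*
names are made up, the ops are stubbed and error handling is omitted): the
buffer is allocated beforehand by driver-specific means, and dma_buf_export()
only wraps it. This is also the point where the proposed DmaBufTotal counter
is bumped.

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/fcntl.h>
#include <linux/module.h>

static struct sg_table *sketch_map_dma_buf(struct dma_buf_attachment *attach,
					   enum dma_data_direction dir)
{
	/* a real exporter builds and dma-maps an sg_table of its pages here */
	return ERR_PTR(-EINVAL);
}

static void sketch_unmap_dma_buf(struct dma_buf_attachment *attach,
				 struct sg_table *sgt,
				 enum dma_data_direction dir)
{
}

static void sketch_release(struct dma_buf *dmabuf)
{
	/* the underlying allocation is freed here, not at export time */
}

static const struct dma_buf_ops sketch_dmabuf_ops = {
	.map_dma_buf	= sketch_map_dma_buf,
	.unmap_dma_buf	= sketch_unmap_dma_buf,
	.release	= sketch_release,
};

struct dma_buf *sketch_export(void *already_allocated_buffer, size_t size)
{
	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

	exp_info.ops   = &sketch_dmabuf_ops;
	exp_info.size  = size;	/* size of the pre-existing allocation */
	exp_info.flags = O_RDWR;
	exp_info.priv  = already_allocated_buffer;

	/* no buffer memory is allocated here, the existing one is wrapped */
	return dma_buf_export(&exp_info);
}
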
> If there is a limitation on your pointers it is a good metric to count them
> even if you don't know what they are. Same goes for dma-buf, they
> are generic, but they consume some resources that are counted in pages.
Given above, I could even argue that *dmabufs* do not consume
resources. They only reference resources that were already allocated
by some specific means (not generic). They might keep the resource
allocated, preventing it from being freed if leaked.
As you might know, there is no really generic "dmabuf allocator", not
as a kernel UAPI nor as a userspace library (the hypothetical Unix
Device Memory Allocator library notwithstanding).
So this kind of leaves the question, what is DmaBufTotal good for? Is
it the same kind of counter as VIRT in 'top'? If you know your
particular programs, you can maybe infer if VIRT is too much or not,
but for e.g. WebKitWebProcess it is normal to have 85 GB in VIRT and
it's not a problem (like I have, on this 8 GB RAM machine).
Thanks,
pq
On Tue, Apr 20, 2021 at 11:37:41AM +0000, Peter.Enderborg(a)sony.com wrote:
> On 4/20/21 1:14 PM, Daniel Vetter wrote:
> > On Tue, Apr 20, 2021 at 09:26:00AM +0000, Peter.Enderborg(a)sony.com wrote:
> >> On 4/20/21 10:58 AM, Daniel Vetter wrote:
> >>> On Sat, Apr 17, 2021 at 06:38:35PM +0200, Peter Enderborg wrote:
> >>>> This adds a counter for total used dma-buf memory. Details
> >>>> can be found in debugfs, however it is not for everyone
> >>>> and not always available. dma-bufs are indirectly allocated by
> >>>> userspace, so with this value we can monitor and detect
> >>>> userspace applications that have problems.
> >>>>
> >>>> Signed-off-by: Peter Enderborg <peter.enderborg(a)sony.com>
> >>> So there have been tons of discussions around how to track dma-buf and
> >>> why, and I really need to understand the use-case here first I think. proc
> >>> uapi is as much forever as anything else, and depending what you're doing
> >>> this doesn't make any sense at all:
> >>>
> >>> - on most linux systems dma-bufs are only instantiated for shared buffers.
> >>> So there it gives you a fairly meaningless number and not anything
> >>> reflecting gpu memory usage at all.
> >>>
> >>> - on Android all buffers are allocated through dma-buf afaik. But there
> >>> we've recently had some discussions about how exactly we should track
> >>> all this, and the conclusion was that most of this should be solved by
> >>> cgroups long term. So if this is for Android, then I don't think adding
> >>> random quick stop-gaps to upstream is a good idea (because it's a pretty
> >>> long list of patches that have come up on this).
> >>>
> >>> So what is this for?
> >> For the overview. dma-buf today only has debugfs for info, and debugfs
> >> is not allowed by Google to be used in Android. So this aggregates the
> >> information so we can see what is going on in the system.
> >>
> >> And the standard LKML response to that is "SHOW ME THE CODE".
> > Yes. Except this extends to how exactly this is supposed to be used in
> > userspace and acted upon.
> >
> >> When the top memcg has aggregated information on dma-buf it is maybe
> >> a better source for meminfo. But that also implies that dma-buf requires memcg.
> >>
> >> And I don't see any problem with replacing this with something better once it is ready.
> > The thing is, this is uapi. Once it's merged we cannot, ever, replace it.
> > It must be kept around forever, or a very close approximation thereof. So
> > merging this with the justification that we can fix it or replace it later on
> > isn't going to happen.
>
> It is intended to be relevant as long as there is dma-buf. This is a proper
> metric. If a newer implementation does not get the same result, it is
> not doing it right and is not better. If a memcg counter or a global_zone
> counter does the same thing, then it can replace the suggested method.
We're not talking about a memcg controller, but about a dma-buf tracker.
Also my point was that you might not have a dma-buf on most linux systems
(outside of android really) for most gpu allocations. So we kinda need to
understand what you actually want to measure, not "I want to count all the
dma-buf in the system". Because that's a known-problematic metric in
general.
> But I don't think they will. dma-buf does not have to be mapped to a process,
> and in the case of vram it is not covered in the current global_zone. All of them
> would be very nice to have in some form. But it won't change what the
> correct value of "Total" is.
We need to understand what the "correct" value is. Not in terms of kernel
code, but in terms of semantics. Like if userspace allocates a GL texture,
is this supposed to show up in your metric or not. Stuff like that.
-Daniel
>
>
> > -Daniel
> >
> >>> -Daniel
> >>>
> >>>> ---
> >>>> drivers/dma-buf/dma-buf.c | 12 ++++++++++++
> >>>> fs/proc/meminfo.c | 5 ++++-
> >>>> include/linux/dma-buf.h | 1 +
> >>>> 3 files changed, 17 insertions(+), 1 deletion(-)
> >>>>
> >>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> >>>> index f264b70c383e..4dc37cd4293b 100644
> >>>> --- a/drivers/dma-buf/dma-buf.c
> >>>> +++ b/drivers/dma-buf/dma-buf.c
> >>>> @@ -37,6 +37,7 @@ struct dma_buf_list {
> >>>> };
> >>>>
> >>>> static struct dma_buf_list db_list;
> >>>> +static atomic_long_t dma_buf_global_allocated;
> >>>>
> >>>> static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
> >>>> {
> >>>> @@ -79,6 +80,7 @@ static void dma_buf_release(struct dentry *dentry)
> >>>> if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
> >>>> dma_resv_fini(dmabuf->resv);
> >>>>
> >>>> + atomic_long_sub(dmabuf->size, &dma_buf_global_allocated);
> >>>> module_put(dmabuf->owner);
> >>>> kfree(dmabuf->name);
> >>>> kfree(dmabuf);
> >>>> @@ -586,6 +588,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
> >>>> mutex_lock(&db_list.lock);
> >>>> list_add(&dmabuf->list_node, &db_list.head);
> >>>> mutex_unlock(&db_list.lock);
> >>>> + atomic_long_add(dmabuf->size, &dma_buf_global_allocated);
> >>>>
> >>>> return dmabuf;
> >>>>
> >>>> @@ -1346,6 +1349,15 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
> >>>> }
> >>>> EXPORT_SYMBOL_GPL(dma_buf_vunmap);
> >>>>
> >>>> +/**
> >>>> + * dma_buf_allocated_pages - Return the used nr of pages
> >>>> + * allocated for dma-buf
> >>>> + */
> >>>> +long dma_buf_allocated_pages(void)
> >>>> +{
> >>>> + return atomic_long_read(&dma_buf_global_allocated) >> PAGE_SHIFT;
> >>>> +}
> >>>> +
> >>>> #ifdef CONFIG_DEBUG_FS
> >>>> static int dma_buf_debug_show(struct seq_file *s, void *unused)
> >>>> {
> >>>> diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> >>>> index 6fa761c9cc78..ccc7c40c8db7 100644
> >>>> --- a/fs/proc/meminfo.c
> >>>> +++ b/fs/proc/meminfo.c
> >>>> @@ -16,6 +16,7 @@
> >>>> #ifdef CONFIG_CMA
> >>>> #include <linux/cma.h>
> >>>> #endif
> >>>> +#include <linux/dma-buf.h>
> >>>> #include <asm/page.h>
> >>>> #include "internal.h"
> >>>>
> >>>> @@ -145,7 +146,9 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> >>>> show_val_kb(m, "CmaFree: ",
> >>>> global_zone_page_state(NR_FREE_CMA_PAGES));
> >>>> #endif
> >>>> -
> >>>> +#ifdef CONFIG_DMA_SHARED_BUFFER
> >>>> + show_val_kb(m, "DmaBufTotal: ", dma_buf_allocated_pages());
> >>>> +#endif
> >>>> hugetlb_report_meminfo(m);
> >>>>
> >>>> arch_report_meminfo(m);
> >>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> >>>> index efdc56b9d95f..5b05816bd2cd 100644
> >>>> --- a/include/linux/dma-buf.h
> >>>> +++ b/include/linux/dma-buf.h
> >>>> @@ -507,4 +507,5 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
> >>>> unsigned long);
> >>>> int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
> >>>> void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
> >>>> +long dma_buf_allocated_pages(void);
> >>>> #endif /* __DMA_BUF_H__ */
> >>>> --
> >>>> 2.17.1
> >>>>
>
> _______________________________________________
> dri-devel mailing list
> dri-devel(a)lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On 2021-04-16 at 16:37, Lee Jones wrote:
> Fixes the following W=1 kernel build warning(s):
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c:169: warning: Function parameter or member 'sched_score' not described in 'amdgpu_ring_init'
>
> Cc: Alex Deucher <alexander.deucher(a)amd.com>
> Cc: "Christian König" <christian.koenig(a)amd.com>
> Cc: David Airlie <airlied(a)linux.ie>
> Cc: Daniel Vetter <daniel(a)ffwll.ch>
> Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
> Cc: amd-gfx(a)lists.freedesktop.org
> Cc: dri-devel(a)lists.freedesktop.org
> Cc: linux-media(a)vger.kernel.org
> Cc: linaro-mm-sig(a)lists.linaro.org
> Signed-off-by: Lee Jones <lee.jones(a)linaro.org>
Reviewed-by: Christian König <christian.koenig(a)amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> index 688624ebe4211..7b634a1517f9c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> @@ -158,6 +158,7 @@ void amdgpu_ring_undo(struct amdgpu_ring *ring)
> * @irq_src: interrupt source to use for this ring
> * @irq_type: interrupt type to use for this ring
> * @hw_prio: ring priority (NORMAL/HIGH)
> + * @sched_score: optional score atomic shared with other schedulers
> *
> * Initialize the driver information for the selected ring (all asics).
> * Returns 0 on success, error on failure.
On 2021-04-16 at 16:37, Lee Jones wrote:
> Fixes the following W=1 kernel build warning(s):
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:444: warning: Function parameter or member 'sched_score' not described in 'amdgpu_fence_driver_init_ring'
>
> Cc: Alex Deucher <alexander.deucher(a)amd.com>
> Cc: "Christian König" <christian.koenig(a)amd.com>
> Cc: David Airlie <airlied(a)linux.ie>
> Cc: Daniel Vetter <daniel(a)ffwll.ch>
> Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
> Cc: Jerome Glisse <glisse(a)freedesktop.org>
> Cc: amd-gfx(a)lists.freedesktop.org
> Cc: dri-devel(a)lists.freedesktop.org
> Cc: linux-media(a)vger.kernel.org
> Cc: linaro-mm-sig(a)lists.linaro.org
> Signed-off-by: Lee Jones <lee.jones(a)linaro.org>
Reviewed-by: Christian König <christian.koenig(a)amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index 47ea468596184..30772608eac6c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -434,6 +434,7 @@ int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
> *
> * @ring: ring to init the fence driver on
> * @num_hw_submission: number of entries on the hardware queue
> + * @sched_score: optional score atomic shared with other schedulers
> *
> * Init the fence driver for the requested ring (all asics).
> * Helper function for amdgpu_fence_driver_init().
On Tue, 20 Apr 2021 at 14:46, <Peter.Enderborg(a)sony.com> wrote:
> On 4/20/21 3:34 PM, Daniel Stone wrote:
> > On Fri, 16 Apr 2021 at 13:34, Peter Enderborg <peter.enderborg(a)sony.com> wrote:
> > This adds a counter for total used dma-buf memory. Details
> > can be found in debugfs, however it is not for everyone
> > and not always available. dma-bufs are indirectly allocated by
> > userspace, so with this value we can monitor and detect
> > userspace applications that have problems.
> >
> >
> > FWIW, this won't work super well for Android where gralloc is
> > implemented as a system service, so all graphics usage will instantly be
> > accounted to it.
>
> This resource allocation is a big part of why we need it. Why should it
> not work?
>
Sorry, I'd somehow completely misread that as being locally rather than
globally accounted. Given that, it's more correct, just also not super
useful.
Some drivers export allocation tracepoints which you could use if you have
a decent userspace tracing infrastructure. Short of that, many drivers
export this kind of thing through debugfs already. I think a better
long-term direction is probably getting accounting from dma-heaps rather
than extending core dmabuf itself.
Cheers,
Daniel