tldr; DMA buffers aren't normal memory, and the expectation that you
can use them like normal memory (that get_user_pages works, or that
they're accounted like any other normal memory) cannot be guaranteed.
Since some userspace only runs on integrated devices, where all
buffers are actually resident system memory, there's a huge
temptation to assume that a struct page is always present and useable
like for any other pagecache backed mmap. This has the potential to
result in a uapi nightmare.
To close this gap, require that DMA buffer mmaps are VM_PFNMAP, which
blocks get_user_pages and all the other struct page based
infrastructure for everyone. In spirit this is the uapi counterpart to
the kernel-internal CONFIG_DMABUF_DEBUG.
Motivated by a recent patch which wanted to switch the system dma-buf
heap to vm_insert_page instead of vm_insert_pfn.
v2:
Jason brought up that we also want to guarantee that all ptes have the
pte_special flag set, to catch fast get_user_pages (on architectures
that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
From auditing the various functions that insert pfn pte entries
(vm_insert_pfn_prot, remap_pfn_range and all its callers like
dma_mmap_wc) it looks like VM_PFNMAP is already required anyway, so
this should be the correct flag to check for.
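As an aside, this is why exporters mostly get this right already:
mapping through remap_pfn_range() (which is what dma_mmap_wc() ends up
doing) sets VM_PFNMAP internally, so the WARN_ON added below never
fires for them. A minimal sketch of such an exporter mmap op, with
hypothetical names not taken from any real heap:

```c
/* Hypothetical exporter mmap op. remap_pfn_range() requires and sets
 * VM_PFNMAP (plus VM_IO etc.), so the resulting mapping is opaque to
 * get_user_pages() and the rest of the struct page based machinery.
 */
static int example_heap_mmap(struct dma_buf *dmabuf,
			     struct vm_area_struct *vma)
{
	struct example_heap_buffer *buf = dmabuf->priv;

	return remap_pfn_range(vma, vma->vm_start,
			       page_to_pfn(buf->pages[0]),
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}
```

By contrast an exporter using vm_insert_page() needs VM_MIXEDMAP, which
is exactly what the check below is meant to catch.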
References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-Wb…
Acked-by: Christian König <christian.koenig(a)amd.com>
Cc: Jason Gunthorpe <jgg(a)ziepe.ca>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: John Stultz <john.stultz(a)linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
--
Resending this so I can test the next two patches for vgem/shmem in
intel-gfx-ci. Last round failed somehow, but I can't repro that at all
locally here.
No immediate plans to merge this patch here since ttm isn't addressed
yet (and there we have the hugepte issue, for which I don't think we
have a clear consensus yet).
-Daniel
---
drivers/dma-buf/dma-buf.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index eadd1eaa2fb5..dda583fb1f03 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -127,6 +127,7 @@ static struct file_system_type dma_buf_fs_type = {
static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
{
struct dma_buf *dmabuf;
+ int ret;
if (!is_dma_buf_file(file))
return -EINVAL;
@@ -142,7 +143,11 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+ return ret;
}
static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@@ -1244,6 +1249,8 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
unsigned long pgoff)
{
+ int ret;
+
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
@@ -1264,7 +1271,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
vma_set_file(vma, dmabuf->file);
vma->vm_pgoff = pgoff;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON(!(vma->vm_flags & VM_PFNMAP));
+
+ return ret;
}
EXPORT_SYMBOL_GPL(dma_buf_mmap);
--
2.31.0
This set is part of a larger effort attempting to clean up W=1
kernel builds, which are currently overwhelmingly riddled with
niggly little warnings.
Lee Jones (34):
drm/amd/pm/inc/smu_v13_0: Move table into the only source file that
uses it
drm/amd/pm/swsmu/smu13/aldebaran_ppt: Remove unused variable 'ret'
drm/amd/pm/powerplay/hwmgr/smu7_thermal: Provide function name for
'smu7_fan_ctrl_set_default_mode()'
drm/amd/pm/powerplay/hwmgr/vega12_thermal: Provide function name
drm/amd/pm/powerplay/hwmgr/vega12_hwmgr: Provide
'vega12_init_smc_table()' function name
drm/amd/pm/powerplay/hwmgr/vega10_hwmgr: Kernel-doc headers must
contain function names
drm/amd/pm/powerplay/hwmgr/vega20_hwmgr: Provide function name
'vega20_init_smc_table()'
drm/amd/display/dc/bios/command_table_helper: Fix function name for
'dal_cmd_table_helper_transmitter_bp_to_atom()'
drm/amd/display/dc/bios/command_table_helper2: Fix function name
'dal_cmd_table_helper_transmitter_bp_to_atom2()'
drm/amd/display/dc/bios/bios_parser: Fix formatting and misnaming
issues
drm/nouveau/nvkm/subdev/mc/tu102: Make functions called by reference
static
drm/amd/display/amdgpu_dm/amdgpu_dm: Functions must directly follow
their headers
drm/amd/display/dc/dce/dmub_outbox: Convert over to kernel-doc
drm/amd/display/dc/gpio/gpio_service: Pass around correct
dce_{version,environment} types
drm/amd/display/dc/dce110/dce110_hw_sequencer: Include our own header
drm/amd/display/dc/dce/dce_transform: Remove superfluous
re-initialisation of DCFE_MEM_LIGHT_SLEEP_CNTL,
drm/amd/display/dc/dce/dce_mem_input: Remove duplicate initialisation
of GRPH_CONTROL__GRPH_NUM_BANKS_{SHIFT,MASK}
drm/amd/display/dc/dce/dce_mem_input: Remove duplicate initialisation
of GRPH_CONTROL__GRPH_NUM_BANKS_{SHIFT,MASK
drm/amd/amdgpu/amdgpu_device: Make local function static
drm/amd/display/amdgpu_dm/amdgpu_dm: Fix kernel-doc formatting issue
drm/amd/display/dc/dce110/dce110_hw_sequencer: Include header
containing our prototypes
drm/amd/display/dc/core/dc: Convert function headers to kernel-doc
drm/amd/display/dmub/src/dmub_srv_stat: Convert function header to
kernel-doc
drm/amd/display/modules/hdcp/hdcp_psp: Remove unused function
'mod_hdcp_hdcp1_get_link_encryption_status()'
drm/xlnx/zynqmp_disp: Fix incorrectly named enum
'zynqmp_disp_layer_id'
drm/xlnx/zynqmp_dp: Fix incorrectly name function 'zynqmp_dp_train()'
drm/ttm/ttm_tt: Demote non-conformant kernel-doc header
drm/panel/panel-raspberrypi-touchscreen: Demote kernel-doc abuse
drm/panel/panel-sitronix-st7701: Demote kernel-doc abuse
drm/vgem/vgem_drv: Standard comment blocks should not use kernel-doc
format
drm/exynos/exynos7_drm_decon: Fix incorrect naming of
'decon_shadow_protect_win()'
drm/exynos/exynos_drm_ipp: Fix documentation for
'exynos_drm_ipp_get_{caps,res}_ioctl()'
drm/vboxvideo/hgsmi_base: Place function names into headers
drm/vboxvideo/modesetting: Provide function names for prototype
headers
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 +-
.../gpu/drm/amd/display/dc/bios/bios_parser.c | 6 +--
.../display/dc/bios/command_table_helper.c | 2 +-
.../display/dc/bios/command_table_helper2.c | 2 +-
drivers/gpu/drm/amd/display/dc/core/dc.c | 46 +++++--------------
.../drm/amd/display/dc/dce/dce_mem_input.h | 2 -
.../drm/amd/display/dc/dce/dce_transform.h | 3 +-
.../gpu/drm/amd/display/dc/dce/dmub_outbox.c | 17 ++-----
.../display/dc/dce110/dce110_hw_sequencer.c | 3 ++
.../drm/amd/display/dc/gpio/gpio_service.c | 12 ++---
.../drm/amd/display/dmub/src/dmub_srv_stat.c | 19 +++-----
.../display/include/gpio_service_interface.h | 4 +-
.../drm/amd/display/modules/hdcp/hdcp_psp.c | 13 ------
drivers/gpu/drm/amd/pm/inc/smu_v13_0.h | 6 ---
.../drm/amd/pm/powerplay/hwmgr/smu7_thermal.c | 8 ++--
.../drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c | 26 ++++++-----
.../drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c | 2 +-
.../amd/pm/powerplay/hwmgr/vega12_thermal.c | 3 +-
.../drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c | 2 +-
.../drm/amd/pm/swsmu/smu13/aldebaran_ppt.c | 9 +++-
drivers/gpu/drm/exynos/exynos7_drm_decon.c | 2 +-
drivers/gpu/drm/exynos/exynos_drm_ipp.c | 4 +-
.../gpu/drm/nouveau/nvkm/subdev/mc/tu102.c | 6 +--
.../drm/panel/panel-raspberrypi-touchscreen.c | 2 +-
drivers/gpu/drm/panel/panel-sitronix-st7701.c | 2 +-
drivers/gpu/drm/ttm/ttm_tt.c | 2 +-
drivers/gpu/drm/vboxvideo/hgsmi_base.c | 19 +++++---
drivers/gpu/drm/vboxvideo/modesetting.c | 20 ++++----
drivers/gpu/drm/vgem/vgem_drv.c | 2 +-
drivers/gpu/drm/xlnx/zynqmp_disp.c | 2 +-
drivers/gpu/drm/xlnx/zynqmp_dp.c | 2 +-
32 files changed, 107 insertions(+), 147 deletions(-)
Cc: Adam Jackson <ajax(a)redhat.com>
Cc: Ajay Kumar <ajaykumar.rs(a)samsung.com>
Cc: Akshu Agarwal <akshua(a)gmail.com>
Cc: Alex Deucher <alexander.deucher(a)amd.com>
Cc: Alistair Popple <apopple(a)nvidia.com>
Cc: amd-gfx(a)lists.freedesktop.org
Cc: Ben Skeggs <bskeggs(a)redhat.com>
Cc: Ben Widawsky <ben(a)bwidawsk.net>
Cc: Christian Koenig <christian.koenig(a)amd.com>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: Daniel Vetter <daniel(a)ffwll.ch>
Cc: David Airlie <airlied(a)linux.ie>
Cc: dri-devel(a)lists.freedesktop.org
Cc: Eric Anholt <eric(a)anholt.net>
Cc: Evan Quan <evan.quan(a)amd.com>
Cc: Hans de Goede <hdegoede(a)redhat.com>
Cc: Harry Wentland <harry.wentland(a)amd.com>
Cc: Huang Rui <ray.huang(a)amd.com>
Cc: Hyun Kwon <hyun.kwon(a)xilinx.com>
Cc: Inki Dae <inki.dae(a)samsung.com>
Cc: Jagan Teki <jagan(a)amarulasolutions.com>
Cc: Joonyoung Shim <jy0922.shim(a)samsung.com>
Cc: Jun Lei <Jun.Lei(a)amd.com>
Cc: Kevin Wang <kevin1.wang(a)amd.com>
Cc: Krzysztof Kozlowski <krzysztof.kozlowski(a)canonical.com>
Cc: Kyungmin Park <kyungmin.park(a)samsung.com>
Cc: Laurent Pinchart <laurent.pinchart(a)ideasonboard.com>
Cc: Lee Jones <lee.jones(a)linaro.org>
Cc: Leo Li <sunpeng.li(a)amd.com>
Cc: linaro-mm-sig(a)lists.linaro.org
Cc: linux-arm-kernel(a)lists.infradead.org
Cc: linux-media(a)vger.kernel.org
Cc: linux-samsung-soc(a)vger.kernel.org
Cc: Marek Szyprowski <m.szyprowski(a)samsung.com>
Cc: Mauro Rossi <issor.oruam(a)gmail.com>
Cc: Meenakshikumar Somasundaram <meenakshikumar.somasundaram(a)amd.com>
Cc: Michal Simek <michal.simek(a)xilinx.com>
Cc: nouveau(a)lists.freedesktop.org
Cc: Philipp Zabel <p.zabel(a)pengutronix.de>
Cc: Rodrigo Siqueira <Rodrigo.Siqueira(a)amd.com>
Cc: Sam Ravnborg <sam(a)ravnborg.org>
Cc: Seung-Woo Kim <sw0312.kim(a)samsung.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: Thierry Reding <thierry.reding(a)gmail.com>
--
2.31.1
Docs for struct dma_resv are fairly clear:
"A reservation object can have attached one exclusive fence (normally
associated with write operations) or N shared fences (read
operations)."
https://dri.freedesktop.org/docs/drm/driver-api/dma-buf.html#reservation-ob…
Furthermore, a review across all of upstream.
First, the render drivers and how they set implicit fences:
- nouveau follows this contract, see in validate_fini_no_ticket()
nouveau_bo_fence(nvbo, fence, !!b->write_domains);
and that last boolean controls whether the exclusive or shared fence
slot is used.
- radeon follows this contract by setting
p->relocs[i].tv.num_shared = !r->write_domain;
in radeon_cs_parser_relocs(), which ensures that the call to
ttm_eu_fence_buffer_objects() in radeon_cs_parser_fini() will do the
right thing.
- vmwgfx seems to follow this contract with the shotgun approach of
always setting ttm_val_buf->num_shared = 0, which means
ttm_eu_fence_buffer_objects() will only use the exclusive slot.
- etnaviv follows this contract, as can be trivially seen by looking
at submit_attach_object_fences()
- i915 is a bit of a convoluted maze with multiple paths leading to
i915_vma_move_to_active(), which sets the exclusive flag if
EXEC_OBJECT_WRITE is set. This can either come as a buffer flag for
softpin mode, or through the write_domain when using relocations. It
follows this contract.
- lima follows this contract, see lima_gem_submit() which sets the
exclusive fence when the LIMA_SUBMIT_BO_WRITE flag is set for that
bo
- msm follows this contract, see msm_gpu_submit() which sets the
exclusive flag when the MSM_SUBMIT_BO_WRITE is set for that buffer
- panfrost follows this contract with the shotgun approach of just
always setting the exclusive fence, see
panfrost_attach_object_fences(). Benefits of a single engine I guess
- v3d follows this contract with the same shotgun approach in
v3d_attach_fences_and_unlock_reservation(), but it has at least an
XXX comment that maybe this should be improved
- vc4 uses the same shotgun approach of always setting an exclusive
fence, see vc4_update_bo_seqnos()
- vgem also follows this contract, see vgem_fence_attach_ioctl() and
the VGEM_FENCE_WRITE. This is used in some igts to validate prime
sharing with i915.ko without the need of a 2nd gpu
- virtio follows this contract, again with the shotgun approach of
always setting an exclusive fence, see virtio_gpu_array_add_fence()
This covers the setting of the exclusive fences when writing.
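The common pattern across the drivers above can be sketched as
follows; the helper name is hypothetical, but the dma_resv calls are
the real API at the time of this series:

```c
/* Sketch of the contract as implemented by the render drivers above:
 * writes go into the exclusive slot, reads into a shared slot.
 */
static void example_attach_fence(struct drm_gem_object *obj,
				 struct dma_fence *fence, bool write)
{
	struct dma_resv *resv = obj->resv;

	if (write)
		dma_resv_add_excl_fence(resv, fence);
	else
		/* a shared slot must have been reserved earlier with
		 * dma_resv_reserve_shared() */
		dma_resv_add_shared_fence(resv, fence);
}
```

The "shotgun" drivers simply hardcode write = true.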
Synchronizing against the exclusive fence is a lot more tricky, and I
only spot checked a few:
- i915 does it, with the optional EXEC_OBJECT_ASYNC to skip all
implicit dependencies (which is used by vulkan)
- etnaviv does this. Implicit dependencies are collected in
submit_fence_sync(), again with an opt-out flag
ETNA_SUBMIT_NO_IMPLICIT. These are then picked up in
etnaviv_sched_dependency which is the
drm_sched_backend_ops->dependency callback.
- vc4 seems to not do much here, maybe gets away with it by not having
a scheduler and only a single engine. Since all newer broadcom chips than
the OG vc4 use v3d for rendering, which follows this contract, the
impact of this issue is fairly small.
- v3d does this using the drm_gem_fence_array_add_implicit() helper,
which its drm_sched_backend_ops->dependency callback
v3d_job_dependency() then picks up.
- panfrost is nice here and tracks the implicit fences in
panfrost_job->implicit_fences, which again the
drm_sched_backend_ops->dependency callback panfrost_job_dependency()
picks up. It is mildly questionable though since it only picks up
exclusive fences in panfrost_acquire_object_fences(), but not buggy
in practice because it also always sets the exclusive fence. It
should pick up both sets of fences, just in case there's ever going
to be a 2nd gpu in a SoC with a mali gpu. Or maybe a mali SoC with a
pcie port and a real gpu, which might actually happen eventually. A
bug, but easy to fix. Should probably use the
drm_gem_fence_array_add_implicit() helper.
- lima is nice and easy, uses drm_gem_fence_array_add_implicit() and
the same schema as v3d.
- msm is mildly entertaining. It also supports MSM_SUBMIT_NO_IMPLICIT,
but because it doesn't use the drm/scheduler it handles fences from
the wrong context with a synchronous dma_fence_wait. See
submit_fence_sync() leading to msm_gem_sync_object(). Investing into
a scheduler might be a good idea.
- all the remaining drivers are ttm based, where I hope they do
appropriately obey implicit fences already. I didn't do the full
audit there because a) not following the contract would confuse ttm
quite thoroughly and b) reading non-standard scheduler and submit code
which isn't based on drm/scheduler is a pain.
Onwards to the display side.
- Any driver using the drm_gem_plane_helper_prepare_fb() helper will
do this correctly. Overwhelmingly most drivers get this right, except
a few that totally don't. I'll follow up with a patch to make this the
default and avoid a bunch of bugs.
- I didn't audit the ttm drivers, but given that dma_resv started
there I hope they get this right.
In conclusion this IS the contract, both as documented and
overwhelmingly implemented, specifically as implemented by all render
drivers except amdgpu.
Amdgpu tried to fix this already in
commit 049aca4363d8af87cab8d53de5401602db3b9999
Author: Christian König <christian.koenig(a)amd.com>
Date: Wed Sep 19 16:54:35 2018 +0200
drm/amdgpu: fix using shared fence for exported BOs v2
but this fix falls short on a number of areas:
- It's racy, by the time the buffer is shared it might be too late. To
make sure there's definitely never a problem we need to set the
fences correctly for any buffer that's potentially exportable.
- It's breaking uapi: dma-buf fds support poll() and differentiate
between shared and exclusive fences, which was introduced in
commit 9b495a5887994a6d74d5c261d012083a92b94738
Author: Maarten Lankhorst <maarten.lankhorst(a)canonical.com>
Date: Tue Jul 1 12:57:43 2014 +0200
dma-buf: add poll support, v3
- Christian König wants to nack new uapi building further on this
dma_resv contract because it breaks amdgpu, quoting
"Yeah, and that is exactly the reason why I will NAK this uAPI change.
"This doesn't works for amdgpu at all for the reasons outlined above."
https://lore.kernel.org/dri-devel/f2eb6751-2f82-9b23-f57e-548de5b729de@gmai…
Rejecting new development because your own driver is broken and
violates established cross driver contracts and uapi is really not
how upstream works.
Now this patch will have a severe performance impact on anything that
runs on multiple engines. So we can't just merge it outright, but need
a bit of a plan:
- amdgpu needs a proper uapi for handling implicit fencing. The funny
thing is that to do it correctly, implicit fencing must be treated
as a very strange IPC mechanism for transporting fences, where both
setting the fence and dependency intercepts must be handled
explicitly. Current best practice is a per-bo flag to indicate
writes, and a per-bo flag to skip implicit fencing in the CS
ioctl as a new chunk.
- Since amdgpu has been shipping with broken behaviour we need an
opt-out flag from the butchered implicit fencing model to enable the
proper explicit implicit fencing model.
- for kernel memory fences due to bo moves at least the i915 idea is
to use ttm_bo->moving. amdgpu probably needs the same.
- since the current p2p dma-buf interface assumes the kernel memory
fence is in the exclusive dma_resv fence slot we need to add a new
fence slot for kernel fences, which must never be ignored. Since
currently only amdgpu supports this there's no real problem here
yet, until amdgpu gains a NO_IMPLICIT CS flag.
- New userspace needs to ship in enough desktop distros so that users
won't notice the perf impact. I think we can ignore LTS distros who
upgrade their kernels but not their mesa3d snapshot.
- Then when this is all in place we can merge this patch here.
What is not a solution to this problem here is trying to make the
dma_resv rules in the kernel more clever. The fundamental issue here
is that the amdgpu CS uapi is the least expressive one across all
drivers (only equalled by panfrost, which has an actual excuse) by not
allowing any userspace control over how implicit sync is conducted.
Until this is fixed it's completely pointless to make the kernel more
clever to improve amdgpu, because all we're doing is papering over
this uapi design issue. amdgpu needs to attain the status quo
established by other drivers first, once that's achieved we can tackle
the remaining issues in a consistent way across drivers.
Cc: mesa-dev(a)lists.freedesktop.org
Cc: Bas Nieuwenhuizen <bas(a)basnieuwenhuizen.nl>
Cc: Dave Airlie <airlied(a)gmail.com>
Cc: Rob Clark <robdclark(a)chromium.org>
Cc: Kristian H. Kristensen <hoegsberg(a)google.com>
Cc: Michel Dänzer <michel(a)daenzer.net>
Cc: Daniel Stone <daniels(a)collabora.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: Alex Deucher <alexander.deucher(a)amd.com>
Cc: Daniel Vetter <daniel.vetter(a)ffwll.ch>
Cc: Deepak R Varma <mh12gx2825(a)gmail.com>
Cc: Chen Li <chenli(a)uniontech.com>
Cc: Kevin Wang <kevin1.wang(a)amd.com>
Cc: Dennis Li <Dennis.Li(a)amd.com>
Cc: Luben Tuikov <luben.tuikov(a)amd.com>
Cc: linaro-mm-sig(a)lists.linaro.org
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 88a24a0b5691..cc8426e1e8a8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -617,8 +617,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
amdgpu_bo_list_for_each_entry(e, p->bo_list) {
struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
- /* Make sure we use the exclusive slot for shared BOs */
- if (bo->prime_shared_count)
+ /* Make sure we use the exclusive slot for all potentially shared BOs */
+ if (!(bo->flags & AMDGPU_GEM_CREATE_VM_ALWAYS_VALID))
e->tv.num_shared = 0;
e->bo_va = amdgpu_vm_bo_find(vm, bo);
}
--
2.31.0
In the function amdgpu_uvd_cs_msg(), every branch in the switch
statement will have a return, so the code below the switch statement
will not be executed.
Eliminate the following smatch warning:
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c:845 amdgpu_uvd_cs_msg() warn:
ignoring unreachable code.
Reported-by: Abaci Robot <abaci(a)linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong(a)linux.alibaba.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index 82f0542..375b346 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -842,8 +842,6 @@ static int amdgpu_uvd_cs_msg(struct amdgpu_uvd_cs_ctx *ctx,
DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type);
return -EINVAL;
}
- BUG();
- return -EINVAL;
}
/**
--
1.8.3.1
In the function amdgpu_uvd_cs_msg(), every branch in the switch
statement will have a return, so the code below the switch statement
will not be executed.
Eliminate the following smatch warning:
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c:845 amdgpu_uvd_cs_msg() warn:
ignoring unreachable code.
Reported-by: Abaci Robot <abaci(a)linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong(a)linux.alibaba.com>
---
Changes in v2:
- Followed the advice at: https://lore.kernel.org/patchwork/patch/1435074/
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index 82f0542..b32ed85 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -840,7 +840,6 @@ static int amdgpu_uvd_cs_msg(struct amdgpu_uvd_cs_ctx *ctx,
default:
DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type);
- return -EINVAL;
}
BUG();
return -EINVAL;
--
1.8.3.1
Fix to return a negative error code from the error handling
case instead of 0, as done elsewhere in this function.
Fixes: dc8276b78917 ("staging: media: tegra-vde: use pm_runtime_resume_and_get()")
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1(a)huawei.com>
---
drivers/staging/media/tegra-vde/vde.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/media/tegra-vde/vde.c b/drivers/staging/media/tegra-vde/vde.c
index e025b69776f2..321d14ba2e56 100644
--- a/drivers/staging/media/tegra-vde/vde.c
+++ b/drivers/staging/media/tegra-vde/vde.c
@@ -1071,7 +1071,8 @@ static int tegra_vde_probe(struct platform_device *pdev)
* power-cycle it in order to put hardware into a predictable lower
* power state.
*/
- if (pm_runtime_resume_and_get(dev) < 0)
+ err = pm_runtime_resume_and_get(dev);
+ if (err < 0)
goto err_pm_runtime;
pm_runtime_put(dev);
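The bug class here is generic: testing a return value inline discards
the error code that the caller is supposed to propagate through the
error path. A minimal userspace illustration of the before/after shape
(with a hypothetical fake_resume() stand-in, not the tegra-vde code):

```c
/* Stand-in for pm_runtime_resume_and_get(): fails with a negative
 * errno-style code (-19 mimics -ENODEV).
 */
static int fake_resume(void)
{
	return -19;
}

/* Buggy shape: the error is tested but never captured, so the error
 * path returns the still-zero err and the failure is swallowed.
 */
static int probe_buggy(void)
{
	int err = 0;

	if (fake_resume() < 0)
		goto err_out;
	return 0;
err_out:
	return err; /* returns 0 even though fake_resume() failed */
}

/* Fixed shape, as in the patch: capture the code, then branch on it. */
static int probe_fixed(void)
{
	int err;

	err = fake_resume();
	if (err < 0)
		goto err_out;
	return 0;
err_out:
	return err; /* propagates -19 */
}
```

This is why the patch assigns `err = pm_runtime_resume_and_get(dev);`
before the comparison instead of comparing the call directly.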
From: Rob Clark <robdclark(a)chromium.org>
In some cases, like double-buffered rendering, missing vblanks can
trick the GPU into running at a lower frequency, when really we
want to be running at a higher frequency to not miss the vblanks
in the first place.
This is partially inspired by a trick i915 does, but implemented
via dma-fence for a couple of reasons:
1) To continue to be able to use the atomic helpers
2) To support cases where display and gpu are different drivers
The last patch is just proof of concept, in reality I think it
may want to be a bit more clever. But sending this out as it
is as an RFC to get feedback.
Rob Clark (3):
dma-fence: Add boost fence op
drm/atomic: Call dma_fence_boost() when we've missed a vblank
drm/msm: Wire up gpu boost
drivers/gpu/drm/drm_atomic_helper.c | 11 +++++++++++
drivers/gpu/drm/msm/msm_fence.c | 10 ++++++++++
drivers/gpu/drm/msm/msm_gpu.c | 13 +++++++++++++
drivers/gpu/drm/msm/msm_gpu.h | 2 ++
include/linux/dma-fence.h | 26 ++++++++++++++++++++++++++
5 files changed, 62 insertions(+)
--
2.30.2
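Judging purely from the patch titles in this RFC, the new hook
presumably has a shape roughly like the following. This is a guess at
the interface for illustration only, not the actual patch:

```c
/* Hypothetical reconstruction of the proposed boost op: the display
 * side calls dma_fence_boost() on a fence that missed a vblank, and
 * the fence's owning GPU driver can react by bumping clocks.
 */
struct dma_fence_ops {
	/* ...existing ops... */
	void (*boost)(struct dma_fence *fence);
};

static inline void dma_fence_boost(struct dma_fence *fence)
{
	if (fence->ops->boost)
		fence->ops->boost(fence);
}
```

Routing this through dma-fence rather than a driver-private callback is
what makes points 1) and 2) above work: the atomic helpers already hold
the fence, and the fence may belong to a different driver than the
display.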
On 2021-04-27 7:27, Fabio M. De Francesco wrote:
> In the documentation of functions, removed excess parameters, described
> undocumented ones, and fixed syntax errors.
>
> Signed-off-by: Fabio M. De Francesco <fmdefrancesco(a)gmail.com>
> ---
>
> Changes from v1: Cc'ed all the maintainers.
Looks like Alex already applied V1. So this one doesn't apply. "git am
-3" tells me:
Applying: drm/amd/amdgpu: Fix errors in documentation of function parameters
Using index info to reconstruct a base tree...
M drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
M drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
M drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
Falling back to patching base and 3-way merge...
No changes -- Patch already applied.
Regards,
Felix
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c | 12 ++++++------
> drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c | 8 ++++----
> 3 files changed, 13 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
> index 2e9b16fb3fcd..bf2939b6eb43 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
> @@ -76,7 +76,7 @@ struct amdgpu_atif {
> /**
> * amdgpu_atif_call - call an ATIF method
> *
> - * @handle: acpi handle
> + * @atif: acpi handle
> * @function: the ATIF function to execute
> * @params: ATIF function params
> *
> @@ -166,7 +166,6 @@ static void amdgpu_atif_parse_functions(struct amdgpu_atif_functions *f, u32 mas
> /**
> * amdgpu_atif_verify_interface - verify ATIF
> *
> - * @handle: acpi handle
> * @atif: amdgpu atif struct
> *
> * Execute the ATIF_FUNCTION_VERIFY_INTERFACE ATIF function
> @@ -240,8 +239,7 @@ static acpi_handle amdgpu_atif_probe_handle(acpi_handle dhandle)
> /**
> * amdgpu_atif_get_notification_params - determine notify configuration
> *
> - * @handle: acpi handle
> - * @n: atif notification configuration struct
> + * @atif: acpi handle
> *
> * Execute the ATIF_FUNCTION_GET_SYSTEM_PARAMETERS ATIF function
> * to determine if a notifier is used and if so which one
> @@ -304,7 +302,7 @@ static int amdgpu_atif_get_notification_params(struct amdgpu_atif *atif)
> /**
> * amdgpu_atif_query_backlight_caps - get min and max backlight input signal
> *
> - * @handle: acpi handle
> + * @atif: acpi handle
> *
> * Execute the QUERY_BRIGHTNESS_TRANSFER_CHARACTERISTICS ATIF function
> * to determine the acceptable range of backlight values
> @@ -363,7 +361,7 @@ static int amdgpu_atif_query_backlight_caps(struct amdgpu_atif *atif)
> /**
> * amdgpu_atif_get_sbios_requests - get requested sbios event
> *
> - * @handle: acpi handle
> + * @atif: acpi handle
> * @req: atif sbios request struct
> *
> * Execute the ATIF_FUNCTION_GET_SYSTEM_BIOS_REQUESTS ATIF function
> @@ -899,6 +897,8 @@ void amdgpu_acpi_fini(struct amdgpu_device *adev)
> /**
> * amdgpu_acpi_is_s0ix_supported
> *
> + * @adev: amdgpu_device_pointer
> + *
> * returns true if supported, false if not.
> */
> bool amdgpu_acpi_is_s0ix_supported(struct amdgpu_device *adev)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
> index 5af464933976..98d31ebad9ce 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
> @@ -111,6 +111,8 @@ static const char *amdkfd_fence_get_timeline_name(struct dma_fence *f)
> * a KFD BO and schedules a job to move the BO.
> * If fence is already signaled return true.
> * If fence is not signaled schedule a evict KFD process work item.
> + *
> + * @f: dma_fence
> */
> static bool amdkfd_fence_enable_signaling(struct dma_fence *f)
> {
> @@ -131,7 +133,7 @@ static bool amdkfd_fence_enable_signaling(struct dma_fence *f)
> /**
> * amdkfd_fence_release - callback that fence can be freed
> *
> - * @fence: fence
> + * @f: dma_fence
> *
> * This function is called when the reference count becomes zero.
> * Drops the mm_struct reference and RCU schedules freeing up the fence.
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
> index b43e68fc1378..ed3014fbb563 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
> @@ -719,7 +719,7 @@ static void unlock_spi_csq_mutexes(struct amdgpu_device *adev)
> }
>
> /**
> - * @get_wave_count: Read device registers to get number of waves in flight for
> + * get_wave_count: Read device registers to get number of waves in flight for
> * a particular queue. The method also returns the VMID associated with the
> * queue.
> *
> @@ -755,19 +755,19 @@ static void get_wave_count(struct amdgpu_device *adev, int queue_idx,
> }
>
> /**
> - * @kgd_gfx_v9_get_cu_occupancy: Reads relevant registers associated with each
> + * kgd_gfx_v9_get_cu_occupancy: Reads relevant registers associated with each
> * shader engine and aggregates the number of waves that are in flight for the
> * process whose pasid is provided as a parameter. The process could have ZERO
> * or more queues running and submitting waves to compute units.
> *
> * @kgd: Handle of device from which to get number of waves in flight
> * @pasid: Identifies the process for which this query call is invoked
> - * @wave_cnt: Output parameter updated with number of waves in flight that
> + * @pasid_wave_cnt: Output parameter updated with number of waves in flight that
> * belong to process with given pasid
> * @max_waves_per_cu: Output parameter updated with maximum number of waves
> * possible per Compute Unit
> *
> - * @note: It's possible that the device has too many queues (oversubscription)
> + * Note: It's possible that the device has too many queues (oversubscription)
> * in which case a VMID could be remapped to a different PASID. This could lead
> * to an iaccurate wave count. Following is a high-level sequence:
> * Time T1: vmid = getVmid(); vmid is associated with Pasid P1
Multi-line comments have been aligned so that each line starts with a *.
The closing */ has been shifted to a new line.
A single space has been replaced with a tab.
This is done to maintain code uniformity.
Signed-off-by: Shubhankar Kuranagatti <shubhankarvk(a)gmail.com>
---
drivers/i2c/i2c-core-smbus.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/drivers/i2c/i2c-core-smbus.c b/drivers/i2c/i2c-core-smbus.c
index d2d32c0fd8c3..205750518c21 100644
--- a/drivers/i2c/i2c-core-smbus.c
+++ b/drivers/i2c/i2c-core-smbus.c
@@ -66,10 +66,11 @@ static inline void i2c_smbus_add_pec(struct i2c_msg *msg)
}
/* Return <0 on CRC error
- If there was a write before this read (most cases) we need to take the
- partial CRC from the write part into account.
- Note that this function does modify the message (we need to decrease the
- message length to hide the CRC byte from the caller). */
+ * If there was a write before this read (most cases) we need to take the
+ * partial CRC from the write part into account.
+ * Note that this function does modify the message (we need to decrease the
+ * message length to hide the CRC byte from the caller).
+ */
static int i2c_smbus_check_pec(u8 cpec, struct i2c_msg *msg)
{
u8 rpec = msg->buf[--msg->len];
@@ -113,7 +114,7 @@ EXPORT_SYMBOL(i2c_smbus_read_byte);
s32 i2c_smbus_write_byte(const struct i2c_client *client, u8 value)
{
return i2c_smbus_xfer(client->adapter, client->addr, client->flags,
- I2C_SMBUS_WRITE, value, I2C_SMBUS_BYTE, NULL);
+ I2C_SMBUS_WRITE, value, I2C_SMBUS_BYTE, NULL);
}
EXPORT_SYMBOL(i2c_smbus_write_byte);
@@ -387,7 +388,8 @@ static s32 i2c_smbus_xfer_emulated(struct i2c_adapter *adapter, u16 addr,
if (read_write == I2C_SMBUS_READ) {
msg[1].flags |= I2C_M_RECV_LEN;
msg[1].len = 1; /* block length will be added by
- the underlying bus driver */
+ * the underlying bus driver
+ */
i2c_smbus_try_get_dmabuf(&msg[1], 0);
} else {
msg[0].len = data->block[0] + 2;
@@ -418,7 +420,8 @@ static s32 i2c_smbus_xfer_emulated(struct i2c_adapter *adapter, u16 addr,
msg[1].flags |= I2C_M_RECV_LEN;
msg[1].len = 1; /* block length will be added by
- the underlying bus driver */
+ * the underlying bus driver
+ */
i2c_smbus_try_get_dmabuf(&msg[1], 0);
break;
case I2C_SMBUS_I2C_BLOCK_DATA:
--
2.17.1
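The PEC check in i2c_smbus_check_pec relies on SMBus Packet Error Checking, which is a CRC-8 over the message bytes with polynomial x^8 + x^2 + x + 1 (0x07). A minimal userspace sketch of that computation (this standalone function is an illustration, not the kernel's implementation, though the kernel exposes an equivalent helper):

```c
#include <stdint.h>
#include <stddef.h>

/* CRC-8 with polynomial 0x07 (x^8 + x^2 + x + 1), MSB first, init 0,
 * no reflection, no final XOR -- the CRC variant SMBus uses for PEC. */
static uint8_t smbus_pec(uint8_t crc, const uint8_t *buf, size_t len)
{
	size_t i;
	int bit;

	for (i = 0; i < len; i++) {
		crc ^= buf[i];
		for (bit = 0; bit < 8; bit++)
			crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
					   : (uint8_t)(crc << 1);
	}
	return crc;
}
```

Because the running CRC is passed in and returned, a read that follows a write can carry the partial CRC of the write part into the read check, which is exactly the situation the comment in i2c_smbus_check_pec describes.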
On Wednesday, 2021-04-21 at 14:54 +0000, Robin Gong wrote:
> On 2021/04/20 22:01 Lucas Stach <l.stach(a)pengutronix.de> wrote:
> > On Tuesday, 2021-04-20 at 13:47 +0000, Robin Gong wrote:
> > > On 2021/04/19 17:46 Lucas Stach <l.stach(a)pengutronix.de> wrote:
> > > > On Monday, 2021-04-19 at 07:17 +0000, Robin Gong wrote:
> > > > > Hi Lucas,
> > > > >
> > > > > On 2021/04/14 Lucas Stach <l.stach(a)pengutronix.de> wrote:
> > > > > > Hi Robin,
> > > > > >
> > > > > > On Wednesday, 2021-04-14 at 14:33 +0000, Robin Gong wrote:
> > > > > > > On 2020/05/20 17:43 Lucas Stach <l.stach(a)pengutronix.de> wrote:
> > > > > > > > On Wednesday, 2020-05-20 at 16:20 +0800, Shengjiu Wang wrote:
> > > > > > > > > Hi
> > > > > > > > >
> > > > > > > > > On Tue, May 19, 2020 at 6:04 PM Lucas Stach <l.stach(a)pengutronix.de> wrote:
> > > > > > > > > > On Tuesday, 2020-05-19 at 17:41 +0800, Shengjiu Wang wrote:
> > > > > > > > > > > There are two requirements that we need to move the
> > > > > > > > > > > request of dma channel from probe to open.
> > > > > > > > > >
> > > > > > > > > > How do you handle -EPROBE_DEFER return code from the
> > > > > > > > > > channel request if you don't do it in probe?
> > > > > > > > >
> > > > > > > > > I use the dma_request_slave_channel or dma_request_channel
> > > > > > > > > instead of dmaengine_pcm_request_chan_of. so there should
> > > > > > > > > be not -EPROBE_DEFER return code.
> > > > > > > >
> > > > > > > > This is a pretty weak argument. The dmaengine device might
> > > > > > > > probe after you try to get the channel. Using a function to
> > > > > > > > request the channel that doesn't allow you to handle probe
> > > > > > > > deferral is IMHO a bug and should be fixed, instead of
> > > > > > > > building even more assumptions on top
> > > > > > of it.
> > > > > > > >
> > > > > > > > > > > - When dma device binds with power-domains, the power
> > > > > > > > > > > will be enabled when we request dma channel. If the
> > > > > > > > > > > request of dma channel happen on probe, then the
> > > > > > > > > > > power-domains will be always enabled after kernel boot
> > > > > > > > > > > up, which is not good for power saving, so we need
> > > > > > > > > > > to move the request of dma channel to .open();
> > > > > > > > > >
> > > > > > > > > > This is certainly something which could be fixed in the
> > > > > > > > > > dmaengine driver.
> > > > > > > > >
> > > > > > > > > Dma driver always call the pm_runtime_get_sync in
> > > > > > > > > device_alloc_chan_resources, the
> > > > > > > > > device_alloc_chan_resources is called when channel is
> > > > > > > > > requested. so power is enabled on channel
> > > > > > request.
> > > > > > > >
> > > > > > > > So why can't you fix the dmaengine driver to do that RPM
> > > > > > > > call at a later time when the channel is actually going to
> > > > > > > > be used? This will allow further power savings with other
> > > > > > > > slave devices than the audio PCM.
> > > > > > > Hi Lucas,
> > > > > > > Thanks for your suggestion. I have tried to implement
> > > > > > > runtime autosuspend in fsl-edma driver on i.mx8qm/qxp with
> > > > > > > delay time (2
> > > > > > > sec) for this feature as below (or you can refer to
> > > > > > > drivers/dma/qcom/hidma.c), and pm_runtime_get_sync/
> > > > > > > pm_runtime_put_autosuspend in all dmaengine driver interface
> > > > > > > like
> > > > > > > device_alloc_chan_resources/device_prep_slave_sg/device_prep_dma_cyclic/
> > > > > > > device_tx_status...
> > > > > > >
> > > > > > >
> > > > > > >	pm_runtime_use_autosuspend(fsl_chan->dev);
> > > > > > >	pm_runtime_set_autosuspend_delay(fsl_chan->dev, 2000);
> > > > > > >
> > > > > > > That could resolve this audio case since the autosuspend could
> > > > > > > suspend runtime after
> > > > > > > 2 seconds if there is no further dma transfer but only channel
> > > > > > request(device_alloc_chan_resources).
> > > > > > > But unfortunately, it cause another issue. As you know, on our
> > > > > > > i.mx8qm/qxp, power domain done by scfw
> > > > > > > (drivers/firmware/imx/scu-pd.c) over mailbox:
> > > > > > > imx_sc_pd_power()->imx_scu_call_rpc()->
> > > > > > > imx_scu_ipc_write()->mbox_send_message()
> > > > > > > which means it has to 'wait for completion'; meanwhile, some
> > > > > > > driver like tty will call dmaengine interfaces in non-atomic
> > > > > > > case as below,
> > > > > > >
> > > > > > > static int uart_write(struct tty_struct *tty, const unsigned
> > > > > > > char *buf, int count) {
> > > > > > > .......
> > > > > > > port = uart_port_lock(state, flags);
> > > > > > > ......
> > > > > > >	__uart_start(tty); // call start_tx()->dmaengine_prep_slave_sg...
> > > > > > > uart_port_unlock(port, flags);
> > > > > > > return ret;
> > > > > > > }
> > > > > > >
> > > > > > > Thus dma runtime resume may happen in that timing window and
> > > > > > > cause a kernel alarm.
> > > > > > > I'm not sure whether there are similar limitations in other
> > > > > > > driver subsystems. But for me, it looks like the only way to
> > > > > > > resolve the contradiction between tty and scu-pd (a hardware
> > > > > > > limitation on i.mx8qm/qxp) is to give up autosuspend and keep
> > > > > > > pm_runtime_get_sync only in device_alloc_chan_resources,
> > > > > > > because requesting a channel is a safe non-atomic phase.
> > > > > > > Do you have any idea? Thanks in advance.
> > > > > >
> > > > > > If you look closely at the driver you used as an example
> > > > > > (hidma.c) it looks like there is already something in there,
> > > > > > which looks very much like what you need
> > > > > > here:
> > > > > >
> > > > > > In hidma_issue_pending() the driver tries to get the device to
> > > > > > runtime resume.
> > > > > > If this doesn't work, maybe due to the power domain code not
> > > > > > being able to be called in atomic context, the actual work of
> > > > > > waking up the dma hardware and issuing the descriptor is shunted
> > > > > > to a tasklet.
> > > > > >
> > > > > > If I'm reading this right, this is exactly what you need here to
> > > > > > be able to call the dmaengine code from atomic context: try the
> > > > > > rpm get and issue immediately when possible, otherwise shunt the
> > > > > > work to a non-atomic context where you can deal with the
> > > > > > requirements of scu-pd.
> > > > > Yes, I can schedule_work to a worker to runtime resume the edma
> > > > > channel by calling scu-pd.
> > > > > But that means all dmaengine interfaces should be taken care of,
> > > > > not only issue_pending() but also
> > > > > dmaengine_terminate_all()/dmaengine_pause()/dmaengine_resume()/
> > > > > dmaengine_tx_status(). Not sure why hidma only takes care of
> > > > > issue_pending. Maybe their use case is just for memcpy/memset, so
> > > > > there is no further complicated case such as ALSA or TTY.
> > > > > Besides, for autosuspend in cyclic mode, we have to add
> > > > > pm_runtime_get_sync into the interrupt handler as qcom/bam_dma.c
> > > > > does. But how could we resolve scu-pd's non-atomic limitation in
> > the interrupt handler?
> > > >
> > > > Sure, this all needs some careful analysis on how those functions
> > > > are called and what to do about atomic callers, but it should be
> > > > doable. I don't see any fundamental issues here.
> > > >
> > > > I don't see why you would ever need to wake the hardware in an
> > > > interrupt handler. Surely the hardware is already awake, as it
> > > > wouldn't signal an interrupt otherwise. And for the issue with
> > > > scu-pd you only care about the state transition of
> > > > suspended->running. If the hardware is already running/awake, the
> > > > runtime pm state handling is nothing more than bumping a refcount,
> > > > which is atomic safe. Putting the HW in suspend is already handled
> > asynchronously in a worker, so this is also atomic safe.
> > > But with autosuspend used, in a corner case, we may runtime suspend
> > > before falling into the edma interrupt handler if a timeout happens with the
> > > delay value of pm_runtime_set_autosuspend_delay(). Thus, we can't touch
> > > any edma interrupt status register unless we runtime resume edma in the
> > > interrupt handler, while the runtime resume function based on scu-pd's power
> > domain may block or sleep.
> > > I have a simple workaround that disables runtime suspend in the
> > > issue_pending worker by calling pm_runtime_forbid() and then enables
> > > runtime autosuspend in dmaengine_terminate_all, so that we could
> > > easily regard the edma channel as always runtime resumed between
> > > issue_pending and channel termination and ignore the above interrupt
> > handler/scu-pd limitation.
> >
> > The IRQ handler is the point where you are informed by the hardware that a
> > specific operation is complete. I don't see any use-case where it would be valid
> > to drop the rpm refcount to 0 before the IRQ is handled. Surely the hardware
> > needs to stay awake until the currently queued operations are complete and if
> > the IRQ handler is the completion point the IRQ handler is the first point in
> > time where your autosuspend timer should start to run. There should never be
> > a situation where the timer expiry can get between IRQ signaling and the
> > handler code running.
> But the timer of runtime_auto_suspend decides when to enter runtime suspend rather
> than the hardware, while the transfer data size and transfer rate on the IP bus decide when the
> dma interrupt happens.
>
But it isn't the hardware that decides to drop the rpm refcount to 0
and start the autosuspend timer, it's the driver.
> Generally, we can call pm_runtime_get_sync(fsl_chan->dev)/
> pm_runtime_mark_last_busy in the interrupt handler, hoping the runtime_auto_suspend
> timer expires later than the interrupt arrives. But if the transfer data size is large in cyclic
> mode and the transfer rate is very slow, like 115200 or lower on uart, the fixed autosuspend
> timer of 100ms/200ms may not be enough. Hence, runtime suspend may execute while
> the dma interrupt is triggered and caught by the GIC (but the interrupt handler is prevented
> by spin_lock_irqsave in pm_suspend_timer_fn()), and then the interrupt handler starts
> to run after runtime suspend.
If your driver code drops the rpm refcount to 0 and starts the
autosuspend timer while a cyclic transfer is still in flight this is
clearly a bug. Autosuspend is not there to paper over driver bugs, but
to amortize cost of actually suspending and resuming the hardware. Your
driver code must still work even if the timeout is 0, i.e. the hardware
is immediately suspended after you drop the rpm refcount to 0.
If you still have transfers queued/in-flight the driver code must keep
a rpm reference.
Regards,
Lucas
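The discipline Lucas describes above can be modelled with a plain usage counter: take a runtime-PM reference when a transfer is issued and drop it only once the completion interrupt has been handled. This is a userspace sketch of the invariant, not kernel code; rpm_usage stands in for the device's runtime-PM usage count:

```c
/* Model of the runtime-PM usage counter a dmaengine driver should keep:
 * a "get" when a descriptor is issued, a "put" only from the completion
 * (IRQ) path. The hardware may suspend only when the count reaches
 * zero, i.e. no transfer is queued or in flight. */
static int rpm_usage;

static void issue_transfer(void)
{
	rpm_usage++;	/* stands in for pm_runtime_get() in issue_pending */
}

static void complete_transfer(void)
{
	rpm_usage--;	/* stands in for pm_runtime_put_autosuspend() in the IRQ handler */
}

static int may_suspend(void)
{
	return rpm_usage == 0;	/* only now may the autosuspend timer fire */
}
```

With this invariant the autosuspend delay is purely an optimization: even a delay of 0 cannot suspend the device between issuing a transfer and handling its completion interrupt, which is exactly the property Lucas asks for.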
On Wed, 21 Apr 2021 10:37:11 +0000
<Peter.Enderborg(a)sony.com> wrote:
> On 4/21/21 11:15 AM, Daniel Vetter wrote:
> > On Tue, Apr 20, 2021 at 11:37:41AM +0000, Peter.Enderborg(a)sony.com wrote:
> >> But I dont think they will. dma-buf does not have to be mapped to a process,
> >> and the case of vram, it is not covered in current global_zone. All of them
> >> would be very nice to have in some form. But it wont change what the
> >> correct value of what "Total" is.
> > We need to understand what the "correct" value is. Not in terms of kernel
> > code, but in terms of semantics. Like if userspace allocates a GL texture,
> > is this supposed to show up in your metric or not. Stuff like that.
> That is like saying you would like only one pointer type. You need to know what
> you are pointing at to know what it is; it might be a hardware pointer or another kind of pointer.
To clarify the GL texture example: a GL texture consumes "graphics
memory", whatever that is, but they are not allocated as dmabufs. So
they count for resource consumption, but they do not show up in your
counter, until they become exported. Most GL textures are never
exported at all. In fact, exporting GL textures is a path strongly
recommended against due to unsuitable EGL/GL API.
As far as I understand, dmabufs are never allocated as is. Dmabufs
always just wrap an existing memory allocation. So creating (exporting)
a dmabuf does not increase resource usage. Allocation increases
resource usage, and most allocations are never exported.
> If there is a limitation on your pointers it is a good metric to count them
> even if you don't know what they are. Same goes for dma-buf, they
> are generic, but they consume some resources that are counted in pages.
Given above, I could even argue that *dmabufs* do not consume
resources. They only reference resources that were already allocated
by some specific means (not generic). They might keep the resource
allocated, preventing it from being freed if leaked.
As you might know, there is no really generic "dmabuf allocator", not
as a kernel UAPI nor as a userspace library (the hypothetical Unix
Device Memory Allocator library notwithstanding).
So this kind of leaves the question, what is DmaBufTotal good for? Is
it the same kind of counter as VIRT in 'top'? If you know your
particular programs, you can maybe infer if VIRT is too much or not,
but for e.g. WebKitWebProcess it is normal to have 85 GB in VIRT and
it's not a problem (like I have, on this 8 GB RAM machine).
Thanks,
pq
On Tue, Apr 20, 2021 at 11:37:41AM +0000, Peter.Enderborg(a)sony.com wrote:
> On 4/20/21 1:14 PM, Daniel Vetter wrote:
> > On Tue, Apr 20, 2021 at 09:26:00AM +0000, Peter.Enderborg(a)sony.com wrote:
> >> On 4/20/21 10:58 AM, Daniel Vetter wrote:
> >>> On Sat, Apr 17, 2021 at 06:38:35PM +0200, Peter Enderborg wrote:
> >>>> This adds a total used dma-buf memory. Details
> >>>> can be found in debugfs, however it is not for everyone
> >>>> and not always available. dma-buf are indirect allocated by
> >>>> userspace. So with this value we can monitor and detect
> >>>> userspace applications that have problems.
> >>>>
> >>>> Signed-off-by: Peter Enderborg <peter.enderborg(a)sony.com>
> >>> So there have been tons of discussions around how to track dma-buf and
> >>> why, and I really need to understand the use-case here first, I think. proc
> >>> uapi is as much forever as anything else, and depending what you're doing
> >>> this doesn't make any sense at all:
> >>>
> >>> - on most linux systems dma-buf are only instantiated for shared buffer.
> >>> So there this gives you a fairly meaningless number and not anything
> >>> reflecting gpu memory usage at all.
> >>>
> >>> - on Android all buffers are allocated through dma-buf afaik. But there
> >>> we've recently had some discussions about how exactly we should track
> >>> all this, and the conclusion was that most of this should be solved by
> >>> cgroups long term. So if this is for Android, then I don't think adding
> >>> random quick stop-gaps to upstream is a good idea (because it's a pretty
> >>> long list of patches that have come up on this).
> >>>
> >>> So what is this for?
> >> For the overview. dma-buf today only has debugfs for info. Debugfs
> >> is not allowed by Google for use in Android. So this aggregates the information
> >> so we can get information on what is going on in the system.
> >>
> >> And the LKML standard respond to that is "SHOW ME THE CODE".
> > Yes. Except this extends to how exactly this is supposed to be used in
> > userspace and acted upon.
> >
> >> When the top memcg has aggregated information on dma-buf, it is maybe
> >> a better source for meminfo. But then it also implies that dma-buf requires memcg.
> >>
> >> And I don't see any problem with replacing this with something better when it is ready.
> > The thing is, this is uapi. Once it's merged we cannot, ever, replace it.
> > It must be kept around forever, or a very close approximation thereof. So
> > merging this with the justification that we can fix it later on or replace
> > isn't going to happen.
>
> It is intended to be relevant as long as there is a dma-buf. This is a proper
> metric. If a newer implementation does not get the same result, it is
> not doing it right and is not better. If a memcg counter or a global_zone
> counter does the same thing, it can replace the suggested method.
We're not talking about a memcg controller, but about a dma-buf tracker.
Also my point was that you might not have a dma-buf on most linux systems
(outside of android really) for most gpu allocations. So we kinda need to
understand what you actually want to measure, not "I want to count all the
dma-buf in the system". Because that's a known-problematic metric in
general.
> But I dont think they will. dma-buf does not have to be mapped to a process,
> and the case of vram, it is not covered in current global_zone. All of them
> would be very nice to have in some form. But it wont change what the
> correct value of what "Total" is.
We need to understand what the "correct" value is. Not in terms of kernel
code, but in terms of semantics. Like if userspace allocates a GL texture,
is this supposed to show up in your metric or not. Stuff like that.
-Daniel
>
>
> > -Daniel
> >
> >>> -Daniel
> >>>
> >>>> ---
> >>>> drivers/dma-buf/dma-buf.c | 12 ++++++++++++
> >>>> fs/proc/meminfo.c | 5 ++++-
> >>>> include/linux/dma-buf.h | 1 +
> >>>> 3 files changed, 17 insertions(+), 1 deletion(-)
> >>>>
> >>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> >>>> index f264b70c383e..4dc37cd4293b 100644
> >>>> --- a/drivers/dma-buf/dma-buf.c
> >>>> +++ b/drivers/dma-buf/dma-buf.c
> >>>> @@ -37,6 +37,7 @@ struct dma_buf_list {
> >>>> };
> >>>>
> >>>> static struct dma_buf_list db_list;
> >>>> +static atomic_long_t dma_buf_global_allocated;
> >>>>
> >>>> static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
> >>>> {
> >>>> @@ -79,6 +80,7 @@ static void dma_buf_release(struct dentry *dentry)
> >>>> if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
> >>>> dma_resv_fini(dmabuf->resv);
> >>>>
> >>>> + atomic_long_sub(dmabuf->size, &dma_buf_global_allocated);
> >>>> module_put(dmabuf->owner);
> >>>> kfree(dmabuf->name);
> >>>> kfree(dmabuf);
> >>>> @@ -586,6 +588,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
> >>>> mutex_lock(&db_list.lock);
> >>>> list_add(&dmabuf->list_node, &db_list.head);
> >>>> mutex_unlock(&db_list.lock);
> >>>> + atomic_long_add(dmabuf->size, &dma_buf_global_allocated);
> >>>>
> >>>> return dmabuf;
> >>>>
> >>>> @@ -1346,6 +1349,15 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
> >>>> }
> >>>> EXPORT_SYMBOL_GPL(dma_buf_vunmap);
> >>>>
> >>>> +/**
> >>>> + * dma_buf_allocated_pages - Return the used nr of pages
> >>>> + * allocated for dma-buf
> >>>> + */
> >>>> +long dma_buf_allocated_pages(void)
> >>>> +{
> >>>> + return atomic_long_read(&dma_buf_global_allocated) >> PAGE_SHIFT;
> >>>> +}
> >>>> +
> >>>> #ifdef CONFIG_DEBUG_FS
> >>>> static int dma_buf_debug_show(struct seq_file *s, void *unused)
> >>>> {
> >>>> diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> >>>> index 6fa761c9cc78..ccc7c40c8db7 100644
> >>>> --- a/fs/proc/meminfo.c
> >>>> +++ b/fs/proc/meminfo.c
> >>>> @@ -16,6 +16,7 @@
> >>>> #ifdef CONFIG_CMA
> >>>> #include <linux/cma.h>
> >>>> #endif
> >>>> +#include <linux/dma-buf.h>
> >>>> #include <asm/page.h>
> >>>> #include "internal.h"
> >>>>
> >>>> @@ -145,7 +146,9 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> >>>> show_val_kb(m, "CmaFree: ",
> >>>> global_zone_page_state(NR_FREE_CMA_PAGES));
> >>>> #endif
> >>>> -
> >>>> +#ifdef CONFIG_DMA_SHARED_BUFFER
> >>>> + show_val_kb(m, "DmaBufTotal: ", dma_buf_allocated_pages());
> >>>> +#endif
> >>>> hugetlb_report_meminfo(m);
> >>>>
> >>>> arch_report_meminfo(m);
> >>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> >>>> index efdc56b9d95f..5b05816bd2cd 100644
> >>>> --- a/include/linux/dma-buf.h
> >>>> +++ b/include/linux/dma-buf.h
> >>>> @@ -507,4 +507,5 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
> >>>> unsigned long);
> >>>> int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
> >>>> void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
> >>>> +long dma_buf_allocated_pages(void);
> >>>> #endif /* __DMA_BUF_H__ */
> >>>> --
> >>>> 2.17.1
> >>>>
> >>>> _______________________________________________
> >>>> dri-devel mailing list
> >>>> dri-devel(a)lists.freedesktop.org
> >>>> https://urldefense.com/v3/__https://lists.freedesktop.org/mailman/listinfo/…
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
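If the patch above were merged, userspace would read the counter like any other /proc/meminfo field. A hedged sketch of a parser (note the DmaBufTotal field name comes from the patch under discussion and does not exist in mainline kernels; on a kernel without it, the function simply returns -1):

```c
#include <string.h>
#include <stdlib.h>

/* Extract the value (in kB) of a "DmaBufTotal:  NNN kB" line from
 * meminfo-style text; returns -1 when the field is absent. */
static long parse_dmabuf_total_kb(const char *meminfo)
{
	const char *p = strstr(meminfo, "DmaBufTotal:");

	if (!p)
		return -1;
	/* strtol skips the leading whitespace before the number. */
	return strtol(p + strlen("DmaBufTotal:"), NULL, 10);
}
```

This also illustrates the uapi-forever point made in the thread: once a monitoring tool depends on the exact "DmaBufTotal:" spelling, the field can never be renamed or removed.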
On 2021-04-16 at 16:37, Lee Jones wrote:
> Fixes the following W=1 kernel build warning(s):
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c:169: warning: Function parameter or member 'sched_score' not described in 'amdgpu_ring_init'
>
> Cc: Alex Deucher <alexander.deucher(a)amd.com>
> Cc: "Christian König" <christian.koenig(a)amd.com>
> Cc: David Airlie <airlied(a)linux.ie>
> Cc: Daniel Vetter <daniel(a)ffwll.ch>
> Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
> Cc: amd-gfx(a)lists.freedesktop.org
> Cc: dri-devel(a)lists.freedesktop.org
> Cc: linux-media(a)vger.kernel.org
> Cc: linaro-mm-sig(a)lists.linaro.org
> Signed-off-by: Lee Jones <lee.jones(a)linaro.org>
Reviewed-by: Christian König <christian.koenig(a)amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> index 688624ebe4211..7b634a1517f9c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> @@ -158,6 +158,7 @@ void amdgpu_ring_undo(struct amdgpu_ring *ring)
> * @irq_src: interrupt source to use for this ring
> * @irq_type: interrupt type to use for this ring
> * @hw_prio: ring priority (NORMAL/HIGH)
> + * @sched_score: optional score atomic shared with other schedulers
> *
> * Initialize the driver information for the selected ring (all asics).
> * Returns 0 on success, error on failure.
On 2021-04-16 at 16:37, Lee Jones wrote:
> Fixes the following W=1 kernel build warning(s):
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:444: warning: Function parameter or member 'sched_score' not described in 'amdgpu_fence_driver_init_ring'
>
> Cc: Alex Deucher <alexander.deucher(a)amd.com>
> Cc: "Christian König" <christian.koenig(a)amd.com>
> Cc: David Airlie <airlied(a)linux.ie>
> Cc: Daniel Vetter <daniel(a)ffwll.ch>
> Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
> Cc: Jerome Glisse <glisse(a)freedesktop.org>
> Cc: amd-gfx(a)lists.freedesktop.org
> Cc: dri-devel(a)lists.freedesktop.org
> Cc: linux-media(a)vger.kernel.org
> Cc: linaro-mm-sig(a)lists.linaro.org
> Signed-off-by: Lee Jones <lee.jones(a)linaro.org>
Reviewed-by: Christian König <christian.koenig(a)amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index 47ea468596184..30772608eac6c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -434,6 +434,7 @@ int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
> *
> * @ring: ring to init the fence driver on
> * @num_hw_submission: number of entries on the hardware queue
> + * @sched_score: optional score atomic shared with other schedulers
> *
> * Init the fence driver for the requested ring (all asics).
> * Helper function for amdgpu_fence_driver_init().
On Tue, 20 Apr 2021 at 14:46, <Peter.Enderborg(a)sony.com> wrote:
> On 4/20/21 3:34 PM, Daniel Stone wrote:
> > On Fri, 16 Apr 2021 at 13:34, Peter Enderborg <peter.enderborg(a)sony.com> wrote:
> > This adds a total used dma-buf memory. Details
> > can be found in debugfs, however it is not for everyone
> > and not always available. dma-buf are indirect allocated by
> > userspace. So with this value we can monitor and detect
> > userspace applications that have problems.
> >
> >
> > FWIW, this won't work super well for Android where gralloc is
> > implemented as a system service, so all graphics usage will instantly be
> > accounted to it.
>
> This resource allocation is a big part of why we need it. Why should it
> not work?
>
Sorry, I'd somehow completely misread that as being locally rather than
globally accounted. Given that, it's more correct, just also not super
useful.
Some drivers export allocation tracepoints which you could use if you have
a decent userspace tracing infrastructure. Short of that, many drivers
export this kind of thing through debugfs already. I think a better
long-term direction is probably getting accounting from dma-heaps rather
than extending core dmabuf itself.
Cheers,
Daniel