Hello,
On 5/10/26 08:30, Deepanshu Kartikey wrote:
> virtio_gpu_cursor_plane_update() calls virtio_gpu_array_lock_resv()
> but ignores its return value. The function can fail in two ways:
>
> - dma_resv_lock_interruptible() returns -ERESTARTSYS when a signal
> is delivered while waiting for the reservation lock.
> - dma_resv_reserve_fences() returns -ENOMEM if it fails to allocate
> a fence slot; in this case lock_resv unlocks before returning.
>
> In both cases the resv lock is not held on return. The cursor path
> proceeds to queue a fenced transfer command. The queue path then
> walks the object array and calls dma_resv_add_fence() on the cursor
> BO's reservation. dma_resv_add_fence() requires the resv lock to be
> held; with lockdep enabled the missing lock trips
> dma_resv_assert_held():
>
> WARNING: drivers/dma-buf/dma-resv.c:296 at dma_resv_add_fence+0x71e/0x840
> Call Trace:
> virtio_gpu_array_add_fence+0xcd/0x140
> virtio_gpu_queue_ctrl_sgs
> virtio_gpu_queue_fenced_ctrl_buffer+0x578/0xfb0
> virtio_gpu_cursor_plane_update+0x411/0xbc0
> drm_atomic_helper_commit_planes+0x497/0xf10
> ...
> drm_mode_cursor_ioctl+0xd4/0x110
> drm_ioctl+0x5e6/0xc60
> __x64_sys_ioctl+0x18e/0x210
>
> Beyond the WARN, mutating the dma_resv fence list without the lock
> races with concurrent readers/writers and can corrupt the list.
>
> Check the return value of virtio_gpu_array_lock_resv(). On failure,
> drop the references taken by virtio_gpu_array_add_obj() with
> virtio_gpu_array_put_free() (which does not unlock, matching the
> not-locked state) and return without queueing the command. A
> skipped cursor frame is harmless; the WARN and the underlying race
> are not.
>
> The bug was reported by syzbot, triggered via fault injection
> (fail_nth) on the DRM_IOCTL_MODE_CURSOR path, which forces the
> -ENOMEM branch in dma_resv_reserve_fences().
>
> Reported-by: syzbot+72bd3dd3a5d5f39a0271(a)syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=72bd3dd3a5d5f39a0271
> Fixes: 5cfd31c5b3a3 ("drm/virtio: fix virtio_gpu_cursor_plane_update().")
> Cc: stable(a)vger.kernel.org
> Signed-off-by: Deepanshu Kartikey <kartikey406(a)gmail.com>
> ---
> drivers/gpu/drm/virtio/virtgpu_plane.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
> index a126d1b25f46..ca379b08b9ec 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_plane.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
> @@ -459,7 +459,10 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
> if (!objs)
> return;
> virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
> - virtio_gpu_array_lock_resv(objs);
> + if (virtio_gpu_array_lock_resv(objs)) {
> + virtio_gpu_array_put_free(objs);
> + return;
> + }
> virtio_gpu_cmd_transfer_to_host_2d
> (vgdev, 0,
> plane->state->crtc_w,
Thanks for the patch. An atomic update shouldn't fail due to non-critical
errors such as a signal interrupt. Could you please move the code that
may fail out of update() and into the .prepare_fb()/.cleanup_fb() callbacks?
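For context, the contract being referred to here: in the DRM atomic helpers, .prepare_fb() returns an int and a failure there aborts the whole commit before anything is applied, while the update hooks are void and must not fail. A rough userspace model of that split (all names hypothetical, not driver code; -EINTR stands in for the kernel-internal -ERESTARTSYS):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical model of the atomic-commit contract: fallible work
 * (locking, allocation) belongs in prepare(), which may fail and abort
 * the commit; update() runs later and must not fail. */

struct mock_state {
	bool resv_locked;	/* stands in for the reservation lock */
	bool frame_queued;
};

/* prepare step: may fail, e.g. on signal delivery while locking */
static int mock_prepare_fb(struct mock_state *s, bool signal_pending)
{
	if (signal_pending)
		return -EINTR;	/* commit is aborted, nothing applied */
	s->resv_locked = true;
	return 0;
}

/* update step: runs only if every prepare step succeeded; cannot fail */
static void mock_plane_update(struct mock_state *s)
{
	assert(s->resv_locked);	/* lock was taken in prepare */
	s->frame_queued = true;
}

/* driver core: abort the whole commit if any prepare step fails */
static int mock_atomic_commit(struct mock_state *s, bool signal_pending)
{
	int ret = mock_prepare_fb(s, signal_pending);

	if (ret)
		return ret;
	mock_plane_update(s);
	return 0;
}
```

The point of the split is that a failure surfaces to the caller as a failed commit that can be retried, instead of a silently dropped frame mid-update.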
--
Best regards,
Dmitry
On Wed, May 06, 2026 at 04:55:27PM +0100, Matt Evans wrote:
> Hi Leon,
>
> On 06/05/2026 16:29, Leon Romanovsky wrote:
> >
> > On Wed, May 06, 2026 at 02:53:31PM +0100, Matt Evans wrote:
> > > Hi Alex,
> > >
> > > On 01/05/2026 20:12, Alex Williamson wrote:
> > > >
> > > > On Thu, 16 Apr 2026 06:17:44 -0700
> > > > Matt Evans <mattev(a)meta.com> wrote:
> > > >
> > > > > vfio_pci_dma_buf_cleanup() assumed all VFIO device DMABUFs need to be
> > > > > revoked. However, if vfio_pci_dma_buf_move() revokes DMABUFs before
> > > > > the fd/device closes, then vfio_pci_dma_buf_cleanup() would do a
> > > > > second/underflowing kref_put() then wait_for_completion() on a
> > > > > completion that never fires. Fixed by predicating on revocation
> > > > > status.
> > > > >
> > > > > This could happen if PCI_COMMAND_MEMORY is cleared before closing the
> > > > > device fd (but the scenario is more likely to hit when future commits
> > > > > add more methods to revoke DMABUFs).
> > > > >
> > > > > Fixes: 1a8a5227f2299 ("vfio: Wait for dma-buf invalidation to complete")
> > > > > Signed-off-by: Matt Evans <mattev(a)meta.com>
> > > > > ---
> > > > >
> > > > > (Just a fix, but later "vfio/pci: Convert BAR mmap() to use a DMABUF"
> > > > > and "vfio/pci: Permanently revoke a DMABUF on request" depend on this
> > > > > context, so including in this series.)
> > > >
> > > > We really need a fix for this split out from this series. It's already
> > > > been shown[1] that this is trivially reachable. Carlos proposed[2] a
> > > > similar solution to the one below. I was concurrently working on the
> > > > issue and suggested an alternative[3]. Let's pick a solution for
> > > > 7.1-rc. Thanks,
> > >
> > > It looks like [3] is progressing, so I'll drop this one when I can rebase
> > > onto it.
> > >
> > > I noticed [3] removes the dma_resv_lock(priv->dmabuf->resv) around the
> > > priv->vdev = NULL, and this series' vfio_pci_mmap_huge_fault() relies on
> > > vdev only changing whilst resv is held to resolve a race between a fault and
> > > cleanup (see patch 7 of this series). The handler takes resv so that it can
> > > stably test vdev in order to take memory_lock.
> >
> > I think that you should rely on priv->revoked and not on priv->vdev.
>
> Needs both unfortunately, as the fault handler ultimately needs to take
> vdev->memory_lock.
One can argue that if priv->revoked == true, all accesses to the device
should be denied and treated as if priv->vdev == NULL.
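That invariant can be stated as a single predicate evaluated under the resv lock; a userspace mock (all names hypothetical) of what the fault handler would test:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace mock of the invariant proposed above: priv->revoked == true
 * is treated exactly like priv->vdev == NULL, so the fault handler only
 * has to test one condition. Names are hypothetical, not driver code. */

struct mock_vdev { int dummy; };

struct mock_priv {
	struct mock_vdev *vdev;
	bool revoked;
};

/* Would be called with the dmabuf's resv lock held in the real handler. */
static bool mock_access_allowed(const struct mock_priv *priv)
{
	/* revoked implies vdev must not be dereferenced */
	return !priv->revoked && priv->vdev != NULL;
}
```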
Thanks
On 06/05/2026 11:45, Nicolas Frattaroli wrote:
> RAM is not, in fact, cheap. Especially on embedded systems with a low
> amount of memory, but known and well-defined userspace, more explicit
> resource management can lead to better utilisation patterns. As an
> example, a resource manager process on a purpose-built device may wish
> to launch, and then explicitly swap out, memory of processes that are
> kept "warm", to improve perceived startup latency of individual
> full-screen applications without making the kernel figure out the usage
> pattern from observation alone in order to swap out the right pages.
Have you considered memory control groups (memcg) for this purpose?
Imposing a limit lower than what is currently allocated should trigger reclaim,
so 'background' applications could have the limit lowered and then
restored when moved to the foreground.
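Assuming cgroup v2 with one cgroup per application (the paths here are hypothetical), the pattern would be roughly:

```sh
# Hypothetical paths; assumes cgroup v2 with one cgroup per application.

# "Background" an app: ask the kernel to proactively reclaim (and swap
# out) up to 256M of its memory, without changing its limits.
echo 256M > /sys/fs/cgroup/apps/warm-app/memory.reclaim

# Alternatively, lower the throttle limit below current usage so reclaim
# is sustained, then lift it again when the app returns to the foreground.
echo 32M > /sys/fs/cgroup/apps/warm-app/memory.high
echo max > /sys/fs/cgroup/apps/warm-app/memory.high
```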
> To allow for this explicit control in the context of panthor's GPU
> memory, add two new sysfs knobs. The first, mem_reclaim, runs an
> explicit priv BO reclaim cycle on the TGID written to it.
>
> The second, mem_claim, does the opposite: it swaps BOs back into active
> memory.
How necessary is this mem_claim for performance? Have you done any
benchmarking of explicitly claiming vs just allowing it to happen
naturally? My gut feeling is that mem_claim should be unnecessary in
most situations, but I'm prepared to be proved wrong.
I'm not saying this series is necessarily the wrong approach - but I
think we need a bit more justification for adding a new API for this.
Thanks,
Steve
> Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli(a)collabora.com>
> ---
> Nicolas Frattaroli (4):
> drm/panthor: Add freed_sz parameter to reclaim_priv_bos
> MAINTAINERS: Add sysfs ABI docs to list of panthor files
> drm/panthor: Add explicit memory reclaim sysfs knob
> drm/panthor: Add explicit memory claim sysfs knob
>
> Documentation/ABI/testing/sysfs-driver-panthor-mem | 34 ++++++++
> MAINTAINERS | 1 +
> drivers/gpu/drm/panthor/panthor_drv.c | 93 ++++++++++++++++++++++
> drivers/gpu/drm/panthor/panthor_gem.c | 7 +-
> drivers/gpu/drm/panthor/panthor_gem.h | 1 +
> drivers/gpu/drm/panthor/panthor_mmu.c | 70 +++++++++++++++-
> drivers/gpu/drm/panthor/panthor_mmu.h | 4 +
> 7 files changed, 205 insertions(+), 5 deletions(-)
> ---
> base-commit: 2c4b906cd135bbb44855287d0d0eff0ee0b47afe
> change-id: 20260506-panthor-explicit-reclaim-3dffed028d8c
>
> Best regards,
> --
> Nicolas Frattaroli <nicolas.frattaroli(a)collabora.com>
>
On Wed, May 06, 2026 at 02:53:31PM +0100, Matt Evans wrote:
> Hi Alex,
>
> On 01/05/2026 20:12, Alex Williamson wrote:
> >
> > On Thu, 16 Apr 2026 06:17:44 -0700
> > Matt Evans <mattev(a)meta.com> wrote:
> >
> > > vfio_pci_dma_buf_cleanup() assumed all VFIO device DMABUFs need to be
> > > revoked. However, if vfio_pci_dma_buf_move() revokes DMABUFs before
> > > the fd/device closes, then vfio_pci_dma_buf_cleanup() would do a
> > > second/underflowing kref_put() then wait_for_completion() on a
> > > completion that never fires. Fixed by predicating on revocation
> > > status.
> > >
> > > This could happen if PCI_COMMAND_MEMORY is cleared before closing the
> > > device fd (but the scenario is more likely to hit when future commits
> > > add more methods to revoke DMABUFs).
> > >
> > > Fixes: 1a8a5227f2299 ("vfio: Wait for dma-buf invalidation to complete")
> > > Signed-off-by: Matt Evans <mattev(a)meta.com>
> > > ---
> > >
> > > (Just a fix, but later "vfio/pci: Convert BAR mmap() to use a DMABUF"
> > > and "vfio/pci: Permanently revoke a DMABUF on request" depend on this
> > > context, so including in this series.)
> >
> > We really need a fix for this split out from this series. It's already
> > been shown[1] that this is trivially reachable. Carlos proposed[2] a
> > similar solution to the one below. I was concurrently working on the
> > issue and suggested an alternative[3]. Let's pick a solution for
> > 7.1-rc. Thanks,
>
> It looks like [3] is progressing, so I'll drop this one when I can rebase
> onto it.
>
> I noticed [3] removes the dma_resv_lock(priv->dmabuf->resv) around the
> priv->vdev = NULL, and this series' vfio_pci_mmap_huge_fault() relies on
> vdev only changing whilst resv is held to resolve a race between a fault and
> cleanup (see patch 7 of this series). The handler takes resv so that it can
> stably test vdev in order to take memory_lock.
I think that you should rely on priv->revoked and not on priv->vdev.
Thanks
>
> Must your fix change vdev outside of holding resv? I'm still sketching
> alternatives; at first glance perhaps the fault handler could rely on vdev
> being valid if !revoked, which can be tested holding [only] resv.
>
>
> Thanks,
>
> Matt
>
> >
> > Alex
> >
> > [1]https://lore.kernel.org/all/GVXPR02MB12019AA6014F27EF5D773E89BFB372@GVXPR…
> > [2]https://lore.kernel.org/all/20260429182736.409323-2-clopez@suse.de/
> > [3]https://lore.kernel.org/all/20260429142242.70f746b4@nvidia.com/
> >
> > > drivers/vfio/pci/vfio_pci_dmabuf.c | 9 +++++++--
> > > 1 file changed, 7 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/vfio/pci/vfio_pci_dmabuf.c b/drivers/vfio/pci/vfio_pci_dmabuf.c
> > > index 281ba7d69567..04478b7415a0 100644
> > > --- a/drivers/vfio/pci/vfio_pci_dmabuf.c
> > > +++ b/drivers/vfio/pci/vfio_pci_dmabuf.c
> > > @@ -395,20 +395,25 @@ void vfio_pci_dma_buf_cleanup(struct vfio_pci_core_device *vdev)
> > > down_write(&vdev->memory_lock);
> > > list_for_each_entry_safe(priv, tmp, &vdev->dmabufs, dmabufs_elm) {
> > > + bool was_revoked;
> > > +
> > > if (!get_file_active(&priv->dmabuf->file))
> > > continue;
> > > dma_resv_lock(priv->dmabuf->resv, NULL);
> > > list_del_init(&priv->dmabufs_elm);
> > > priv->vdev = NULL;
> > > + was_revoked = priv->revoked;
> > > priv->revoked = true;
> > > dma_buf_invalidate_mappings(priv->dmabuf);
> > > dma_resv_wait_timeout(priv->dmabuf->resv,
> > > DMA_RESV_USAGE_BOOKKEEP, false,
> > > MAX_SCHEDULE_TIMEOUT);
> > > dma_resv_unlock(priv->dmabuf->resv);
> > > - kref_put(&priv->kref, vfio_pci_dma_buf_done);
> > > - wait_for_completion(&priv->comp);
> > > + if (!was_revoked) {
> > > + kref_put(&priv->kref, vfio_pci_dma_buf_done);
> > > + wait_for_completion(&priv->comp);
> > > + }
> > > vfio_device_put_registration(&vdev->vdev);
> > > fput(priv->dmabuf->file);
> > > }
> >
>
On 06/05/2026 11:45, Nicolas Frattaroli wrote:
> panthor_mmu_reclaim_priv_bos returns the number of freed pages. However,
> how many bytes of freed memory this translates to can't generally be
> deduced from the number of pages, as the page size is a per-VM property.
>
> It may be useful to know the exact number of bytes that have been freed
The "useful" aspect seems to just be a drm_dbg() message from what I can
see with this series? Am I missing something or is it not actually that
useful?
Thanks,
Steve
> for observability and debugging purposes. To that end, add a new
> parameter "freed_sz", which is a pointer to a size_t where this
> information will be stored. It may be NULL, in which case the
> information isn't stored at all.
>
> Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli(a)collabora.com>
> ---
> drivers/gpu/drm/panthor/panthor_gem.c | 3 ++-
> drivers/gpu/drm/panthor/panthor_mmu.c | 12 ++++++++++--
> drivers/gpu/drm/panthor/panthor_mmu.h | 1 +
> 3 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> index 13295d7a593d..80e82238f3c5 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.c
> +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> @@ -1511,7 +1511,8 @@ panthor_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
> goto out;
>
> freed += panthor_mmu_reclaim_priv_bos(ptdev, sc->nr_to_scan - freed,
> - &remaining, panthor_gem_try_evict);
> + &remaining, NULL,
> + panthor_gem_try_evict);
> if (freed >= sc->nr_to_scan)
> goto out;
>
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index a7ee14986849..b81388b35a58 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -3127,13 +3127,18 @@ int panthor_vm_prepare_mapped_bos_resvs(struct drm_exec *exec, struct panthor_vm
> unsigned long
> panthor_mmu_reclaim_priv_bos(struct panthor_device *ptdev,
> unsigned int nr_to_scan, unsigned long *remaining,
> + size_t *freed_sz,
> bool (*shrink)(struct drm_gem_object *,
> struct ww_acquire_ctx *))
> {
> + unsigned long newly_freed;
> unsigned long freed = 0;
> LIST_HEAD(remaining_vms);
> LIST_HEAD(vms);
>
> + if (freed_sz)
> + *freed_sz = 0;
> +
> mutex_lock(&ptdev->reclaim.lock);
> list_splice_init(&ptdev->reclaim.vms, &vms);
>
> @@ -3152,8 +3157,11 @@ panthor_mmu_reclaim_priv_bos(struct panthor_device *ptdev,
>
> mutex_unlock(&ptdev->reclaim.lock);
>
> - freed += drm_gem_lru_scan(&vm->reclaim.lru, nr_to_scan - freed,
> - remaining, shrink, NULL);
> + newly_freed = drm_gem_lru_scan(&vm->reclaim.lru, nr_to_scan - freed,
> + remaining, shrink, NULL);
> + if (freed_sz)
> + *freed_sz += panthor_vm_page_size(vm) * newly_freed;
> + freed += newly_freed;
>
> mutex_lock(&ptdev->reclaim.lock);
>
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.h b/drivers/gpu/drm/panthor/panthor_mmu.h
> index 3522fbbce369..12b18b5f90e1 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.h
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.h
> @@ -52,6 +52,7 @@ int panthor_vm_evict_bo_mappings_locked(struct panthor_gem_object *bo);
> unsigned long
> panthor_mmu_reclaim_priv_bos(struct panthor_device *ptdev,
> unsigned int nr_to_scan, unsigned long *remaining,
> + size_t *freed_sz,
> bool (*shrink)(struct drm_gem_object *,
> struct ww_acquire_ctx *));
> int panthor_vm_prepare_mapped_bos_resvs(struct drm_exec *exec,
>
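The accounting the patch adds boils down to summing page_size * pages per VM, since the page size is a per-VM property; a self-contained userspace sketch of that arithmetic (mock types, illustrative values):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the freed_sz accounting in the patch: bytes freed
 * must be summed per VM because the page size is a per-VM property.
 * Types and names are mocks, not the panthor API. */

struct mock_vm {
	size_t page_size;	/* e.g. 4K or 64K, per-VM */
	unsigned long freed;	/* pages reclaimed from this VM's LRU */
};

static unsigned long mock_reclaim(const struct mock_vm *vms, int nr_vms,
				  size_t *freed_sz)
{
	unsigned long freed = 0;

	if (freed_sz)
		*freed_sz = 0;	/* mirror the patch: zero before the loop */

	for (int i = 0; i < nr_vms; i++) {
		if (freed_sz)
			*freed_sz += vms[i].page_size * vms[i].freed;
		freed += vms[i].freed;
	}
	return freed;
}
```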
On Wed, May 06, 2026 at 02:43:42PM +0200, Nicolas Frattaroli wrote:
> On Wednesday, 6 May 2026 12:08:24 Central European Summer Time Maxime Ripard wrote:
> > Hi,
> >
> > On Tue, May 05, 2026 at 04:05:10PM +0200, Ketil Johnsen wrote:
> > > From: Florent Tomasin <florent.tomasin(a)arm.com>
> > >
> > > This patch allows Panthor to allocate buffer objects from a
> > > protected heap. The Panthor driver should be seen as a consumer
> > > of the heap and not an exporter.
> > >
> > > Protected memory buffers needed by the Panthor driver:
> > > - On CSF FW load, the Panthor driver must allocate a protected
> > > buffer object to hold data to use by the FW when in protected
> > > mode. This protected buffer object is owned by the device
> > > and does not belong to a process.
> > > - On CSG creation, the Panthor driver must allocate a protected
> > > suspend buffer object for the FW to store data when suspending
> > > the CSG while in protected mode. The kernel owns this allocation
> > > and does not allow user space mapping. The format of the data
> > > in this buffer is only known by the FW and does not need to be
> > > shared with other entities.
> > >
> > > The driver will retrieve the protected heap using the name of the
> > > heap provided to the driver as module parameter.
> >
> > I know it's what dma_heap_find asks for, but I wonder if it wouldn't be
> > better in the device tree and lookup through the device node? heaps are
> > going to have a node anyway, right?
> >
> > This would allow you to have a default that works and not mess too much
> > with the kernel parameters that aren't always easy to change for
> > end-users.
>
> Hopefully the kernel parameters aren't easy to change for end-users on
> systems that deploy this. :) The use-case is copy protection for embedded
> devices running on locked-down systems. Though admittedly the mechanism
> works even on "tampered"-with systems, as long as the underlying hardware
> implements the access restrictions properly.
I guess it wasn't just about the user tampering with it, but also about
distros shipping, say, a signed UKI that would support multiple
platforms with 42 versions of optee but all using panthor. I'm not sure
we can expect a single heap name to work for all of them.
> I'm a bit hesitant about putting this in the DT myself. It would solve the problem
> that panthor could probe before the heap provider and needs to handle
> deferral by itself, but it does mean that we'd be putting software
> configuration into the device tree.
Is it, though? If the system has a protected allocator, and if panthor
absolutely needs to allocate from that allocator, it's not software
configuration: it's a description of what the platform looks like from
Linux's point of view.
Put differently, it's no more software than optee is, and yet optee has
its own node.
> Having the secure heap be a node with no address would allow the tee
> (or whatever else) to still dynamically allocate it wherever, and let
> us handle the dependency relationship between dma heap and GPU, but
> then we require that tee heap driver implementations play nice with
> this scheme, and bring OF into the dma_heap APIs.
I mean, it's a dma_heap API that we create so we don't bring anything
more to it :)
Maxime
On Wed, May 06, 2026 at 12:35:42PM +1000, Alexey Kardashevskiy wrote:
> Hi!
>
> Let's reignite this topic.
>
> I've been using these patches + QEMU side hacks for 6+ months. And it's been fine until I got a device where the MSI-X BAR is in the middle of another BAR marked as TEE in the TDISP interface report. And there is no trusted MSI-X yet.
>
> Every time QEMU mmaps a BAR, I request a dmabuf fd from VFIO in QEMU. Since mapping an entire MSI-X BAR is allowed by default, VFIORegion::nr_mmaps == 1 and it covers the entire BAR.
>
> Problem: KVM memslot mismatches the dmabuf fd size
Huh? kvm does not care about dmabuf at all? Are you running other
patches to hook kvm and dmabuf?
Putting a slice in a dmabuf is a well-understood need for MSI, so I
expect whatever kvm dmabuf interface gets merged to accommodate this?
> Solution 2: modify the logic in VFIO dmabuf to allow multiple KVM memory
> slots per dmabuf. Now it is kvm_memory_slot::dmabuf_attach with no
> offset into the dmabuf and one kvm_vfio_dmabuf per dma_buf.
Yes, when kvm learns to take in a dmabuf it needs to take in a slice,
not the whole buf. Or you need to create multiple dmabufs with the
necessary slices from the VFIO. The upstream vfio dmabuf creation
allows creating it with a slice.
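Whatever slice interface lands, the request has to be range-checked against the BAR, e.g. to exclude an MSI-X table in the middle of it. A minimal, overflow-safe userspace check (helper name hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical range check for carving a slice (offset/len) out of a
 * BAR-backed dmabuf. Written to be safe against offset + len wrapping. */
static bool mock_slice_valid(uint64_t offset, uint64_t len,
			     uint64_t bar_size)
{
	if (!len)
		return false;
	if (offset >= bar_size)
		return false;
	/* overflow-safe form of "offset + len <= bar_size" */
	return len <= bar_size - offset;
}
```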
Jason
On Wed, May 06, 2026 at 12:50:15PM +0200, Boris Brezillon wrote:
> On Wed, 6 May 2026 12:08:24 +0200
> Maxime Ripard <mripard(a)kernel.org> wrote:
>
> > Hi,
> >
> > On Tue, May 05, 2026 at 04:05:10PM +0200, Ketil Johnsen wrote:
> > > From: Florent Tomasin <florent.tomasin(a)arm.com>
> > >
> > > This patch allows Panthor to allocate buffer objects from a
> > > protected heap. The Panthor driver should be seen as a consumer
> > > of the heap and not an exporter.
> > >
> > > Protected memory buffers needed by the Panthor driver:
> > > - On CSF FW load, the Panthor driver must allocate a protected
> > > buffer object to hold data to use by the FW when in protected
> > > mode. This protected buffer object is owned by the device
> > > and does not belong to a process.
> > > - On CSG creation, the Panthor driver must allocate a protected
> > > suspend buffer object for the FW to store data when suspending
> > > the CSG while in protected mode. The kernel owns this allocation
> > > and does not allow user space mapping. The format of the data
> > > in this buffer is only known by the FW and does not need to be
> > > shared with other entities.
> > >
> > > The driver will retrieve the protected heap using the name of the
> > > heap provided to the driver as module parameter.
> >
> > I know it's what dma_heap_find asks for, but I wonder if it wouldn't be
> > better in the device tree and lookup through the device node? heaps are
> > going to have a node anyway, right?
>
> I'm not too sure. Take the PROTMEM (name="protected,xxxx") dma_heaps
> instantiated by optee for instance, I don't think the originating
> tee_device comes from a device node, nor is the underlying heap
> described as a device node. The reserved memory pool this protected heap
> comes from is most likely defined somewhere as reserved memory in the
> DT, but there's nothing to correlate this range of reserved mem to some
> sub-range that the TEE implementation is carving out to provide
> protected memory.
Maybe we should be working on a DT binding for heaps then? Something
simple like we have for clocks with a phandle and an ID would probably
be enough. In optee's case, it looks like it would map nicely with
TEE_DMA_HEAP_* flags too.
The only two that wouldn't be covered would be the system and default
CMA heap if not set up in the DT, which shouldn't be too bad for this
particular use-case.
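For illustration, a clocks-style binding might look roughly like this (every name and address here is invented; no such binding exists yet):

```dts
/ {
	protected_heap: tee-protected-heap {
		/* hypothetical compatible; provider instantiated by optee */
		compatible = "optee,protected-dma-heap";
		#dma-heap-cells = <1>;
	};

	gpu@fb000000 {
		compatible = "arm,mali-valhall-csf";
		/* consumer side: phandle + ID, like clocks */
		dma-heaps = <&protected_heap 0>; /* 0: hypothetical "protected" ID */
		dma-heap-names = "protected";
	};
};
```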
> > This would allow you to have a default that works and not mess too much
> > with the kernel parameters that aren't always easy to change for
> > end-users.
>
> I guess we can have a default list of heaps that we know provide
> protected memory for GPU rendering if that helps. Right now this list
> would contain only "protected,trusted-ui" :D. The other option would be
> to make this list a panthor Kconfig option and not expose it as a module
> param.
My main concern is that firmware builds are board-specific, and thus
their capabilities aren't something we can reasonably expect to be
consistent across boards, SoCs and platforms. Kernel images (and the
kernel parameters), however, can be made generic, and unreasonably hard
for users to modify once you start playing with things like secure boot
or measured boot.
The only thing bridging the gap between the two is the DT.
Maxime
Hi,
On Tue, May 05, 2026 at 04:05:10PM +0200, Ketil Johnsen wrote:
> From: Florent Tomasin <florent.tomasin(a)arm.com>
>
> This patch allows Panthor to allocate buffer objects from a
> protected heap. The Panthor driver should be seen as a consumer
> of the heap and not an exporter.
>
> Protected memory buffers needed by the Panthor driver:
> - On CSF FW load, the Panthor driver must allocate a protected
> buffer object to hold data to use by the FW when in protected
> mode. This protected buffer object is owned by the device
> and does not belong to a process.
> - On CSG creation, the Panthor driver must allocate a protected
> suspend buffer object for the FW to store data when suspending
> the CSG while in protected mode. The kernel owns this allocation
> and does not allow user space mapping. The format of the data
> in this buffer is only known by the FW and does not need to be
> shared with other entities.
>
> The driver will retrieve the protected heap using the name of the
> heap provided to the driver as module parameter.
I know it's what dma_heap_find asks for, but I wonder if it wouldn't be
better in the device tree and lookup through the device node? heaps are
going to have a node anyway, right?
This would allow you to have a default that works and not mess too much
with the kernel parameters that aren't always easy to change for
end-users.
Maxime