On 18.03.25 at 20:22, Daniel Almeida wrote:
> From: Asahi Lina <lina(a)asahilina.net>
>
> Drivers may want to support driver-private objects, which cannot be
> shared. This allows them to share a single lock and enables other
> optimizations.
>
> Add an `exportable` field to drm_gem_object, which blocks PRIME export
> if set to false. It is initialized to true in
> drm_gem_private_object_init.
We already have a method for doing that, which is used by almost all drivers (except for lsdc).
Basically you just create a function which checks the prerequisites for whether a buffer can be exported before calling drm_gem_prime_export(), and install that as the .export callback in drm_gem_object_funcs.
See amdgpu_gem_prime_export() for a simpler example.
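A minimal sketch of that pattern follows (the my_gem_object type, its
no_share flag and the to_my_gem_object() helper are made up for
illustration; only the shape of the callback matters):

/*
 * Driver-specific .export callback: refuse to export driver-private
 * objects, otherwise fall back to the common PRIME export path.
 */
static struct dma_buf *my_gem_prime_export(struct drm_gem_object *gobj,
					   int flags)
{
	struct my_gem_object *obj = to_my_gem_object(gobj);

	if (obj->no_share)
		return ERR_PTR(-EPERM);

	return drm_gem_prime_export(gobj, flags);
}

static const struct drm_gem_object_funcs my_gem_object_funcs = {
	/* ... other callbacks ... */
	.export = my_gem_prime_export,
};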
Regards,
Christian.
>
> Signed-off-by: Asahi Lina <lina(a)asahilina.net>
> Signed-off-by: Daniel Almeida <daniel.almeida(a)collabora.com>
> ---
> drivers/gpu/drm/drm_gem.c | 1 +
> drivers/gpu/drm/drm_prime.c | 5 +++++
> include/drm/drm_gem.h | 8 ++++++++
> 3 files changed, 14 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index ee811764c3df4b4e9c377a66afd4967512ba2001..8f998fe6beecd285ce3e2d5badfa95eb7d7bd548 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -195,6 +195,7 @@ void drm_gem_private_object_init(struct drm_device *dev,
>
> drm_vma_node_reset(&obj->vma_node);
> INIT_LIST_HEAD(&obj->lru_node);
> + obj->exportable = true;
> }
> EXPORT_SYMBOL(drm_gem_private_object_init);
>
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 32a8781cfd67b82ece7b7b94625715171bb41917..20aa350280abe9a6ed6742e131ff50c65bc9dfa9 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -387,6 +387,11 @@ static struct dma_buf *export_and_register_object(struct drm_device *dev,
> return dmabuf;
> }
>
> + if (!obj->exportable) {
> + dmabuf = ERR_PTR(-EINVAL);
> + return dmabuf;
> + }
> +
> if (obj->funcs && obj->funcs->export)
> dmabuf = obj->funcs->export(obj, flags);
> else
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index fdae947682cd0b7b06db5e35e120f049a0f30179..f700e4996eccb92597cca6b8c3df8e35b864c1e1 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -432,6 +432,14 @@ struct drm_gem_object {
> * The current LRU list that the GEM object is on.
> */
> struct drm_gem_lru *lru;
> +
> + /**
> + * @exportable:
> + *
> + * Whether this GEM object can be exported via the drm_gem_object_funcs->export
> + * callback. Defaults to true.
> + */
> + bool exportable;
> };
>
> /**
>
On Tuesday, 04 March 2025 at 13:15 +0530, Sumit Garg wrote:
> On Tue, Mar 04, 2025 at 08:17:23AM +0100, Jens Wiklander wrote:
> > Hi Daniel,
> >
> > On Fri, Feb 21, 2025 at 3:12 PM Daniel Stone <daniel(a)fooishbar.org> wrote:
> > >
> > > Hi Sumit,
> > >
> > > On Fri, 21 Feb 2025 at 11:24, Sumit Garg <sumit.garg(a)linaro.org> wrote:
> > > > On Tue, 18 Feb 2025 at 21:52, Daniel Stone <daniel(a)fooishbar.org> wrote:
> > > > > dma-heaps was created to solve the problem of having too many
> > > > > 'allocate $n bytes from $specialplace' uAPIs. The proliferation was
> > > > > painful and made it difficult for userspace to do what it needed to
> > > > > do. Userspace doesn't _yet_ make full use of it, but the solution is
> > > > > to make userspace make full use of it, not to go create entirely
> > > > > separate allocation paths for unclear reasons.
> > > > >
> > > > > Besides, I'm writing this from a platform that implements SVP not via
> > > > > TEE. I've worked on platforms which implement SVP without any TEE,
> > > > > where the TEE implementation would be at best a no-op stub, and at
> > > > > worst flat-out impossible.
> > > >
> > > > Can you elaborate a bit more on the non-TEE use-case for Secure Video
> > > > Path (SVP)? How does the protected/encrypted media content pipeline
> > > > work? Which architecture support does your use-case require? Is there
> > > > any higher privileged level firmware interaction required to perform
> > > > media content decryption into restricted memory? Do you plan to
> > > > upstream corresponding support in near future?
> > >
> > > You can see the MTK SVP patches on list which use the MTK SMC to mediate it.
> > >
> > > There are TI Jacinto platforms which implement a 'secure' area
> > > configured statically by (IIRC) BL2, with static permissions defined
> > > for each AXI endpoint, e.g. CPU write + codec RW + dispc read. I've
> > > heard of another SoC vendor doing the same, but I don't think I can
> > > share those details. There is no TEE interaction.
> > >
> > > I'm writing this message from an AMD laptop which implements
> > > restricted content paths outside of TEE. I don't have the full picture
> > > of how SVP is implemented on AMD systems, but I do know that I don't
> > > have any TEE devices exposed.
> > >
> > > > Let me try to elaborate on the Secure Video Path (SVP) flow requiring
> > > > a TEE implementation (in general terms a higher privileged firmware
> > > > managing the pipeline as the kernel/user-space has no access
> > > > permissions to the plain text media content):
> > > >
> > > > - [...]
> > >
> > > Yeah, I totally understand the TEE usecase. I think that TEE is a good
> > > design to implement this. I think that TEE should be used for SVP
> > > where it makes sense.
> > >
> > > Please understand that I am _not_ arguing that no-one should use TEE for SVP!
> > >
> > > > > So, again, let's
> > > > > please turn this around: _why_ TEE? Who benefits from exposing this as
> > > > > completely separate to the more generic uAPI that we specifically
> > > > > designed to handle things like this?
> > > >
> > > > The bridging between DMA heaps and TEE would still require user-space
> > > > to perform an IOCTL into TEE to register the DMA-bufs as you can see
> > > > here [1]. Then it will rather be two handles for user-space to manage.
> > >
> > > Yes, the decoder would need to do this. That's common though: if you
> > > want to share a buffer between V4L2 and DRM, you have three handles:
> > > the V4L2 buffer handle, the DRM GEM handle, and the dmabuf you use to
> > > bridge the two.
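For reference, a rough user-space sketch of juggling those three handles
(the buffer type, index and error handling are simplified; this is
illustrative, not taken from any real decoder):

/*
 * Export a dma-buf from a V4L2 buffer and import it as a DRM GEM handle.
 * The dma-buf fd is the piece that bridges the two subsystems.
 */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <xf86drm.h>

int bridge_v4l2_to_drm(int v4l2_fd, int drm_fd, unsigned int index,
		       uint32_t *gem_handle)
{
	struct v4l2_exportbuffer exp = {
		.type  = V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.index = index,			/* V4L2 buffer handle */
		.flags = O_RDWR | O_CLOEXEC,
	};

	if (ioctl(v4l2_fd, VIDIOC_EXPBUF, &exp) < 0)
		return -1;

	/* exp.fd is the dma-buf; importing it yields the DRM GEM handle */
	return drmPrimeFDToHandle(drm_fd, exp.fd, gem_handle);
}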
> > >
> > > > Similarly during restricted memory allocation/free we need another
> > > > glue layer under DMA heaps to TEE subsystem.
> > >
> > > Yep.
> > >
> > > > The reason, which has been iterated over many times in past threads,
> > > > is simply that:
> > > >
> > > > "If user-space has to interact with a TEE device for SVP use-case
> > > > then why it's not better to ask TEE to allocate restricted DMA-bufs
> > > > too"
> > >
> > > The first word in your proposition is load-bearing.
> > >
> > > Build out the usecase a little more here. You have a DRMed video
> > > stream coming in, which you need to decode (involving TEE for this
> > > usecase). You get a dmabuf handle to the decoded frame. You need to
> > > pass the dmabuf across to the Wayland compositor. The compositor needs
> > > to pass it to EGL/Vulkan to import and do composition, which in turn
> > > passes it to the GPU DRM driver. The output of the composition is in
> > > turn shared between the GPU DRM driver and the separate KMS DRM
> > > driver, with the involvement of GBM.
> > >
> > > For the platforms I'm interested in, the GPU DRM driver needs to
> > > switch into protected mode, which has no involvement at all with TEE -
> > > it's architecturally impossible to have TEE involved without moving
> > > most of the GPU driver into TEE and destroying performance. The
> > > display hardware also needs to engage protected mode, which again has
> > > no involvement with TEE and again would need to have half the driver
> > > moved into TEE for no benefit in order to do so. The Wayland
> > > compositor also has no interest in TEE: it tells the GPU DRM driver
> > > about the protected status of its buffers, and that's it.
> > >
> > > What these components _are_ opinionated about, is the way buffers are
> > > allocated and managed. We built out dmabuf modifiers for this usecase,
> > > and we have a good negotiation protocol around that. We also really
> > > care about buffer placement in some usecases - e.g. some display/codec
> > > hardware requires buffers to be sourced from contiguous memory, other
> > > hardware needs to know that when it shares buffers with another
> > > device, it needs to place the buffers outside of inaccessible/slow
> > > local RAM. So we built out dma-heaps, so every part of the component
> > > in the stack can communicate their buffer-placement needs in the same
> > > way as we do modifiers, and negotiate an acceptable allocation.
> > >
> > > That's my starting point for this discussion. We have a mechanism to
> > > deal with the fact that buffers need to be shared between different IP
> > > blocks which have their own constraints on buffer placement, avoiding
> > > the current problem of having every subsystem reinvent their own
> > > allocation uAPI which was burying us in impedance mismatch and
> > > confusion. That mechanism is dma-heaps. It seems like your starting
> > > point from this discussion is that you've implemented a TEE-centric
> > > design for SVP, and so all of userspace should bypass our existing
> > > cross-subsystem special-purpose allocation mechanism, and write
> > > specifically to one implementation. I believe that is a massive step
> > > backwards and an immediate introduction of technical debt.
> > >
> > > Again, having an implementation of SVP via TEE makes a huge amount of
> > > sense. Having _most_ SVP implementations via TEE still makes a lot of
> > > sense. Having _all_ SVP implementations eventually be via TEE would
> > > still make sense. But even if we were at that point - which we aren't
> > > - it still doesn't justify telling userspace 'use the generic dma-heap
> > > uAPI for every device-specific allocation constraint, apart from SVP
> > > which has a completely different way to allocate some bytes'.
> >
> > I must admit that I don't see how this makes a significant difference,
> > but then I haven't hacked much in the stacks you're talking about, so
> > I'm going to take your word for it.
> >
> > I've experimented with providing a dma-heap replacing the TEE API. The
> > implementation is more complex than I first anticipated, adding about
> > 400 lines to the patch set.
>
> I did anticipate this, but let's give it a try and see if DMA heaps
> really add any value from the user-space point of view. If they do, then it
> will be worth the maintenance overhead.
>
> > From user space, it looks like another
> > dma-heap. I'm using the names you gave earlier,
> > protected,secure-video, protected,trusted-ui, and
> > protected,secure-video-record. However, I wonder if we shouldn't use
> > "restricted" instead of "protected" since we had agreed to call it
> > restricted memory earlier.
>
> Let's stick with "restricted" memory buffer references only.
Until now, we didn't have a standard against which to weigh our naming
choice; we simply wanted to move away from "secure", which didn't mean much,
and "restricted" met our needs. I think the discussion is worth having again,
now that there is a standard that settled on "protected". Matching the
Khronos standard would reduce a lot of confusion.
https://docs.vulkan.org/guide/latest/protected.html
regards,
Nicolas
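For reference, allocation from any of these heaps goes through the generic
dma-heap uAPI; a rough user-space sketch (the heap name follows the
provisional naming discussed above and may well end up different, and error
handling is trimmed):

/*
 * Allocate a buffer from a named dma-heap and return the dma-buf fd,
 * which can then be passed to V4L2, DRM or the TEE for registration.
 */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

int alloc_protected_buffer(size_t len)
{
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd, ret;

	heap_fd = open("/dev/dma_heap/protected,secure-video",
		       O_RDONLY | O_CLOEXEC);
	if (heap_fd < 0)
		return -1;

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);

	return ret < 0 ? -1 : data.fd;	/* dma-buf fd on success */
}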
On Mon, Jan 20, 2025 at 08:45:51PM +1100, Alexey Kardashevskiy wrote:
> > For CC I'm expecting the KVM fd to be the handle for the cVM, so any
> > RPCs that want to call into the secure world need the KVM FD to get
> > the cVM's identifier. Ie a "bind to cVM" RPC will need the PCI
> > information and the cVM's handle.
>
> And keep the KVM fd open until unbind? Or just for the short time needed to
> call the PSP?
iommufd will keep the KVM fd alive so long as the vIOMMU object
exists. Other uses for kvm require it to work like this.
> > But it also seems to me that VFIO should be able to support putting
> > the device into the RUN state without involving KVM or cVMs.
>
> AMD's TDI bind handler in the PSP wants a guest handle ("GCTX") and a guest
> device BDFn, and VFIO has no desire to dive into this KVM business beyond
> IOMMUFD.
As in my other email, VFIO is not restricted to running VMs; useful
things should be available to apps like DPDK.
There is a use case for using TDISP and getting devices up into an
encrypted/attested state on pure bare metal without any KVM, and VFIO
should work in that use case too.
Jason
On 17/03/2025 03:37, feng.wei8(a)zte.com.cn wrote:
> From: FengWei <feng.wei8(a)zte.com.cn>
>
> Use the max3() macro instead of nesting max() calls to simplify the return
> statement.
>
> Signed-off-by: FengWei <feng.wei8(a)zte.com.cn>
> ---
> v3 -> v4
> fix the format of this patch.
> drivers/dma-buf/dma-heap.c | 2 +-
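For context, the change being discussed has this general shape (a generic
illustration, not the actual hunk from drivers/dma-buf/dma-heap.c):

#include <linux/minmax.h>
#include <linux/types.h>

/* max3() collapses the nested max() pattern into one call. */
static size_t pick_largest(size_t a, size_t b, size_t c)
{
	return max3(a, b, c);	/* equivalent to max(max(a, b), c) */
}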
You sent five versions per day of the same patch.
Look what was in v3:
v2 -> v3
fix the format of this patch
So you are doing the same thing over and over and sending it to us?
Seriously, ZTE, slow down and be sure you follow the process BEFORE you send
a flood of patches like that while learning on the go.
Best regards,
Krzysztof
On 12-02-25, 17:35, Jyothi Kumar Seerapu wrote:
> GSI hardware generates an interrupt for each transfer completion.
> For multiple messages within a single transfer, this results in
> N interrupts for N messages, leading to significant software
> interrupt latency.
>
> To mitigate this latency, utilize the Block Event Interrupt (BEI) mechanism.
> Enabling BEI instructs the GSI hardware to suppress interrupt generation,
> and BEI is disabled when an interrupt is necessary.
>
> When using BEI, consider splitting a single multi-message transfer into
> chunks of 8 messages internally and so interrupts are not expected for
> the first 7 message completions, only the last message triggers
> an interrupt, indicating the completion of 8 messages.
>
> This BEI mechanism enhances overall transfer efficiency.
That sounds good, but I don't like the idea of adding a custom interface
for this. Please use DMA_PREP_INTERRUPT instead. Adding this flag should
trigger N interrupts; its absence should lead to Block events only.
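A minimal sketch of that usage from a dmaengine client (the batching
function and its parameters are invented for illustration; channel setup
and DMA mapping are assumed to have happened elsewhere):

/*
 * Queue a batch of descriptors and request an interrupt only for the
 * last one by setting DMA_PREP_INTERRUPT solely on that descriptor.
 */
#include <linux/dmaengine.h>
#include <linux/errno.h>

static int queue_batch(struct dma_chan *chan, dma_addr_t *bufs,
		       size_t *lens, unsigned int count,
		       dma_async_tx_callback done_cb, void *cb_param)
{
	struct dma_async_tx_descriptor *desc;
	unsigned int i;

	for (i = 0; i < count; i++) {
		/* Only the final descriptor asks for an interrupt */
		unsigned long flags = (i == count - 1) ? DMA_PREP_INTERRUPT : 0;

		desc = dmaengine_prep_slave_single(chan, bufs[i], lens[i],
						   DMA_MEM_TO_DEV, flags);
		if (!desc)
			return -ENOMEM;

		if (i == count - 1) {
			desc->callback = done_cb;
			desc->callback_param = cb_param;
		}

		dmaengine_submit(desc);
	}

	dma_async_issue_pending(chan);
	return 0;
}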
--
~Vinod