On Wed, Apr 9, 2025 at 2:50 PM Sumit Garg <sumit.garg(a)kernel.org> wrote:
>
> On Tue, Apr 08, 2025 at 03:28:45PM +0200, Jens Wiklander wrote:
> > On Tue, Apr 8, 2025 at 11:14 AM Sumit Garg <sumit.garg(a)kernel.org> wrote:
> > >
> > > On Tue, Apr 01, 2025 at 10:33:04AM +0200, Jens Wiklander wrote:
> > > > On Tue, Apr 1, 2025 at 9:58 AM Sumit Garg <sumit.garg(a)kernel.org> wrote:
> > > > >
> > > > > On Tue, Mar 25, 2025 at 11:55:46AM +0100, Jens Wiklander wrote:
> > > > > > Hi Sumit,
> > > > > >
> > > > >
> > > > > <snip>
> > > > >
> > > > > >
> > > > > > >
> > > > > > > > +
> > > > > > > > +#include "tee_private.h"
> > > > > > > > +
> > > > > > > > +struct tee_dma_heap {
> > > > > > > > + struct dma_heap *heap;
> > > > > > > > + enum tee_dma_heap_id id;
> > > > > > > > + struct tee_rstmem_pool *pool;
> > > > > > > > + struct tee_device *teedev;
> > > > > > > > + /* Protects pool and teedev above */
> > > > > > > > + struct mutex mu;
> > > > > > > > +};
> > > > > > > > +
> > > > > > > > +struct tee_heap_buffer {
> > > > > > > > + struct tee_rstmem_pool *pool;
> > > > > > > > + struct tee_device *teedev;
> > > > > > > > + size_t size;
> > > > > > > > + size_t offs;
> > > > > > > > + struct sg_table table;
> > > > > > > > +};
> > > > > > > > +
> > > > > > > > +struct tee_heap_attachment {
> > > > > > > > + struct sg_table table;
> > > > > > > > + struct device *dev;
> > > > > > > > +};
> > > > > > > > +
> > > > > > > > +struct tee_rstmem_static_pool {
> > > > > > > > + struct tee_rstmem_pool pool;
> > > > > > > > + struct gen_pool *gen_pool;
> > > > > > > > + phys_addr_t pa_base;
> > > > > > > > +};
> > > > > > > > +
> > > > > > > > +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
> > > > > > >
> > > > > > > Can this dependency rather be better managed via Kconfig?
> > > > > >
> > > > > > This was the easiest yet somewhat flexible solution I could find. If
> > > > > > you have something better, let's use that instead.
> > > > > >
> > > > >
> > > > > --- a/drivers/tee/optee/Kconfig
> > > > > +++ b/drivers/tee/optee/Kconfig
> > > > > @@ -5,6 +5,7 @@ config OPTEE
> > > > > depends on HAVE_ARM_SMCCC
> > > > > depends on MMU
> > > > > depends on RPMB || !RPMB
> > > > > + select DMABUF_HEAPS
> > > > > help
> > > > > This implements the OP-TEE Trusted Execution Environment (TEE)
> > > > > driver.
> > > >
> > > > I wanted to avoid that since there are plenty of use cases where
> > > > DMABUF_HEAPS aren't needed.
> > >
> > > Yeah, but how will users figure out the dependency to enable DMA
> > > heaps with the TEE subsystem?
> >
> > I hope, without too much difficulty. They are after all looking for a
> > way to allocate memory from a DMA heap.
> >
> > > So it's better if we provide a generic kernel
> > > Kconfig option which enables all the default features.
> >
> > I disagree, it should be possible to configure without DMABUF_HEAPS if desired.
>
> It's hard to see a use-case for that additional compile-time option. If
> you are worried about kernel size then those can be built as modules. On
> the other hand, the benefit is that we avoid ifdeffery and provide sane
> TEE defaults where features can be detected and enabled at runtime
> instead.
My primary concern isn't kernel size, even if that isn't irrelevant. It
doesn't seem right to casually enable features that weren't asked for.
In this case, it's neither unreasonable nor unexpected that
DMABUF_HEAPS must be explicitly enabled in the config if a heap
interface is needed. It's the same as before this patch set.
>
> >
> > >
> > > > This seems to do the job:
> > > > +config TEE_DMABUF_HEAP
> > > > + bool
> > > > + depends on TEE = y && DMABUF_HEAPS
> > > >
> > > > We can only use DMABUF_HEAPS if the TEE subsystem is compiled into the kernel.
> > >
> > > Ah, I see. So we aren't exporting the DMA heap APIs for the TEE subsystem
> > > to use. We should do that so that there isn't a hard dependency on
> > > compiling them into the kernel.
> >
> > I was saving that for a later patch set as a later problem. We may
> > save some time by not doing it now.
> >
>
> But I think it's not correct to just reuse internal APIs from the DMA
> heaps subsystem without exporting them. It can be seen as an
> inter-subsystem API contract breach. I hope it won't be an issue with
> the DMA heap maintainers regarding export of those APIs.
Fair enough. I'll add a patch in the next patch set for that. I guess
the same goes for CMA.
Cheers,
Jens
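
For reference, a minimal sketch of the hidden Kconfig helper discussed
above, using the TEE_DMABUF_HEAP name from the snippet quoted earlier
(an illustration, not the final patch):

config TEE_DMABUF_HEAP
	bool
	default y
	depends on TEE = y && DMABUF_HEAPS

A promptless bool is never enabled unless it has a default or is
selected, so the "default y" just mirrors the dependency: the symbol
becomes y exactly when TEE is built-in and DMABUF_HEAPS is enabled,
while DMABUF_HEAPS itself remains a user choice. The TEE heap code can
then test a single symbol:

#ifdef CONFIG_TEE_DMABUF_HEAP
	/* DMA-heap backed restricted-memory support */
#else
	/* static inline stubs */
#endif

instead of open-coding !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS).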
On 09.04.25 at 16:01, Philipp Stanner wrote:
> On Wed, 2025-04-09 at 15:14 +0200, Christian König wrote:
>> On 09.04.25 at 14:56, Philipp Stanner wrote:
>>> On Wed, 2025-04-09 at 14:51 +0200, Philipp Stanner wrote:
>>>> On Wed, 2025-04-09 at 14:39 +0200, Boris Brezillon wrote:
>>>>> Hi Philipp,
>>>>>
>>>>> On Wed, 9 Apr 2025 14:06:37 +0200
>>>>> Philipp Stanner <phasta(a)kernel.org> wrote:
>>>>>
>>>>>> dma_fence_is_signaled()'s name strongly reads as if this
>>>>>> function
>>>>>> were
>>>>>> intended for checking whether a fence is already signaled.
>>>>>> Also
>>>>>> the
>>>>>> boolean it returns hints at that.
>>>>>>
>>>>>> The function's behavior, however, is more complex: it can
>>>>>> check
>>>>>> with a
>>>>>> driver callback whether the hardware's sequence number
>>>>>> indicates
>>>>>> that
>>>>>> the fence can already be treated as signaled, although the
>>>>>> hardware's /
>>>>>> driver's interrupt handler has not signaled it yet. If that's
>>>>>> the
>>>>>> case,
>>>>>> the function also signals the fence.
>>>>>>
>>>>>> (Presumably) this has caused a bug in Nouveau (unknown
>>>>>> commit),
>>>>>> where
>>>>>> nouveau_fence_done() uses the function to check a fence,
>>>>>> which
>>>>>> causes a
>>>>>> race.
>>>>>>
>>>>>> Give the function a more obvious name.
>>>>> This is just my personal view on this, but I find the new name just
>>>>> as confusing as the old one. It sounds like something is checked, but
>>>>> it's not clear what, and then the fence is forcibly signaled like it
>>>>> would be if you called dma_fence_signal(). Of course, this is
>>>>> clarified by the doc, but given the goal was to make the function
>>>>> name clearly reflect what it does, I'm not convinced it's
>>>>> significantly better.
>>>>>
>>>>> Maybe dma_fence_check_hw_state_and_propagate(), though it might be
>>>>> too long a name. Oh well, feel free to ignore this comment if a
>>>>> majority is fine with the new name.
>>>> Yoa, the name isn't perfect (the perfect name describing the whole
>>>> behavior would be
>>>> dma_fence_check_if_already_signaled_then_check_hardware_state_and_propagate()
>>>> ^^'
>>>>
>>>> My intention here is to have the reader realize "watch out, the fence
>>>> might get signaled here!", since signaling is probably the most
>>>> important event regarding fences; it can race, invoke the callbacks
>>>> and so on.
>>>>
>>>> For details, readers will then check the documentation.
>>>>
>>>> But I'm of course open to seeing if there's a majority for this or
>>>> that name.
>>> how about:
>>>
>>> dma_fence_check_hw_and_signal() ?
>> I don't think that renaming the function is a good idea in the first
>> place.
>>
>> What the function does internally is an implementation detail of the
>> framework.
>>
>> For the code using this function it's completely irrelevant if the
>> function might also signal the fence; what matters for the caller is
>> the returned status of the fence. I think this also goes for the
>> dma_fence_is_signaled() documentation.
> It does obviously matter. As it's currently implemented, a lot of
> important things happen implicitly.
Yeah, but that's ok.
The code that calls this is the consumer of the interface and so shouldn't need to know this. That's why we created the DMA fence framework in the first place.
For the provider side, when a driver or similar implements the interface, the relevant documentation is the dma_fence_ops structure.
> I only see improvement by making things more obvious.
>
> In any case, what would you call a wrapper that just does
> test_bit(IS_SIGNALED, …) ?
Broken, that was very intentionally removed quite shortly after we created the framework.
We have a few cases where implementations do check that for their fences, but consumers should never be allowed to touch such internals.
Regards,
Christian.
>
> P.
>
>> What we should improve is the documentation of the
>> dma_fence_ops->enable_signaling and dma_fence_ops->signaled callbacks.
>> Especially see the comment about reference counts on enable_signaling
>> which is missing on the signaled callback. That is most likely the
>> root cause why nouveau implemented enable_signaling correctly but not
>> the other one.
>>
>> But putting that aside, I think we should make nails with heads and
>> let the framework guarantee that the fences stay alive until they are
>> signaled (one way or another). This completely removes the burden of
>> keeping a reference on unsignaled fences from the drivers /
>> implementations and makes things overall more defensive.
>>
>> Regards,
>> Christian.
>>
>>> P.
>>>
>>>> P.
>>>>
>>>>
>>>>> Regards,
>>>>>
>>>>> Boris
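
For readers following the rename discussion, the helper in question
behaves roughly as sketched below (a simplified rendering of
include/linux/dma-fence.h, not a verbatim copy):

static inline bool dma_fence_is_signaled(struct dma_fence *fence)
{
	/* Fast path: the implementation already signaled the fence. */
	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
		return true;

	/*
	 * Slow path: ask the implementation, via the optional ->signaled()
	 * callback, whether the hardware has already passed this fence's
	 * sequence number. If so, signal the fence right here, which also
	 * runs any registered callbacks in the caller's context.
	 */
	if (fence->ops->signaled && fence->ops->signaled(fence)) {
		dma_fence_signal(fence);
		return true;
	}

	return false;
}

The side effect in the slow path is what the rename discussion above is
about.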
On 01.04.25 12:13, Sumit Garg wrote:
> + MM folks to seek guidance here.
>
> On Thu, Mar 27, 2025 at 09:07:34AM +0100, Jens Wiklander wrote:
>> Hi Sumit,
>>
>> On Tue, Mar 25, 2025 at 8:42 AM Sumit Garg <sumit.garg(a)kernel.org> wrote:
>>>
>>> On Wed, Mar 05, 2025 at 02:04:15PM +0100, Jens Wiklander wrote:
>>>> Add support in the OP-TEE backend driver for dynamic restricted memory
>>>> allocation with FF-A.
>>>>
>>>> The restricted memory pools for dynamically allocated restricted memory
>>>> are instantiated when requested by user-space. This instantiation can
>>>> fail if OP-TEE doesn't support the requested use-case of restricted
>>>> memory.
>>>>
>>>> Restricted memory pools based on a static carveout or dynamic allocation
>>>> can coexist for different use-cases. We use only dynamic allocation with
>>>> FF-A.
>>>>
>>>> Signed-off-by: Jens Wiklander <jens.wiklander(a)linaro.org>
>>>> ---
>>>> drivers/tee/optee/Makefile | 1 +
>>>> drivers/tee/optee/ffa_abi.c | 143 ++++++++++++-
>>>> drivers/tee/optee/optee_private.h | 13 +-
>>>> drivers/tee/optee/rstmem.c | 329 ++++++++++++++++++++++++++++++
>>>> 4 files changed, 483 insertions(+), 3 deletions(-)
>>>> create mode 100644 drivers/tee/optee/rstmem.c
>>>>
>
> <snip>
>
>>>> diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c
>>>> new file mode 100644
>>>> index 000000000000..ea27769934d4
>>>> --- /dev/null
>>>> +++ b/drivers/tee/optee/rstmem.c
>>>> @@ -0,0 +1,329 @@
>>>> +// SPDX-License-Identifier: GPL-2.0-only
>>>> +/*
>>>> + * Copyright (c) 2025, Linaro Limited
>>>> + */
>>>> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>>>> +
>>>> +#include <linux/errno.h>
>>>> +#include <linux/genalloc.h>
>>>> +#include <linux/slab.h>
>>>> +#include <linux/string.h>
>>>> +#include <linux/tee_core.h>
>>>> +#include <linux/types.h>
>>>> +#include "optee_private.h"
>>>> +
>>>> +struct optee_rstmem_cma_pool {
>>>> + struct tee_rstmem_pool pool;
>>>> + struct gen_pool *gen_pool;
>>>> + struct optee *optee;
>>>> + size_t page_count;
>>>> + u16 *end_points;
>>>> + u_int end_point_count;
>>>> + u_int align;
>>>> + refcount_t refcount;
>>>> + u32 use_case;
>>>> + struct tee_shm *rstmem;
>>>> + /* Protects when initializing and tearing down this struct */
>>>> + struct mutex mutex;
>>>> +};
>>>> +
>>>> +static struct optee_rstmem_cma_pool *
>>>> +to_rstmem_cma_pool(struct tee_rstmem_pool *pool)
>>>> +{
>>>> + return container_of(pool, struct optee_rstmem_cma_pool, pool);
>>>> +}
>>>> +
>>>> +static int init_cma_rstmem(struct optee_rstmem_cma_pool *rp)
>>>> +{
>>>> + int rc;
>>>> +
>>>> + rp->rstmem = tee_shm_alloc_cma_phys_mem(rp->optee->ctx, rp->page_count,
>>>> + rp->align);
>>>> + if (IS_ERR(rp->rstmem)) {
>>>> + rc = PTR_ERR(rp->rstmem);
>>>> + goto err_null_rstmem;
>>>> + }
>>>> +
>>>> + /*
>>>> + * TODO unmap the memory range since the physical memory will
>>>> + * become inaccessible after the lend_rstmem() call.
>>>> + */
>>>
>>> What's your plan for this TODO? I think we need a CMA allocator here
>>> which can allocate un-mapped memory such that any cache speculation
>>> won't lead to CPU hangs once the memory restriction comes into the picture.
>>
>> What happens is platform-specific. For some platforms, it might be
>> enough to avoid explicit access. Yes, a CMA allocator with unmapped
>> memory or where memory can be unmapped is one option.
>
> Did you get a chance to enable real memory protection on the RockPi board?
> This will at least ensure that mapped restricted memory without explicit
> access works fine. Otherwise, once people start to enable real
> memory restriction in OP-TEE, there is a chance of random hangs
> due to cache speculation.
>
> MM folks,
>
> Basically what we are trying to achieve here is a "no-map" DT behaviour
> [1] which is rather dynamic in nature. The use-case here is that a memory
> block allocated from CMA can be marked restricted at runtime, where we
> would like Linux not to be able to access it directly or indirectly
> (via cache speculation). Once the memory restriction use-case has
> completed, the memory block can be marked as normal and freed for
> further CMA allocation.
>
> It would be appreciated if you could guide us regarding the appropriate APIs
> to use for unmapping/mapping CMA allocations for this use-case.
Can we get some more information on why that is even required, so we can
decide if that is even the right thing to do? :)
Who would mark the memory block as restricted and for which purpose?
In arch/powerpc/platforms/powernv/memtrace.c we have some arch-specific
code to remove the directmap after alloc_contig_pages(). See
memtrace_alloc_node(). But it's very arch-specific ...
--
Cheers,
David / dhildenb
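
For context, one existing in-tree way of dropping direct-map entries at
runtime is what memfd_secret does through the set_direct_map helpers. A
minimal sketch for a physically contiguous allocation, assuming
CONFIG_ARCH_HAS_SET_DIRECT_MAP (purely illustrative, not necessarily
the recommended answer to the CMA question above):

#include <linux/mm.h>
#include <linux/set_memory.h>
#include <asm/tlbflush.h>

/*
 * Sketch: drop @nr_pages starting at @first from the kernel direct map
 * before the memory is lent to the secure world.
 */
static int sketch_unmap_from_direct_map(struct page *first, unsigned long nr_pages)
{
	unsigned long pfn = page_to_pfn(first);
	unsigned long start = (unsigned long)page_address(first);
	unsigned long i;
	int ret;

	for (i = 0; i < nr_pages; i++) {
		ret = set_direct_map_invalid_noflush(pfn_to_page(pfn + i));
		if (ret)
			return ret;
	}
	flush_tlb_kernel_range(start, start + nr_pages * PAGE_SIZE);
	return 0;
}

/* Sketch: restore the direct map once the restriction is lifted. */
static void sketch_map_back_to_direct_map(struct page *first, unsigned long nr_pages)
{
	unsigned long pfn = page_to_pfn(first);
	unsigned long i;

	for (i = 0; i < nr_pages; i++)
		set_direct_map_default_noflush(pfn_to_page(pfn + i));
}

Whether something along these lines, a memtrace-style arch hook, or a
dedicated unmapped-CMA allocator is the right interface is exactly the
question raised above.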
On Wed, Apr 9, 2025 at 9:20 AM Amirreza Zarrabi
<amirreza.zarrabi(a)oss.qualcomm.com> wrote:
>
>
>
> On 4/9/2025 4:41 PM, Jens Wiklander wrote:
> > Hi Amirreza,
> >
> > On Wed, Apr 9, 2025 at 2:28 AM Amirreza Zarrabi
> > <amirreza.zarrabi(a)oss.qualcomm.com> wrote:
> >>
> >> Hi Jens,
> >>
> >> On 4/8/2025 10:19 PM, Jens Wiklander wrote:
> >>
> >> Hi Amirreza,
> >>
> >> On Fri, Mar 28, 2025 at 3:48 AM Amirreza Zarrabi
> >> <amirreza.zarrabi(a)oss.qualcomm.com> wrote:
> >>
> >> For drivers that can transfer data to the TEE without using shared
> >> memory from the client, it is necessary to receive the user address
> >> directly, bypassing any processing by the TEE subsystem. Introduce
> >> TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INPUT/OUTPUT/INOUT to represent
> >> userspace buffers.
> >>
> >> Signed-off-by: Amirreza Zarrabi <amirreza.zarrabi(a)oss.qualcomm.com>
> >> ---
> >> drivers/tee/tee_core.c | 33 +++++++++++++++++++++++++++++++++
> >> include/linux/tee_drv.h | 6 ++++++
> >> include/uapi/linux/tee.h | 22 ++++++++++++++++------
> >> 3 files changed, 55 insertions(+), 6 deletions(-)
> >>
> >> Is this patch needed now that the QCOMTEE driver supports shared
> >> memory? I prefer keeping changes to the ABI to a minimum.
> >>
> >> Cheers,
> >> Jens
> >>
> >> Unfortunately, this is still required. QTEE supports two types of data transfer:
> >> (1) using UBUF and (2) memory objects. Even with memory object support, some APIs still
> >> expect to receive data using UBUF. For instance, to load a TA, QTEE offers two interfaces:
> >> one where the TA binary is in UBUF and another where the TA binary is in a memory object.
> >
> > Is this a limitation in the QTEE backend driver or on the secure side?
> > Can it be fixed? I don't ask for changes in the ABI to the secure
> > world since I assume you haven't made such changes while this patch
> > set has evolved.
> >
> > Cheers,
> > Jens
>
> The secure-side ABI supports passing data using memcpy to the same
> buffer that contains the message for QTEE, rather than using a memory
> object. Some services tend to use this approach for small data instead
> of allocating a memory object. I have no choice but to expose this support.
Got it, thanks! It's needed.
>
> Throughout the patchset, I have not made any change to the ABI but
> tried to provide support for the memory object in a separate,
> independent commit, distinct from the UBUF.
OK
Cheers,
Jens
>
> Best regards,
> Amir
>
> >
> >>
> >> Best Regards,
> >> Amir
> >>
> >> diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
> >> index 22cc7d624b0c..bc862a11d437 100644
> >> --- a/drivers/tee/tee_core.c
> >> +++ b/drivers/tee/tee_core.c
> >> @@ -404,6 +404,17 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params,
> >> params[n].u.value.b = ip.b;
> >> params[n].u.value.c = ip.c;
> >> break;
> >> + case TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INPUT:
> >> + case TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_OUTPUT:
> >> + case TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INOUT:
> >> + params[n].u.ubuf.uaddr = u64_to_user_ptr(ip.a);
> >> + params[n].u.ubuf.size = ip.b;
> >> +
> >> + if (!access_ok(params[n].u.ubuf.uaddr,
> >> + params[n].u.ubuf.size))
> >> + return -EFAULT;
> >> +
> >> + break;
> >> case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
> >> case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
> >> case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
> >> @@ -472,6 +483,11 @@ static int params_to_user(struct tee_ioctl_param __user *uparams,
> >> put_user(p->u.value.c, &up->c))
> >> return -EFAULT;
> >> break;
> >> + case TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_OUTPUT:
> >> + case TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INOUT:
> >> + if (put_user((u64)p->u.ubuf.size, &up->b))
> >> + return -EFAULT;
> >> + break;
> >> case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
> >> case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
> >> if (put_user((u64)p->u.memref.size, &up->b))
> >> @@ -672,6 +688,13 @@ static int params_to_supp(struct tee_context *ctx,
> >> ip.b = p->u.value.b;
> >> ip.c = p->u.value.c;
> >> break;
> >> + case TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INPUT:
> >> + case TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_OUTPUT:
> >> + case TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INOUT:
> >> + ip.a = (u64)p->u.ubuf.uaddr;
> >> + ip.b = p->u.ubuf.size;
> >> + ip.c = 0;
> >> + break;
> >> case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
> >> case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
> >> case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
> >> @@ -774,6 +797,16 @@ static int params_from_supp(struct tee_param *params, size_t num_params,
> >> p->u.value.b = ip.b;
> >> p->u.value.c = ip.c;
> >> break;
> >> + case TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_OUTPUT:
> >> + case TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INOUT:
> >> + p->u.ubuf.uaddr = u64_to_user_ptr(ip.a);
> >> + p->u.ubuf.size = ip.b;
> >> +
> >> + if (!access_ok(params[n].u.ubuf.uaddr,
> >> + params[n].u.ubuf.size))
> >> + return -EFAULT;
> >> +
> >> + break;
> >> case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
> >> case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
> >> /*
> >> diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
> >> index ce23fd42c5d4..d773f91c6bdd 100644
> >> --- a/include/linux/tee_drv.h
> >> +++ b/include/linux/tee_drv.h
> >> @@ -82,6 +82,11 @@ struct tee_param_memref {
> >> struct tee_shm *shm;
> >> };
> >>
> >> +struct tee_param_ubuf {
> >> + void * __user uaddr;
> >> + size_t size;
> >> +};
> >> +
> >> struct tee_param_value {
> >> u64 a;
> >> u64 b;
> >> @@ -92,6 +97,7 @@ struct tee_param {
> >> u64 attr;
> >> union {
> >> struct tee_param_memref memref;
> >> + struct tee_param_ubuf ubuf;
> >> struct tee_param_value value;
> >> } u;
> >> };
> >> diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h
> >> index d0430bee8292..3e9b1ec5dfde 100644
> >> --- a/include/uapi/linux/tee.h
> >> +++ b/include/uapi/linux/tee.h
> >> @@ -151,6 +151,13 @@ struct tee_ioctl_buf_data {
> >> #define TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT 6
> >> #define TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT 7 /* input and output */
> >>
> >> +/*
> >> + * These define userspace buffer parameters.
> >> + */
> >> +#define TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INPUT 8
> >> +#define TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_OUTPUT 9
> >> +#define TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INOUT 10 /* input and output */
> >> +
> >> /*
> >> * Mask for the type part of the attribute, leaves room for more types
> >> */
> >> @@ -186,14 +193,17 @@ struct tee_ioctl_buf_data {
> >> /**
> >> * struct tee_ioctl_param - parameter
> >> * @attr: attributes
> >> - * @a: if a memref, offset into the shared memory object, else a value parameter
> >> - * @b: if a memref, size of the buffer, else a value parameter
> >> + * @a: if a memref, offset into the shared memory object,
> >> + * else if a ubuf, address of the user buffer,
> >> + * else a value parameter
> >> + * @b: if a memref or ubuf, size of the buffer, else a value parameter
> >> * @c: if a memref, shared memory identifier, else a value parameter
> >> *
> >> - * @attr & TEE_PARAM_ATTR_TYPE_MASK indicates if memref or value is used in
> >> - * the union. TEE_PARAM_ATTR_TYPE_VALUE_* indicates value and
> >> - * TEE_PARAM_ATTR_TYPE_MEMREF_* indicates memref. TEE_PARAM_ATTR_TYPE_NONE
> >> - * indicates that none of the members are used.
> >> + * @attr & TEE_PARAM_ATTR_TYPE_MASK indicates if memref, ubuf, or value is
> >> + * used in the union. TEE_PARAM_ATTR_TYPE_VALUE_* indicates value,
> >> + * TEE_PARAM_ATTR_TYPE_MEMREF_* indicates memref, and TEE_PARAM_ATTR_TYPE_UBUF_*
> >> + * indicates ubuf. TEE_PARAM_ATTR_TYPE_NONE indicates that none of the members
> >> + * are used.
> >> *
> >> * Shared memory is allocated with TEE_IOC_SHM_ALLOC which returns an
> >> * identifier representing the shared memory object. A memref can reference
> >>
> >> --
> >> 2.34.1
> >>
>
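
For illustration, user space would describe such a buffer roughly as
below (a sketch based on the uapi additions in the patch quoted above;
the TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_* constants come from this patch set
and are not in mainline headers yet):

#include <linux/tee.h>
#include <stdint.h>
#include <string.h>

/* Sketch: fill one tee_ioctl_param with a plain user-space buffer. */
static void set_ubuf_input_param(struct tee_ioctl_param *p,
				 const void *buf, size_t len)
{
	memset(p, 0, sizeof(*p));
	p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_UBUF_INPUT;
	p->a = (uintptr_t)buf;	/* user address, handed to the backend as-is */
	p->b = len;		/* buffer size in bytes */
	p->c = 0;		/* unused for UBUF parameters */
}

The parameter then goes into the params array of struct
tee_ioctl_invoke_arg passed through TEE_IOC_INVOKE, just like value and
memref parameters; for OUTPUT/INOUT buffers the backend may update the
size, which params_to_user() in the patch writes back to the b field.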