Linaro-mm-sig
April 2025
linaro-mm-sig@lists.linaro.org
10 participants
17 discussions
[PATCH v6] drm/syncobj: Extend EXPORT_SYNC_FILE for timeline syncobjs
by Rob Clark
From: Rob Clark <robdclark(a)chromium.org>

Add support for exporting a dma_fence fd for a specific point on a
timeline. This is needed for vtest/vpipe[1][2] to implement timeline
syncobj support, as it needs a way to turn a point on a timeline back
into a dma_fence fd. It also closes an odd omission from the syncobj
UAPI.

[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/33433
[2] https://gitlab.freedesktop.org/virgl/virglrenderer/-/merge_requests/805

v2: Add DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE
v3: Add unstaged uabi header hunk
v4: Also handle IMPORT_SYNC_FILE case
v5: Address comments from Dmitry
v6: checkpatch.pl nits

Signed-off-by: Rob Clark <robdclark(a)chromium.org>
Reviewed-by: Christian König <christian.koenig(a)amd.com>
Reviewed-by: Dmitry Osipenko <dmitry.osipenko(a)collabora.com>
---
 drivers/gpu/drm/drm_syncobj.c | 47 +++++++++++++++++++++++++++--------
 include/uapi/drm/drm.h        |  4 +++
 2 files changed, 41 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 4f2ab8a7b50f..636cd83ca29e 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -741,7 +741,7 @@ static int drm_syncobj_fd_to_handle(struct drm_file *file_private,
 }
 
 static int drm_syncobj_import_sync_file_fence(struct drm_file *file_private,
-					      int fd, int handle)
+					      int fd, int handle, u64 point)
 {
 	struct dma_fence *fence = sync_file_get_fence(fd);
 	struct drm_syncobj *syncobj;
@@ -755,14 +755,24 @@ static int drm_syncobj_import_sync_file_fence(struct drm_file *file_private,
 		return -ENOENT;
 	}
 
-	drm_syncobj_replace_fence(syncobj, fence);
+	if (point) {
+		struct dma_fence_chain *chain = dma_fence_chain_alloc();
+
+		if (!chain)
+			return -ENOMEM;
+
+		drm_syncobj_add_point(syncobj, chain, fence, point);
+	} else {
+		drm_syncobj_replace_fence(syncobj, fence);
+	}
+
 	dma_fence_put(fence);
 	drm_syncobj_put(syncobj);
 	return 0;
 }
 
 static int drm_syncobj_export_sync_file(struct drm_file *file_private,
-					int handle, int *p_fd)
+					int handle, u64 point, int *p_fd)
 {
 	int ret;
 	struct dma_fence *fence;
@@ -772,7 +782,7 @@ static int drm_syncobj_export_sync_file(struct drm_file *file_private,
 	if (fd < 0)
 		return fd;
 
-	ret = drm_syncobj_find_fence(file_private, handle, 0, 0, &fence);
+	ret = drm_syncobj_find_fence(file_private, handle, point, 0, &fence);
 	if (ret)
 		goto err_put_fd;
 
@@ -869,6 +879,9 @@ drm_syncobj_handle_to_fd_ioctl(struct drm_device *dev, void *data,
			       struct drm_file *file_private)
 {
 	struct drm_syncobj_handle *args = data;
+	unsigned int valid_flags = DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE |
+				   DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE;
+	u64 point = 0;
 
 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
 		return -EOPNOTSUPP;
@@ -876,13 +889,18 @@ drm_syncobj_handle_to_fd_ioctl(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
-	if (args->flags != 0 &&
-	    args->flags != DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE)
+	if (args->flags & ~valid_flags)
 		return -EINVAL;
 
+	if (args->flags & DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE)
+		point = args->point;
+
 	if (args->flags & DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE)
 		return drm_syncobj_export_sync_file(file_private, args->handle,
-						    &args->fd);
+						    point, &args->fd);
+
+	if (args->point)
+		return -EINVAL;
 
 	return drm_syncobj_handle_to_fd(file_private, args->handle,
 					&args->fd);
@@ -893,6 +911,9 @@ drm_syncobj_fd_to_handle_ioctl(struct drm_device *dev, void *data,
			       struct drm_file *file_private)
 {
 	struct drm_syncobj_handle *args = data;
+	unsigned int valid_flags = DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE |
+				   DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE;
+	u64 point = 0;
 
 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
 		return -EOPNOTSUPP;
@@ -900,14 +921,20 @@ drm_syncobj_fd_to_handle_ioctl(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
-	if (args->flags != 0 &&
-	    args->flags != DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE)
+	if (args->flags & ~valid_flags)
 		return -EINVAL;
 
+	if (args->flags & DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE)
+		point = args->point;
+
 	if (args->flags & DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE)
 		return drm_syncobj_import_sync_file_fence(file_private, args->fd,
-							  args->handle);
+							  args->handle,
+							  point);
+
+	if (args->point)
+		return -EINVAL;
 
 	return drm_syncobj_fd_to_handle(file_private, args->fd,
 					&args->handle);
diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
index 7fba37b94401..e63a71d3c607 100644
--- a/include/uapi/drm/drm.h
+++ b/include/uapi/drm/drm.h
@@ -905,13 +905,17 @@ struct drm_syncobj_destroy {
 };
 
 #define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE (1 << 0)
+#define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE (1 << 1)
 #define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE (1 << 0)
+#define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE (1 << 1)
 
 struct drm_syncobj_handle {
 	__u32 handle;
 	__u32 flags;
 
 	__s32 fd;
 	__u32 pad;
+
+	__u64 point;
 };
 
 struct drm_syncobj_transfer {
-- 
2.49.0
3 days, 21 hours · 3 · 4 · 0 · 0
[PATCH v7] drm/syncobj: Extend EXPORT_SYNC_FILE for timeline syncobjs
by Rob Clark
From: Rob Clark <robdclark(a)chromium.org>

Add support for exporting a dma_fence fd for a specific point on a
timeline. This is needed for vtest/vpipe[1][2] to implement timeline
syncobj support, as it needs a way to turn a point on a timeline back
into a dma_fence fd. It also closes an odd omission from the syncobj
UAPI.

[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/33433
[2] https://gitlab.freedesktop.org/virgl/virglrenderer/-/merge_requests/805

v2: Add DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE
v3: Add unstaged uabi header hunk
v4: Also handle IMPORT_SYNC_FILE case
v5: Address comments from Dmitry
v6: checkpatch.pl nits
v7: Add check for DRIVER_SYNCOBJ_TIMELINE

Signed-off-by: Rob Clark <robdclark(a)chromium.org>
Reviewed-by: Christian König <christian.koenig(a)amd.com>
Reviewed-by: Dmitry Osipenko <dmitry.osipenko(a)collabora.com>
---
 drivers/gpu/drm/drm_syncobj.c | 53 ++++++++++++++++++++++++++++-------
 include/uapi/drm/drm.h        |  4 +++
 2 files changed, 47 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 4f2ab8a7b50f..3e41461eb9d6 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -741,7 +741,7 @@ static int drm_syncobj_fd_to_handle(struct drm_file *file_private,
 }
 
 static int drm_syncobj_import_sync_file_fence(struct drm_file *file_private,
-					      int fd, int handle)
+					      int fd, int handle, u64 point)
 {
 	struct dma_fence *fence = sync_file_get_fence(fd);
 	struct drm_syncobj *syncobj;
@@ -755,14 +755,24 @@ static int drm_syncobj_import_sync_file_fence(struct drm_file *file_private,
 		return -ENOENT;
 	}
 
-	drm_syncobj_replace_fence(syncobj, fence);
+	if (point) {
+		struct dma_fence_chain *chain = dma_fence_chain_alloc();
+
+		if (!chain)
+			return -ENOMEM;
+
+		drm_syncobj_add_point(syncobj, chain, fence, point);
+	} else {
+		drm_syncobj_replace_fence(syncobj, fence);
+	}
+
 	dma_fence_put(fence);
 	drm_syncobj_put(syncobj);
 	return 0;
 }
 
 static int drm_syncobj_export_sync_file(struct drm_file *file_private,
-					int handle, int *p_fd)
+					int handle, u64 point, int *p_fd)
 {
 	int ret;
 	struct dma_fence *fence;
@@ -772,7 +782,7 @@ static int drm_syncobj_export_sync_file(struct drm_file *file_private,
 	if (fd < 0)
 		return fd;
 
-	ret = drm_syncobj_find_fence(file_private, handle, 0, 0, &fence);
+	ret = drm_syncobj_find_fence(file_private, handle, point, 0, &fence);
 	if (ret)
 		goto err_put_fd;
 
@@ -869,6 +879,9 @@ drm_syncobj_handle_to_fd_ioctl(struct drm_device *dev, void *data,
			       struct drm_file *file_private)
 {
 	struct drm_syncobj_handle *args = data;
+	unsigned int valid_flags = DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE |
+				   DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE;
+	u64 point = 0;
 
 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
 		return -EOPNOTSUPP;
@@ -876,13 +889,21 @@ drm_syncobj_handle_to_fd_ioctl(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
-	if (args->flags != 0 &&
-	    args->flags != DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE)
+	if (args->flags & ~valid_flags)
 		return -EINVAL;
 
+	if (args->flags & DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE) {
+		if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
+			return -EOPNOTSUPP;
+		point = args->point;
+	}
+
 	if (args->flags & DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE)
 		return drm_syncobj_export_sync_file(file_private, args->handle,
-						    &args->fd);
+						    point, &args->fd);
+
+	if (args->point)
+		return -EINVAL;
 
 	return drm_syncobj_handle_to_fd(file_private, args->handle,
 					&args->fd);
@@ -893,6 +914,9 @@ drm_syncobj_fd_to_handle_ioctl(struct drm_device *dev, void *data,
			       struct drm_file *file_private)
 {
 	struct drm_syncobj_handle *args = data;
+	unsigned int valid_flags = DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE |
+				   DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE;
+	u64 point = 0;
 
 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
 		return -EOPNOTSUPP;
@@ -900,14 +924,23 @@ drm_syncobj_fd_to_handle_ioctl(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
-	if (args->flags != 0 &&
-	    args->flags != DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE)
+	if (args->flags & ~valid_flags)
 		return -EINVAL;
 
+	if (args->flags & DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE) {
+		if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
+			return -EOPNOTSUPP;
+		point = args->point;
+	}
+
 	if (args->flags & DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE)
 		return drm_syncobj_import_sync_file_fence(file_private, args->fd,
-							  args->handle);
+							  args->handle,
+							  point);
+
+	if (args->point)
+		return -EINVAL;
 
 	return drm_syncobj_fd_to_handle(file_private, args->fd,
 					&args->handle);
diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
index 7fba37b94401..e63a71d3c607 100644
--- a/include/uapi/drm/drm.h
+++ b/include/uapi/drm/drm.h
@@ -905,13 +905,17 @@ struct drm_syncobj_destroy {
 };
 
 #define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE (1 << 0)
+#define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE (1 << 1)
 #define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE (1 << 0)
+#define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE (1 << 1)
 
 struct drm_syncobj_handle {
 	__u32 handle;
 	__u32 flags;
 
 	__s32 fd;
 	__u32 pad;
+
+	__u64 point;
 };
 
 struct drm_syncobj_transfer {
-- 
2.49.0
4 days, 15 hours · 1 · 0 · 0 · 0
[PATCH v5] drm/syncobj: Extend EXPORT_SYNC_FILE for timeline syncobjs
by Rob Clark
From: Rob Clark <robdclark(a)chromium.org>

Add support for exporting a dma_fence fd for a specific point on a
timeline. This is needed for vtest/vpipe[1][2] to implement timeline
syncobj support, as it needs a way to turn a point on a timeline back
into a dma_fence fd. It also closes an odd omission from the syncobj
UAPI.

[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/33433
[2] https://gitlab.freedesktop.org/virgl/virglrenderer/-/merge_requests/805

v2: Add DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE
v3: Add unstaged uabi header hunk
v4: Also handle IMPORT_SYNC_FILE case
v5: Address comments from Dmitry

Signed-off-by: Rob Clark <robdclark(a)chromium.org>
---
 drivers/gpu/drm/drm_syncobj.c | 45 +++++++++++++++++++++++++++--------
 include/uapi/drm/drm.h        |  4 ++++
 2 files changed, 39 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 4f2ab8a7b50f..b0a4c58fe726 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -741,7 +741,7 @@ static int drm_syncobj_fd_to_handle(struct drm_file *file_private,
 }
 
 static int drm_syncobj_import_sync_file_fence(struct drm_file *file_private,
-					      int fd, int handle)
+					      int fd, int handle, u64 point)
 {
 	struct dma_fence *fence = sync_file_get_fence(fd);
 	struct drm_syncobj *syncobj;
@@ -755,14 +755,22 @@ static int drm_syncobj_import_sync_file_fence(struct drm_file *file_private,
 		return -ENOENT;
 	}
 
-	drm_syncobj_replace_fence(syncobj, fence);
+	if (point) {
+		struct dma_fence_chain *chain = dma_fence_chain_alloc();
+		if (!chain)
+			return -ENOMEM;
+		drm_syncobj_add_point(syncobj, chain, fence, point);
+	} else {
+		drm_syncobj_replace_fence(syncobj, fence);
+	}
+
 	dma_fence_put(fence);
 	drm_syncobj_put(syncobj);
 	return 0;
 }
 
 static int drm_syncobj_export_sync_file(struct drm_file *file_private,
-					int handle, int *p_fd)
+					int handle, u64 point, int *p_fd)
 {
 	int ret;
 	struct dma_fence *fence;
@@ -772,7 +780,7 @@ static int drm_syncobj_export_sync_file(struct drm_file *file_private,
 	if (fd < 0)
 		return fd;
 
-	ret = drm_syncobj_find_fence(file_private, handle, 0, 0, &fence);
+	ret = drm_syncobj_find_fence(file_private, handle, point, 0, &fence);
 	if (ret)
 		goto err_put_fd;
 
@@ -869,6 +877,9 @@ drm_syncobj_handle_to_fd_ioctl(struct drm_device *dev, void *data,
			       struct drm_file *file_private)
 {
 	struct drm_syncobj_handle *args = data;
+	unsigned valid_flags = DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE |
+			       DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE;
+	u64 point = 0;
 
 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
 		return -EOPNOTSUPP;
@@ -876,13 +887,18 @@ drm_syncobj_handle_to_fd_ioctl(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
-	if (args->flags != 0 &&
-	    args->flags != DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE)
+	if (args->flags & ~valid_flags)
 		return -EINVAL;
 
+	if (args->flags & DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE)
+		point = args->point;
+
 	if (args->flags & DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE)
 		return drm_syncobj_export_sync_file(file_private, args->handle,
-						    &args->fd);
+						    point, &args->fd);
+
+	if (args->point)
+		return -EINVAL;
 
 	return drm_syncobj_handle_to_fd(file_private, args->handle,
 					&args->fd);
@@ -893,6 +909,9 @@ drm_syncobj_fd_to_handle_ioctl(struct drm_device *dev, void *data,
			       struct drm_file *file_private)
 {
 	struct drm_syncobj_handle *args = data;
+	unsigned valid_flags = DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE |
+			       DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE;
+	u64 point = 0;
 
 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
 		return -EOPNOTSUPP;
@@ -900,14 +919,20 @@ drm_syncobj_fd_to_handle_ioctl(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
-	if (args->flags != 0 &&
-	    args->flags != DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE)
+	if (args->flags & ~valid_flags)
 		return -EINVAL;
 
+	if (args->flags & DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE)
+		point = args->point;
+
 	if (args->flags & DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE)
 		return drm_syncobj_import_sync_file_fence(file_private, args->fd,
-							  args->handle);
+							  args->handle,
+							  point);
+
+	if (args->point)
+		return -EINVAL;
 
 	return drm_syncobj_fd_to_handle(file_private, args->fd,
 					&args->handle);
diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
index 7fba37b94401..e63a71d3c607 100644
--- a/include/uapi/drm/drm.h
+++ b/include/uapi/drm/drm.h
@@ -905,13 +905,17 @@ struct drm_syncobj_destroy {
 };
 
 #define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE (1 << 0)
+#define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE (1 << 1)
 #define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE (1 << 0)
+#define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE (1 << 1)
 
 struct drm_syncobj_handle {
 	__u32 handle;
 	__u32 flags;
 
 	__s32 fd;
 	__u32 pad;
+
+	__u64 point;
 };
 
 struct drm_syncobj_transfer {
-- 
2.49.0
4 days, 21 hours · 3 · 3 · 0 · 0
Re: [PATCH v6 06/10] tee: new ioctl to a register tee_shm from a dmabuf file descriptor
by Jens Wiklander
On Tue, Apr 1, 2025 at 10:46 AM Sumit Garg <sumit.garg(a)kernel.org> wrote:
>
> On Tue, Mar 25, 2025 at 12:17:20PM +0100, Jens Wiklander wrote:
> > Hi Sumit,
> >
> > On Tue, Mar 25, 2025 at 7:50 AM Sumit Garg <sumit.garg(a)kernel.org> wrote:
> > >
> > > Hi Jens,
> > >
> > > On Wed, Mar 05, 2025 at 02:04:12PM +0100, Jens Wiklander wrote:
> > > > From: Etienne Carriere <etienne.carriere(a)linaro.org>
> > > >
> > > > Enable userspace to create a tee_shm object that refers to a dmabuf
> > > > reference.
> > > >
> > > > Userspace registers the dmabuf file descriptor as in a tee_shm object.
> > > > The registration is completed with a tee_shm file descriptor returned to
> > > > userspace.
> > > >
> > > > Userspace is free to close the dmabuf file descriptor now since all the
> > > > resources are now held via the tee_shm object.
> > > >
> > > > Closing the tee_shm file descriptor will release all resources used by the
> > > > tee_shm object.
> > > >
> > > > This change only support dmabuf references that relates to physically
> > > > contiguous memory buffers.
>
> Let's try to reframe this commit message to say that new ioctl allows to
> register DMA buffers allocated from restricted/protected heaps with TEE
> subsystem.
>
> > > >
> > > > New tee_shm flag to identify tee_shm objects built from a registered
> > > > dmabuf, TEE_SHM_DMA_BUF.
> > > >
> > > > Signed-off-by: Etienne Carriere <etienne.carriere(a)linaro.org>
> > > > Signed-off-by: Olivier Masse <olivier.masse(a)nxp.com>
> > > > Signed-off-by: Jens Wiklander <jens.wiklander(a)linaro.org>
> > > > ---
> > > >  drivers/tee/tee_core.c    | 145 ++++++++++++++++++++++++++-----------
> > > >  drivers/tee/tee_private.h |   1 +
> > > >  drivers/tee/tee_shm.c     | 146 ++++++++++++++++++++++++++++++++++++--
> > > >  include/linux/tee_core.h  |   1 +
> > > >  include/linux/tee_drv.h   |  10 +++
> > > >  include/uapi/linux/tee.h  |  29 ++++++++
> > > >  6 files changed, 288 insertions(+), 44 deletions(-)
> > > >
> > >
> > > I am still trying to find if we really need a separate IOCTL to register
> > > DMA heap with TEE subsystem. Can't we initialize tee_shm as a member of
> > > struct tee_heap_buffer in tee_dma_heap_alloc() where the allocation
> > > happens?
> >
> > No, that's not possible since we don't have a tee_context availble so
> > we can't assign an ID that userspace can use for this shm object.
> >
> > We could add new attribute types TEE_IOCTL_PARAM_ATTR_TYPE_MEMFD_*,
> > but it's a bit more complicated than that since we'd also need to
> > update for the life cycle.
>
> Okay, that's fair enough. Lets take this new IOCTL approach then.
>
> >
> > > We can always find a reference back to tee_shm object from DMA
> > > buffer.
> >
> > Yes
> >
> > Cheers,
> > Jens
> >
> > >
> > > -Sumit
> > >
> > > > diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
> > > > index 685afcaa3ea1..3a71643766d5 100644
> > > > --- a/drivers/tee/tee_core.c
> > > > +++ b/drivers/tee/tee_core.c
> > > > @@ -353,6 +353,103 @@ tee_ioctl_shm_register(struct tee_context *ctx,
> > > >  	return ret;
> > > >  }
> > > >
> > > > +static int
> > > > +tee_ioctl_shm_register_fd(struct tee_context *ctx,
> > > > +			  struct tee_ioctl_shm_register_fd_data __user *udata)
> > > > +{
> > > > +	struct tee_ioctl_shm_register_fd_data data;
> > > > +	struct tee_shm *shm;
> > > > +	long ret;
> > > > +
> > > > +	if (copy_from_user(&data, udata, sizeof(data)))
> > > > +		return -EFAULT;
> > > > +
> > > > +	/* Currently no input flags are supported */
> > > > +	if (data.flags)
> > > > +		return -EINVAL;
> > > > +
> > > > +	shm = tee_shm_register_fd(ctx, data.fd);
> > > > +	if (IS_ERR(shm))
> > > > +		return -EINVAL;
> > > > +
> > > > +	data.id = shm->id;
> > > > +	data.flags = shm->flags;
> > > > +	data.size = shm->size;
> > > > +
> > > > +	if (copy_to_user(udata, &data, sizeof(data)))
> > > > +		ret = -EFAULT;
> > > > +	else
> > > > +		ret = tee_shm_get_fd(shm);
> > > > +
> > > > +	/*
> > > > +	 * When user space closes the file descriptor the shared memory
> > > > +	 * should be freed or if tee_shm_get_fd() failed then it will
> > > > +	 * be freed immediately.
> > > > +	 */
> > > > +	tee_shm_put(shm);
> > > > +	return ret;
> > > > +}
> > > > +
> > > > +static int param_from_user_memref(struct tee_context *ctx,
> > > > +				  struct tee_param_memref *memref,
> > > > +				  struct tee_ioctl_param *ip)
> > > > +{
> > > > +	struct tee_shm *shm;
> > > > +	size_t offs = 0;
> > > > +
> > > > +	/*
> > > > +	 * If a NULL pointer is passed to a TA in the TEE,
> > > > +	 * the ip.c IOCTL parameters is set to TEE_MEMREF_NULL
> > > > +	 * indicating a NULL memory reference.
> > > > +	 */
> > > > +	if (ip->c != TEE_MEMREF_NULL) {
> > > > +		/*
> > > > +		 * If we fail to get a pointer to a shared
> > > > +		 * memory object (and increase the ref count)
> > > > +		 * from an identifier we return an error. All
> > > > +		 * pointers that has been added in params have
> > > > +		 * an increased ref count. It's the callers
> > > > +		 * responibility to do tee_shm_put() on all
> > > > +		 * resolved pointers.
> > > > +		 */
> > > > +		shm = tee_shm_get_from_id(ctx, ip->c);
> > > > +		if (IS_ERR(shm))
> > > > +			return PTR_ERR(shm);
> > > > +
> > > > +		/*
> > > > +		 * Ensure offset + size does not overflow
> > > > +		 * offset and does not overflow the size of
> > > > +		 * the referred shared memory object.
> > > > +		 */
> > > > +		if ((ip->a + ip->b) < ip->a ||
> > > > +		    (ip->a + ip->b) > shm->size) {
> > > > +			tee_shm_put(shm);
> > > > +			return -EINVAL;
> > > > +		}
> > > > +
> > > > +		if (shm->flags & TEE_SHM_DMA_BUF) {
>
> This check is already done within tee_shm_get_parent_shm(), is it
> redundant here?

Yes, I'll remove it.

> > > > +			struct tee_shm *parent_shm;
> > > > +
> > > > +			parent_shm = tee_shm_get_parent_shm(shm, &offs);
> > > > +			if (parent_shm) {
> > > > +				tee_shm_put(shm);
> > > > +				shm = parent_shm;
> > > > +			}
> > > > +		}
> > > > +	} else if (ctx->cap_memref_null) {
> > > > +		/* Pass NULL pointer to OP-TEE */
> > > > +		shm = NULL;
> > > > +	} else {
> > > > +		return -EINVAL;
> > > > +	}
> > > > +
> > > > +	memref->shm_offs = ip->a + offs;
> > > > +	memref->size = ip->b;
> > > > +	memref->shm = shm;
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > >  static int params_from_user(struct tee_context *ctx, struct tee_param *params,
> > > >  			    size_t num_params,
> > > >  			    struct tee_ioctl_param __user *uparams)
> > > > @@ -360,8 +457,8 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params,
> > > >  	size_t n;
> > > >
> > > >  	for (n = 0; n < num_params; n++) {
> > > > -		struct tee_shm *shm;
> > > >  		struct tee_ioctl_param ip;
> > > > +		int rc;
> > > >
> > > >  		if (copy_from_user(&ip, uparams + n, sizeof(ip)))
> > > >  			return -EFAULT;
> > > > @@ -384,45 +481,10 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params,
> > > >  		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
> > > >  		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
> > > >  		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
> > > > -			/*
> > > > -			 * If a NULL pointer is passed to a TA in the TEE,
> > > > -			 * the ip.c IOCTL parameters is set to TEE_MEMREF_NULL
> > > > -			 * indicating a NULL memory reference.
> > > > -			 */
> > > > -			if (ip.c != TEE_MEMREF_NULL) {
> > > > -				/*
> > > > -				 * If we fail to get a pointer to a shared
> > > > -				 * memory object (and increase the ref count)
> > > > -				 * from an identifier we return an error. All
> > > > -				 * pointers that has been added in params have
> > > > -				 * an increased ref count. It's the callers
> > > > -				 * responibility to do tee_shm_put() on all
> > > > -				 * resolved pointers.
> > > > -				 */
> > > > -				shm = tee_shm_get_from_id(ctx, ip.c);
> > > > -				if (IS_ERR(shm))
> > > > -					return PTR_ERR(shm);
> > > > -
> > > > -				/*
> > > > -				 * Ensure offset + size does not overflow
> > > > -				 * offset and does not overflow the size of
> > > > -				 * the referred shared memory object.
> > > > -				 */
> > > > -				if ((ip.a + ip.b) < ip.a ||
> > > > -				    (ip.a + ip.b) > shm->size) {
> > > > -					tee_shm_put(shm);
> > > > -					return -EINVAL;
> > > > -				}
> > > > -			} else if (ctx->cap_memref_null) {
> > > > -				/* Pass NULL pointer to OP-TEE */
> > > > -				shm = NULL;
> > > > -			} else {
> > > > -				return -EINVAL;
> > > > -			}
> > > > -
> > > > -			params[n].u.memref.shm_offs = ip.a;
> > > > -			params[n].u.memref.size = ip.b;
> > > > -			params[n].u.memref.shm = shm;
> > > > +			rc = param_from_user_memref(ctx, &params[n].u.memref,
> > > > +						    &ip);
> > > > +			if (rc)
> > > > +				return rc;
>
> This looks more of a refactoring, can we split it as a separate commit?

Sure

> > > >  			break;
> > > >  		default:
> > > >  			/* Unknown attribute */
> > > > @@ -827,6 +889,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
> > > >  		return tee_ioctl_shm_alloc(ctx, uarg);
> > > >  	case TEE_IOC_SHM_REGISTER:
> > > >  		return tee_ioctl_shm_register(ctx, uarg);
> > > > +	case TEE_IOC_SHM_REGISTER_FD:
> > > > +		return tee_ioctl_shm_register_fd(ctx, uarg);
> > > >  	case TEE_IOC_OPEN_SESSION:
> > > >  		return tee_ioctl_open_session(ctx, uarg);
> > > >  	case TEE_IOC_INVOKE:
> > > > @@ -1288,3 +1352,4 @@ MODULE_AUTHOR("Linaro");
> > > >  MODULE_DESCRIPTION("TEE Driver");
> > > >  MODULE_VERSION("1.0");
> > > >  MODULE_LICENSE("GPL v2");
> > > > +MODULE_IMPORT_NS("DMA_BUF");
> > > > diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h
> > > > index 6c6ff5d5eed2..aad7f6c7e0f0 100644
> > > > --- a/drivers/tee/tee_private.h
> > > > +++ b/drivers/tee/tee_private.h
> > > > @@ -24,6 +24,7 @@ void teedev_ctx_put(struct tee_context *ctx);
> > > >  struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size);
> > > >  struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx,
> > > >  					  unsigned long addr, size_t length);
> > > > +struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs);
> > > >
> > > >  int tee_heap_update_from_dma_buf(struct tee_device *teedev,
> > > >  				 struct dma_buf *dmabuf, size_t *offset,
> > > > diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
> > > > index daf6e5cfd59a..8b79918468b5 100644
> > > > --- a/drivers/tee/tee_shm.c
> > > > +++ b/drivers/tee/tee_shm.c
> > > > @@ -4,6 +4,7 @@
> > > >   */
> > > >  #include <linux/anon_inodes.h>
> > > >  #include <linux/device.h>
> > > > +#include <linux/dma-buf.h>
> > > >  #include <linux/idr.h>
> > > >  #include <linux/io.h>
> > > >  #include <linux/mm.h>
> > > > @@ -15,6 +16,16 @@
> > > >  #include <linux/highmem.h>
> > > >  #include "tee_private.h"
> > > >
> > > > +/* extra references appended to shm object for registered shared memory */
> > > > +struct tee_shm_dmabuf_ref {
> > > > +	struct tee_shm shm;
> > > > +	size_t offset;
> > > > +	struct dma_buf *dmabuf;
> > > > +	struct dma_buf_attachment *attach;
> > > > +	struct sg_table *sgt;
> > > > +	struct tee_shm *parent_shm;
> > > > +};
> > > > +
> > > >  static void shm_put_kernel_pages(struct page **pages, size_t page_count)
> > > >  {
> > > >  	size_t n;
> > > > @@ -45,7 +56,23 @@ static void release_registered_pages(struct tee_shm *shm)
> > > >
> > > >  static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)
> > > >  {
> > > > -	if (shm->flags & TEE_SHM_POOL) {
> > > > +	struct tee_shm *parent_shm = NULL;
> > > > +	void *p = shm;
> > > > +
> > > > +	if (shm->flags & TEE_SHM_DMA_BUF) {
> > > > +		struct tee_shm_dmabuf_ref *ref;
> > > > +
> > > > +		ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
> > > > +		parent_shm = ref->parent_shm;
> > > > +		p = ref;
> > > > +		if (ref->attach) {
> > > > +			dma_buf_unmap_attachment(ref->attach, ref->sgt,
> > > > +						 DMA_BIDIRECTIONAL);
> > > > +
> > > > +			dma_buf_detach(ref->dmabuf, ref->attach);
> > > > +		}
> > > > +		dma_buf_put(ref->dmabuf);
> > > > +	} else if (shm->flags & TEE_SHM_POOL) {
> > > >  		teedev->pool->ops->free(teedev->pool, shm);
> > > >  	} else if (shm->flags & TEE_SHM_DYNAMIC) {
> > > >  		int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
> > > > @@ -57,9 +84,10 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)
> > > >  		release_registered_pages(shm);
> > > >  	}
> > > >
> > > > -	teedev_ctx_put(shm->ctx);
> > > > +	if (shm->ctx)
> > > > +		teedev_ctx_put(shm->ctx);
> > > >
> > > > -	kfree(shm);
> > > > +	kfree(p);
> > > >
> > > >  	tee_device_put(teedev);
> > > >  }
> > > > @@ -169,7 +197,7 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size)
> > > >   * tee_client_invoke_func(). The memory allocated is later freed with a
> > > >   * call to tee_shm_free().
> > > >   *
> > > > - * @returns a pointer to 'struct tee_shm'
> > > > + * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
> > > >   */
> > > >  struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
> > > >  {
> > > > @@ -179,6 +207,116 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf);
> > > >
> > > > +struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd)
> > > > +{
> > > > +	struct tee_shm_dmabuf_ref *ref;
> > > > +	int rc;
> > > > +
> > > > +	if (!tee_device_get(ctx->teedev))
> > > > +		return ERR_PTR(-EINVAL);
> > > > +
> > > > +	teedev_ctx_get(ctx);
> > > > +
> > > > +	ref = kzalloc(sizeof(*ref), GFP_KERNEL);
> > > > +	if (!ref) {
> > > > +		rc = -ENOMEM;
> > > > +		goto err_put_tee;
> > > > +	}
> > > > +
> > > > +	refcount_set(&ref->shm.refcount, 1);
> > > > +	ref->shm.ctx = ctx;
> > > > +	ref->shm.id = -1;
> > > > +	ref->shm.flags = TEE_SHM_DMA_BUF;
> > > > +
> > > > +	ref->dmabuf = dma_buf_get(fd);
> > > > +	if (IS_ERR(ref->dmabuf)) {
> > > > +		rc = PTR_ERR(ref->dmabuf);
> > > > +		goto err_kfree_ref;
> > > > +	}
> > > > +
> > > > +	rc = tee_heap_update_from_dma_buf(ctx->teedev, ref->dmabuf,
> > > > +					  &ref->offset, &ref->shm,
> > > > +					  &ref->parent_shm);
> > > > +	if (!rc)
> > > > +		goto out;
> > > > +	if (rc != -EINVAL)
> > > > +		goto err_put_dmabuf;
> > > > +
> > > > +	ref->attach = dma_buf_attach(ref->dmabuf, &ctx->teedev->dev);
> > > > +	if (IS_ERR(ref->attach)) {
> > > > +		rc = PTR_ERR(ref->attach);
> > > > +		goto err_put_dmabuf;
> > > > +	}
> > > > +
> > > > +	ref->sgt = dma_buf_map_attachment(ref->attach, DMA_BIDIRECTIONAL);
> > > > +	if (IS_ERR(ref->sgt)) {
> > > > +		rc = PTR_ERR(ref->sgt);
> > > > +		goto err_detach;
> > > > +	}
> > > > +
> > > > +	if (sg_nents(ref->sgt->sgl) != 1) {
> > > > +		rc = PTR_ERR(ref->sgt->sgl);
> > > > +		goto err_unmap_attachement;
> > > > +	}
> > > > +
> > > > +	ref->shm.paddr = page_to_phys(sg_page(ref->sgt->sgl));
> > > > +	ref->shm.size = ref->sgt->sgl->length;
> > > > +
> > > > +out:
> > > > +	mutex_lock(&ref->shm.ctx->teedev->mutex);
> > > > +	ref->shm.id = idr_alloc(&ref->shm.ctx->teedev->idr, &ref->shm,
> > > > +				1, 0, GFP_KERNEL);
> > > > +	mutex_unlock(&ref->shm.ctx->teedev->mutex);
> > > > +	if (ref->shm.id < 0) {
> > > > +		rc = ref->shm.id;
> > > > +		if (ref->attach)
> > > > +			goto err_unmap_attachement;
> > > > +		goto err_put_dmabuf;
> > > > +	}
> > > > +
> > > > +	return &ref->shm;
> > > > +
> > > > +err_unmap_attachement:
> > > > +	dma_buf_unmap_attachment(ref->attach, ref->sgt, DMA_BIDIRECTIONAL);
> > > > +err_detach:
> > > > +	dma_buf_detach(ref->dmabuf, ref->attach);
> > > > +err_put_dmabuf:
> > > > +	dma_buf_put(ref->dmabuf);
> > > > +err_kfree_ref:
> > > > +	kfree(ref);
> > > > +err_put_tee:
> > > > +	teedev_ctx_put(ctx);
> > > > +	tee_device_put(ctx->teedev);
> > > > +
> > > > +	return ERR_PTR(rc);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(tee_shm_register_fd);
> > > > +
> > > > +struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs)
> > > > +{
> > > > +	struct tee_shm *parent_shm = NULL;
> > > > +
> > > > +	if (shm->flags & TEE_SHM_DMA_BUF) {
> > > > +		struct tee_shm_dmabuf_ref *ref;
> > > > +
> > > > +		ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
> > > > +		if (ref->parent_shm) {
> > > > +			/*
> > > > +			 * the shm already has one reference to
> > > > +			 * ref->parent_shm so we should be clear of 0.
> > > > +			 * We're getting another reference since the caller
> > > > +			 * of this function expects to put the returned
> > > > +			 * parent_shm when it's done with it.
>
> This seems a bit complicated, can we rather inline this API as I can't
> see any other current user?

OK

Cheers,
Jens

> -Sumit
>
> > > > +			 */
> > > > +			parent_shm = ref->parent_shm;
> > > > +			refcount_inc(&parent_shm->refcount);
> > > > +			*offs = ref->offset;
> > > > +		}
> > > > +	}
> > > > +
> > > > +	return parent_shm;
> > > > +}
> > > > +
> > > >  /**
> > > >   * tee_shm_alloc_priv_buf() - Allocate shared memory for a privately shared
> > > >   * kernel buffer
> > > > diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h
> > > > index 16ef078247ae..6bd833b6d0e1 100644
> > > > --- a/include/linux/tee_core.h
> > > > +++ b/include/linux/tee_core.h
> > > > @@ -28,6 +28,7 @@
> > > >  #define TEE_SHM_USER_MAPPED	BIT(1)  /* Memory mapped in user space */
> > > >  #define TEE_SHM_POOL		BIT(2)  /* Memory allocated from pool */
> > > >  #define TEE_SHM_PRIV		BIT(3)  /* Memory private to TEE driver */
> > > > +#define TEE_SHM_DMA_BUF		BIT(4)  /* Memory with dma-buf handle */
> > > >
> > > >  #define TEE_DEVICE_FLAG_REGISTERED	0x1
> > > >  #define TEE_MAX_DEV_NAME_LEN		32
> > > > diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
> > > > index a54c203000ed..824f1251de60 100644
> > > > --- a/include/linux/tee_drv.h
> > > > +++ b/include/linux/tee_drv.h
> > > > @@ -116,6 +116,16 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size);
> > > >
struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx, > > > > void *addr, size_t length); > > > > > > > > +/** > > > > + * tee_shm_register_fd() - Register shared memory from file descriptor > > > > + * > > > > + * @ctx: Context that allocates the shared memory > > > > + * @fd: Shared memory file descriptor reference > > > > + * > > > > + * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure > > > > + */ > > > > +struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd); > > > > + > > > > /** > > > > * tee_shm_free() - Free shared memory > > > > * @shm: Handle to shared memory to free > > > > diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h > > > > index d0430bee8292..1f9a4ac2b211 100644 > > > > --- a/include/uapi/linux/tee.h > > > > +++ b/include/uapi/linux/tee.h > > > > @@ -118,6 +118,35 @@ struct tee_ioctl_shm_alloc_data { > > > > #define TEE_IOC_SHM_ALLOC _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 1, \ > > > > struct tee_ioctl_shm_alloc_data) > > > > > > > > +/** > > > > + * struct tee_ioctl_shm_register_fd_data - Shared memory registering argument > > > > + * @fd: [in] File descriptor identifying the shared memory > > > > + * @size: [out] Size of shared memory to allocate > > > > + * @flags: [in] Flags to/from allocation. > > > > + * @id: [out] Identifier of the shared memory > > > > + * > > > > + * The flags field should currently be zero as input. Updated by the call > > > > + * with actual flags as defined by TEE_IOCTL_SHM_* above. > > > > + * This structure is used as argument for TEE_IOC_SHM_REGISTER_FD below. 
> > > > + */ > > > > +struct tee_ioctl_shm_register_fd_data { > > > > + __s64 fd; > > > > + __u64 size; > > > > + __u32 flags; > > > > + __s32 id; > > > > +}; > > > > + > > > > +/** > > > > + * TEE_IOC_SHM_REGISTER_FD - register a shared memory from a file descriptor > > > > + * > > > > + * Returns a file descriptor on success or < 0 on failure > > > > + * > > > > + * The returned file descriptor refers to the shared memory object in kernel > > > > + * land. The shared memory is freed when the descriptor is closed. > > > > + */ > > > > +#define TEE_IOC_SHM_REGISTER_FD _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 8, \ > > > > + struct tee_ioctl_shm_register_fd_data) > > > > + > > > > /** > > > > * struct tee_ioctl_buf_data - Variable sized buffer > > > > * @buf_ptr: [in] A __user pointer to a buffer > > > > -- > > > > 2.43.0 > > > >
Re: [PATCH v6 09/10] optee: FF-A: dynamic restricted memory allocation
by Jens Wiklander
On Tue, Apr 1, 2025 at 12:13 PM Sumit Garg <sumit.garg(a)kernel.org> wrote: > > + MM folks to seek guidance here. > > On Thu, Mar 27, 2025 at 09:07:34AM +0100, Jens Wiklander wrote: > > Hi Sumit, > > > > On Tue, Mar 25, 2025 at 8:42 AM Sumit Garg <sumit.garg(a)kernel.org> wrote: > > > > > > On Wed, Mar 05, 2025 at 02:04:15PM +0100, Jens Wiklander wrote: > > > > Add support in the OP-TEE backend driver dynamic restricted
memory > > > > allocation with FF-A. > > > > > > > > The restricted memory pools for dynamically allocated restrict memory > > > > are instantiated when requested by user-space. This instantiation can > > > > fail if OP-TEE doesn't support the requested use-case of restricted > > > > memory. > > > > > > > > Restricted memory pools based on a static carveout or dynamic allocation > > > > can coexist for different use-cases. We use only dynamic allocation with > > > > FF-A. > > > > > > > > Signed-off-by: Jens Wiklander <jens.wiklander(a)linaro.org> > > > > --- > > > > drivers/tee/optee/Makefile | 1 + > > > > drivers/tee/optee/ffa_abi.c | 143 ++++++++++++- > > > > drivers/tee/optee/optee_private.h | 13 +- > > > > drivers/tee/optee/rstmem.c | 329 ++++++++++++++++++++++++++++++ > > > > 4 files changed, 483 insertions(+), 3 deletions(-) > > > > create mode 100644 drivers/tee/optee/rstmem.c > > > > > > <snip> > > > > > diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c > > > > new file mode 100644 > > > > index 000000000000..ea27769934d4 > > > > --- /dev/null > > > > +++ b/drivers/tee/optee/rstmem.c > > > > @@ -0,0 +1,329 @@ > > > > +// SPDX-License-Identifier: GPL-2.0-only > > > > +/* > > > > + * Copyright (c) 2025, Linaro Limited > > > > + */ > > > > +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt > > > > + > > > > +#include <linux/errno.h> > > > > +#include <linux/genalloc.h> > > > > +#include <linux/slab.h> > > > > +#include <linux/string.h> > > > > +#include <linux/tee_core.h> > > > > +#include <linux/types.h> > > > > +#include "optee_private.h" > > > > + > > > > +struct optee_rstmem_cma_pool { > > > > + struct tee_rstmem_pool pool; > > > > + struct gen_pool *gen_pool; > > > > + struct optee *optee; > > > > + size_t page_count; > > > > + u16 *end_points; > > > > + u_int end_point_count; > > > > + u_int align; > > > > + refcount_t refcount; > > > > + u32 use_case; > > > > + struct tee_shm *rstmem; > > > > + /* Protects when initializing and tearing down 
this struct */ > > > > + struct mutex mutex; > > > > +}; > > > > + > > > > +static struct optee_rstmem_cma_pool * > > > > +to_rstmem_cma_pool(struct tee_rstmem_pool *pool) > > > > +{ > > > > + return container_of(pool, struct optee_rstmem_cma_pool, pool); > > > > +} > > > > + > > > > +static int init_cma_rstmem(struct optee_rstmem_cma_pool *rp) > > > > +{ > > > > + int rc; > > > > + > > > > + rp->rstmem = tee_shm_alloc_cma_phys_mem(rp->optee->ctx, rp->page_count, > > > > + rp->align); > > > > + if (IS_ERR(rp->rstmem)) { > > > > + rc = PTR_ERR(rp->rstmem); > > > > + goto err_null_rstmem; > > > > + } > > > > + > > > > + /* > > > > + * TODO unmap the memory range since the physical memory will > > > > + * become inaccesible after the lend_rstmem() call. > > > > + */ > > > > > > What's your plan for this TODO? I think we need a CMA allocator here > > > which can allocate un-mapped memory such that any cache speculation > > > won't lead to CPU hangs once the memory restriction comes into picture. > > > > What happens is platform-specific. For some platforms, it might be > > enough to avoid explicit access. Yes, a CMA allocator with unmapped > > memory or where memory can be unmapped is one option. > > Did you get a chance to enable real memory protection on RockPi board? No, I don't think I have access to the needed documentation for the board to set it up for relevant peripherals. > This will atleast ensure that mapped restricted memory without explicit > access works fine. Since otherwise once people start to enable real > memory restriction in OP-TEE, there can be chances of random hang ups > due to cache speculation. A hypervisor in the normal world can also make the memory inaccessible to the kernel. That shouldn't cause any hangups due to cache speculation. Cheers, Jens > > MM folks, > > Basically what we are trying to achieve here is a "no-map" DT behaviour > [1] which is rather dynamic in nature. 
The use-case here is that a memory > block allocated from CMA can be marked restricted at runtime where we > would like Linux not being able to directly or indirectly (cache > speculation) access it. Once memory restriction use-case has been > completed, the memory block can be marked as normal and freed for > further CMA allocation. > > It will be appreciated if you can guide us regarding the appropriate APIs > to use for un-mapping/mapping CMA allocations for this use-case. > > [1]
https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/rese…
> > -Sumit > > > > > > > > > > + rc = rp->optee->ops->lend_rstmem(rp->optee, rp->rstmem, rp->end_points, > > > > + rp->end_point_count, rp->use_case); > > > > + if (rc) > > > > + goto err_put_shm; > > > > + rp->rstmem->flags |= TEE_SHM_DYNAMIC; > > > > + > > > > + rp->gen_pool = gen_pool_create(PAGE_SHIFT, -1); > > > > + if (!rp->gen_pool) { > > > > + rc = -ENOMEM; > > > > + goto err_reclaim; > > > > + } > > > > + > > > > + rc = gen_pool_add(rp->gen_pool, rp->rstmem->paddr, > > > > + rp->rstmem->size, -1); > > > > + if (rc) > > > > + goto err_free_pool; > > > > + > > > > + refcount_set(&rp->refcount, 1); > > > > + return 0; > > > > + > > > > +err_free_pool: > > > > + gen_pool_destroy(rp->gen_pool); > > > > + rp->gen_pool = NULL; > > > > +err_reclaim: > > > > + rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem); > > > > +err_put_shm: > > > > + tee_shm_put(rp->rstmem); > > > > +err_null_rstmem: > > > > + rp->rstmem = NULL; > > > > + return rc; > > > > +} > > > > + > > > > +static int get_cma_rstmem(struct optee_rstmem_cma_pool *rp) > > > > +{ > > > > + int rc = 0; > > > > + > > > > + if (!refcount_inc_not_zero(&rp->refcount)) { > > > > + mutex_lock(&rp->mutex); > > > > + if (rp->gen_pool) { > > > > + /* > > > > + * Another thread has already initialized the pool > > > > + * before us, or the pool was just about to be torn > > > > + * down. Either way we only need to increase the > > > > + * refcount and we're done. 
> > > > + */ > > > > + refcount_inc(&rp->refcount); > > > > + } else { > > > > + rc = init_cma_rstmem(rp); > > > > + } > > > > + mutex_unlock(&rp->mutex); > > > > + } > > > > + > > > > + return rc; > > > > +} > > > > + > > > > +static void release_cma_rstmem(struct optee_rstmem_cma_pool *rp) > > > > +{ > > > > + gen_pool_destroy(rp->gen_pool); > > > > + rp->gen_pool = NULL; > > > > + > > > > + rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem); > > > > + rp->rstmem->flags &= ~TEE_SHM_DYNAMIC; > > > > + > > > > + WARN(refcount_read(&rp->rstmem->refcount) != 1, "Unexpected refcount"); > > > > + tee_shm_put(rp->rstmem); > > > > + rp->rstmem = NULL; > > > > +} > > > > + > > > > +static void put_cma_rstmem(struct optee_rstmem_cma_pool *rp) > > > > +{ > > > > + if (refcount_dec_and_test(&rp->refcount)) { > > > > + mutex_lock(&rp->mutex); > > > > + if (rp->gen_pool) > > > > + release_cma_rstmem(rp); > > > > + mutex_unlock(&rp->mutex); > > > > + } > > > > +} > > > > + > > > > +static int rstmem_pool_op_cma_alloc(struct tee_rstmem_pool *pool, > > > > + struct sg_table *sgt, size_t size, > > > > + size_t *offs) > > > > +{ > > > > + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); > > > > + size_t sz = ALIGN(size, PAGE_SIZE); > > > > + phys_addr_t pa; > > > > + int rc; > > > > + > > > > + rc = get_cma_rstmem(rp); > > > > + if (rc) > > > > + return rc; > > > > + > > > > + pa = gen_pool_alloc(rp->gen_pool, sz); > > > > + if (!pa) { > > > > + rc = -ENOMEM; > > > > + goto err_put; > > > > + } > > > > + > > > > + rc = sg_alloc_table(sgt, 1, GFP_KERNEL); > > > > + if (rc) > > > > + goto err_free; > > > > + > > > > + sg_set_page(sgt->sgl, phys_to_page(pa), size, 0); > > > > + *offs = pa - rp->rstmem->paddr; > > > > + > > > > + return 0; > > > > +err_free: > > > > + gen_pool_free(rp->gen_pool, pa, size); > > > > +err_put: > > > > + put_cma_rstmem(rp); > > > > + > > > > + return rc; > > > > +} > > > > + > > > > +static void rstmem_pool_op_cma_free(struct tee_rstmem_pool 
*pool, > > > > + struct sg_table *sgt) > > > > +{ > > > > + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); > > > > + struct scatterlist *sg; > > > > + int i; > > > > + > > > > + for_each_sgtable_sg(sgt, sg, i) > > > > + gen_pool_free(rp->gen_pool, sg_phys(sg), sg->length); > > > > + sg_free_table(sgt); > > > > + put_cma_rstmem(rp); > > > > +} > > > > + > > > > +static int rstmem_pool_op_cma_update_shm(struct tee_rstmem_pool *pool, > > > > + struct sg_table *sgt, size_t offs, > > > > + struct tee_shm *shm, > > > > + struct tee_shm **parent_shm) > > > > +{ > > > > + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); > > > > + > > > > + *parent_shm = rp->rstmem; > > > > + > > > > + return 0; > > > > +} > > > > + > > > > +static void pool_op_cma_destroy_pool(struct tee_rstmem_pool *pool) > > > > +{ > > > > + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); > > > > + > > > > + mutex_destroy(&rp->mutex); > > > > + kfree(rp); > > > > +} > > > > + > > > > +static struct tee_rstmem_pool_ops rstmem_pool_ops_cma = { > > > > + .alloc = rstmem_pool_op_cma_alloc, > > > > + .free = rstmem_pool_op_cma_free, > > > > + .update_shm = rstmem_pool_op_cma_update_shm, > > > > + .destroy_pool = pool_op_cma_destroy_pool, > > > > +}; > > > > + > > > > +static int get_rstmem_config(struct optee *optee, u32 use_case, > > > > + size_t *min_size, u_int *min_align, > > > > + u16 *end_points, u_int *ep_count) > > > > > > I guess this end points terminology is specific to FF-A ABI. Is there > > > any relevance for this in the common APIs? > > > > Yes, endpoints are specific to FF-A ABI. The list of end-points must > > be presented to FFA_MEM_LEND. We're relying on the secure world to > > know which endpoints are needed for a specific use case. 
> > > > Cheers, > > Jens > > > > > > > > -Sumit > > > > > > > +{ > > > > + struct tee_param params[2] = { > > > > + [0] = { > > > > + .attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT, > > > > + .u.value.a = use_case, > > > > + }, > > > > + [1] = { > > > > + .attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT, > > > > + }, > > > > + }; > > > > + struct optee_shm_arg_entry *entry; > > > > + struct tee_shm *shm_param = NULL; > > > > + struct optee_msg_arg *msg_arg; > > > > + struct tee_shm *shm; > > > > + u_int offs; > > > > + int rc; > > > > + > > > > + if (end_points && *ep_count) { > > > > + params[1].u.memref.size = *ep_count * sizeof(*end_points); > > > > + shm_param = tee_shm_alloc_priv_buf(optee->ctx, > > > > + params[1].u.memref.size); > > > > + if (IS_ERR(shm_param)) > > > > + return PTR_ERR(shm_param); > > > > + params[1].u.memref.shm = shm_param; > > > > + } > > > > + > > > > + msg_arg = optee_get_msg_arg(optee->ctx, ARRAY_SIZE(params), &entry, > > > > + &shm, &offs); > > > > + if (IS_ERR(msg_arg)) { > > > > + rc = PTR_ERR(msg_arg); > > > > + goto out_free_shm; > > > > + } > > > > + msg_arg->cmd = OPTEE_MSG_CMD_GET_RSTMEM_CONFIG; > > > > + > > > > + rc = optee->ops->to_msg_param(optee, msg_arg->params, > > > > + ARRAY_SIZE(params), params, > > > > + false /*!update_out*/); > > > > + if (rc) > > > > + goto out_free_msg; > > > > + > > > > + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); > > > > + if (rc) > > > > + goto out_free_msg; > > > > + if (msg_arg->ret && msg_arg->ret != TEEC_ERROR_SHORT_BUFFER) { > > > > + rc = -EINVAL; > > > > + goto out_free_msg; > > > > + } > > > > + > > > > + rc = optee->ops->from_msg_param(optee, params, ARRAY_SIZE(params), > > > > + msg_arg->params, true /*update_out*/); > > > > + if (rc) > > > > + goto out_free_msg; > > > > + > > > > + if (!msg_arg->ret && end_points && > > > > + *ep_count < params[1].u.memref.size / sizeof(u16)) { > > > > + rc = -EINVAL; > > > > + goto out_free_msg; > > > > + } > > > > + > > > > + 
*min_size = params[0].u.value.a; > > > > + *min_align = params[0].u.value.b; > > > > + *ep_count = params[1].u.memref.size / sizeof(u16); > > > > + > > > > + if (msg_arg->ret == TEEC_ERROR_SHORT_BUFFER) { > > > > + rc = -ENOSPC; > > > > + goto out_free_msg; > > > > + } > > > > + > > > > + if (end_points) > > > > + memcpy(end_points, tee_shm_get_va(shm_param, 0), > > > > + params[1].u.memref.size); > > > > + > > > > +out_free_msg: > > > > + optee_free_msg_arg(optee->ctx, entry, offs); > > > > +out_free_shm: > > > > + if (shm_param) > > > > + tee_shm_free(shm_param); > > > > + return rc; > > > > +} > > > > + > > > > +struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee, > > > > + enum tee_dma_heap_id id) > > > > +{ > > > > + struct optee_rstmem_cma_pool *rp; > > > > + u32 use_case = id; > > > > + size_t min_size; > > > > + int rc; > > > > + > > > > + rp = kzalloc(sizeof(*rp), GFP_KERNEL); > > > > + if (!rp) > > > > + return ERR_PTR(-ENOMEM); > > > > + rp->use_case = use_case; > > > > + > > > > + rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, NULL, > > > > + &rp->end_point_count); > > > > + if (rc) { > > > > + if (rc != -ENOSPC) > > > > + goto err; > > > > + rp->end_points = kcalloc(rp->end_point_count, > > > > + sizeof(*rp->end_points), GFP_KERNEL); > > > > + if (!rp->end_points) { > > > > + rc = -ENOMEM; > > > > + goto err; > > > > + } > > > > + rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, > > > > + rp->end_points, &rp->end_point_count); > > > > + if (rc) > > > > + goto err_kfree_eps; > > > > + } > > > > + > > > > + rp->pool.ops = &rstmem_pool_ops_cma; > > > > + rp->optee = optee; > > > > + rp->page_count = min_size / PAGE_SIZE; > > > > + mutex_init(&rp->mutex); > > > > + > > > > + return &rp->pool; > > > > + > > > > +err_kfree_eps: > > > > + kfree(rp->end_points); > > > > +err: > > > > + kfree(rp); > > > > + return ERR_PTR(rc); > > > > +} > > > > -- > > > > 2.43.0 > > > >
Re: [PATCH v6 05/10] tee: implement restricted DMA-heap
by Jens Wiklander
On Tue, Apr 1, 2025 at 9:58 AM Sumit Garg <sumit.garg(a)kernel.org> wrote: > > On Tue, Mar 25, 2025 at 11:55:46AM +0100, Jens Wiklander wrote: > > Hi Sumit, > > > > <snip> > > > > > > > > > > + > > > > +#include "tee_private.h" > > > > + > > > > +struct tee_dma_heap { > > > > + struct dma_heap *heap; > > > > + enum tee_dma_heap_id id; > > > > +
struct tee_rstmem_pool *pool; > > > > + struct tee_device *teedev; > > > > + /* Protects pool and teedev above */ > > > > + struct mutex mu; > > > > +}; > > > > + > > > > +struct tee_heap_buffer { > > > > + struct tee_rstmem_pool *pool; > > > > + struct tee_device *teedev; > > > > + size_t size; > > > > + size_t offs; > > > > + struct sg_table table; > > > > +}; > > > > + > > > > +struct tee_heap_attachment { > > > > + struct sg_table table; > > > > + struct device *dev; > > > > +}; > > > > + > > > > +struct tee_rstmem_static_pool { > > > > + struct tee_rstmem_pool pool; > > > > + struct gen_pool *gen_pool; > > > > + phys_addr_t pa_base; > > > > +}; > > > > + > > > > +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS) > > > > > > Can this dependency rather be better managed via Kconfig? > > > > This was the easiest yet somewhat flexible solution I could find. If > > you have something better, let's use that instead. > > > > --- a/drivers/tee/optee/Kconfig > +++ b/drivers/tee/optee/Kconfig > @@ -5,6 +5,7 @@ config OPTEE > depends on HAVE_ARM_SMCCC > depends on MMU > depends on RPMB || !RPMB > + select DMABUF_HEAPS > help > This implements the OP-TEE Trusted Execution Environment (TEE) > driver. I wanted to avoid that since there are plenty of use cases where DMABUF_HEAPS aren't needed. This seems to do the job: +config TEE_DMABUF_HEAP + bool + depends on TEE = y && DMABUF_HEAPS We can only use DMABUF_HEAPS if the TEE subsystem is compiled into the kernel. Cheers, Jens
Re: [PATCH v6 03/10] optee: account for direction while converting parameters
by Jens Wiklander
On Tue, Apr 1, 2025 at 9:45 AM Sumit Garg <sumit.garg(a)kernel.org> wrote: > > On Tue, Mar 25, 2025 at 09:50:35AM +0100, Jens Wiklander wrote: > > On Tue, Mar 25, 2025 at 6:56 AM Sumit Garg <sumit.garg(a)kernel.org> wrote: > > > > > > On Thu, Mar 20, 2025 at 02:00:57PM +0100, Jens Wiklander wrote: > > > > Hi Sumit, > > > > > > > > On Thu, Mar 20, 2025 at 10:25 AM Sumit Garg <sumit.garg(a)kernel.org> wrote: >
> > > > > > > > > Hi Jens, > > > > > > > > > > On Mon, Mar 17, 2025 at 08:42:01AM +0100, Jens Wiklander wrote: > > > > > > Hi Sumit, > > > > > > > > > > > > On Thu, Mar 13, 2025 at 11:41 AM Sumit Garg <sumit.garg(a)kernel.org> wrote: > > > > > > > > > > > > > > Hi Jens, > > > > > > > > > > > > > > On Wed, Mar 05, 2025 at 02:04:09PM +0100, Jens Wiklander wrote: > > > > > > > > The OP-TEE backend driver has two internal function pointers to convert > > > > > > > > between the subsystem type struct tee_param and the OP-TEE type struct > > > > > > > > optee_msg_param. > > > > > > > > > > > > > > > > The conversion is done from one of the types to the other, which is then > > > > > > > > involved in some operation and finally converted back to the original > > > > > > > > type. When converting to prepare the parameters for the operation, all > > > > > > > > fields must be taken into account, but then converting back, it's enough > > > > > > > > to update only out-values and out-sizes. So, an update_out parameter is > > > > > > > > added to the conversion functions to tell if all or only some fields > > > > > > > > must be copied. > > > > > > > > > > > > > > > > This is needed in a later patch where it might get confusing when > > > > > > > > converting back in from_msg_param() callback since an allocated > > > > > > > > restricted SHM can be using the sec_world_id of the used restricted > > > > > > > > memory pool and that doesn't translate back well. 
> > > > > > > > > > > > > > > > Signed-off-by: Jens Wiklander <jens.wiklander(a)linaro.org> > > > > > > > > --- > > > > > > > > drivers/tee/optee/call.c | 10 ++-- > > > > > > > > drivers/tee/optee/ffa_abi.c | 43 +++++++++++++---- > > > > > > > > drivers/tee/optee/optee_private.h | 42 +++++++++++------ > > > > > > > > drivers/tee/optee/rpc.c | 31 +++++++++---- > > > > > > > > drivers/tee/optee/smc_abi.c | 76 +++++++++++++++++++++++-------- > > > > > > > > 5 files changed, 144 insertions(+), 58 deletions(-) > > > > > > > > > > > > > > > > diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c > > > > > > > > index 16eb953e14bb..f1533b894726 100644 > > > > > > > > --- a/drivers/tee/optee/call.c > > > > > > > > +++ b/drivers/tee/optee/call.c > > > > > > > > @@ -400,7 +400,8 @@ int optee_open_session(struct tee_context *ctx, > > > > > > > > export_uuid(msg_arg->params[1].u.octets, &client_uuid); > > > > > > > > > > > > > > > > rc = optee->ops->to_msg_param(optee, msg_arg->params + 2, > > > > > > > > - arg->num_params, param); > > > > > > > > + arg->num_params, param, > > > > > > > > + false /*!update_out*/); > > > > > > > > if (rc) > > > > > > > > goto out; > > > > > > > > > > > > > > > > @@ -427,7 +428,8 @@ int optee_open_session(struct tee_context *ctx, > > > > > > > > } > > > > > > > > > > > > > > > > if (optee->ops->from_msg_param(optee, param, arg->num_params, > > > > > > > > - msg_arg->params + 2)) { > > > > > > > > + msg_arg->params + 2, > > > > > > > > + true /*update_out*/)) { > > > > > > > > arg->ret = TEEC_ERROR_COMMUNICATION; > > > > > > > > arg->ret_origin = TEEC_ORIGIN_COMMS; > > > > > > > > /* Close session again to avoid leakage */ > > > > > > > > @@ -541,7 +543,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, > > > > > > > > msg_arg->cancel_id = arg->cancel_id; > > > > > > > > > > > > > > > > rc = optee->ops->to_msg_param(optee, msg_arg->params, arg->num_params, > > > > > > > > - param); > > > > > > > > + 
param, false /*!update_out*/); > > > > > > > > if (rc) > > > > > > > > goto out; > > > > > > > > > > > > > > > > @@ -551,7 +553,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, > > > > > > > > } > > > > > > > > > > > > > > > > if (optee->ops->from_msg_param(optee, param, arg->num_params, > > > > > > > > - msg_arg->params)) { > > > > > > > > + msg_arg->params, true /*update_out*/)) { > > > > > > > > msg_arg->ret = TEEC_ERROR_COMMUNICATION; > > > > > > > > msg_arg->ret_origin = TEEC_ORIGIN_COMMS; > > > > > > > > } > > > > > > > > diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c > > > > > > > > index 4ca1d5161b82..e4b08cd195f3 100644 > > > > > > > > --- a/drivers/tee/optee/ffa_abi.c > > > > > > > > +++ b/drivers/tee/optee/ffa_abi.c > > > > > > > > @@ -122,15 +122,21 @@ static int optee_shm_rem_ffa_handle(struct optee *optee, u64 global_id) > > > > > > > > */ > > > > > > > > > > > > > > > > static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, > > > > > > > > - u32 attr, const struct optee_msg_param *mp) > > > > > > > > + u32 attr, const struct optee_msg_param *mp, > > > > > > > > + bool update_out) > > > > > > > > { > > > > > > > > struct tee_shm *shm = NULL; > > > > > > > > u64 offs_high = 0; > > > > > > > > u64 offs_low = 0; > > > > > > > > > > > > > > > > + if (update_out) { > > > > > > > > + if (attr == OPTEE_MSG_ATTR_TYPE_FMEM_INPUT) > > > > > > > > + return; > > > > > > > > + goto out; > > > > > > > > + } > > > > > > > > + > > > > > > > > p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + > > > > > > > > attr - OPTEE_MSG_ATTR_TYPE_FMEM_INPUT; > > > > > > > > - p->u.memref.size = mp->u.fmem.size; > > > > > > > > > > > > > > > > if (mp->u.fmem.global_id != OPTEE_MSG_FMEM_INVALID_GLOBAL_ID) > > > > > > > > shm = optee_shm_from_ffa_handle(optee, mp->u.fmem.global_id); > > > > > > > > @@ -141,6 +147,8 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, > > > 
> > > > > > > >                 offs_high = mp->u.fmem.offs_high;
> > > > > > > >         }
> > > > > > > >         p->u.memref.shm_offs = offs_low | offs_high << 32;
> > > > > > > > +out:
> > > > > > > > +       p->u.memref.size = mp->u.fmem.size;
> > > > > > > >  }
> > > > > > > >
> > > > > > > >  /**
> > > > > > > > @@ -150,12 +158,14 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p,
> > > > > > > >   * @params:            subsystem internal parameter representation
> > > > > > > >   * @num_params:        number of elements in the parameter arrays
> > > > > > > >   * @msg_params:        OPTEE_MSG parameters
> > > > > > > > + * @update_out:        update parameter for output only
> > > > > > > >   *
> > > > > > > >   * Returns 0 on success or <0 on failure
> > > > > > > >   */
> > > > > > > >  static int optee_ffa_from_msg_param(struct optee *optee,
> > > > > > > >                                     struct tee_param *params, size_t num_params,
> > > > > > > > -                                   const struct optee_msg_param *msg_params)
> > > > > > > > +                                   const struct optee_msg_param *msg_params,
> > > > > > > > +                                   bool update_out)
> > > > > > > >  {
> > > > > > > >         size_t n;
> > > > > > > >
> > > > > > > > @@ -166,18 +176,20 @@ static int optee_ffa_from_msg_param(struct optee *optee,
> > > > > > > >
> > > > > > > >                 switch (attr) {
> > > > > > > >                 case OPTEE_MSG_ATTR_TYPE_NONE:
> > > > > > > > +                       if (update_out)
> > > > > > > > +                               break;
> > > > > > > >                         p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE;
> > > > > > > >                         memset(&p->u, 0, sizeof(p->u));
> > > > > > > >                         break;
> > > > > > > >                 case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT:
> > > > > > > >                 case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT:
> > > > > > > >                 case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
> > > > > > > > -                       optee_from_msg_param_value(p, attr, mp);
> > > > > > > > +                       optee_from_msg_param_value(p, attr, mp, update_out);
> > > > > > > >                         break;
> > > > > > > >                 case OPTEE_MSG_ATTR_TYPE_FMEM_INPUT:
> > > > > > > >                 case OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT:
> > > > > > > >                 case OPTEE_MSG_ATTR_TYPE_FMEM_INOUT:
> > > > > > > > -                       from_msg_param_ffa_mem(optee, p, attr, mp);
> > > > > > > > +                       from_msg_param_ffa_mem(optee, p, attr, mp, update_out);
> > > > > > > >                         break;
> > > > > > > >                 default:
> > > > > > > >                         return -EINVAL;
> > > > > > > > @@ -188,10 +200,16 @@ static int optee_ffa_from_msg_param(struct optee *optee,
> > > > > > > >  }
> > > > > > > >
> > > > > > > >  static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
> > > > > > > > -                               const struct tee_param *p)
> > > > > > > > +                               const struct tee_param *p, bool update_out)
> > > > > > > >  {
> > > > > > > >         struct tee_shm *shm = p->u.memref.shm;
> > > > > > > >
> > > > > > > > +       if (update_out) {
> > > > > > > > +               if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT)
> > > > > > > > +                       return 0;
> > > > > > > > +               goto out;
> > > > > > > > +       }
> > > > > > > > +
> > > > > > > >         mp->attr = OPTEE_MSG_ATTR_TYPE_FMEM_INPUT + p->attr -
> > > > > > > >                    TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
> > > > > > > >
> > > > > > > > @@ -211,6 +229,7 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
> > > > > > > >                 memset(&mp->u, 0, sizeof(mp->u));
> > > > > > > >                 mp->u.fmem.global_id = OPTEE_MSG_FMEM_INVALID_GLOBAL_ID;
> > > > > > > >         }
> > > > > > > > +out:
> > > > > > > >         mp->u.fmem.size = p->u.memref.size;
> > > > > > > >
> > > > > > > >         return 0;
> > > > > > > > @@ -222,13 +241,15 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
> > > > > > > >   * @optee:             main service struct
> > > > > > > >   * @msg_params:        OPTEE_MSG parameters
> > > > > > > >   * @num_params:        number of elements in the parameter arrays
> > > > > > > > - * @params:            subsystem itnernal parameter representation
> > > > > > > > + * @params:            subsystem internal parameter representation
> > > > > > > > + * @update_out:        update parameter for output only
> > > > > > > >   * Returns 0 on success or <0 on failure
> > > > > > > >   */
> > > > > > > >  static int optee_ffa_to_msg_param(struct optee *optee,
> > > > > > > >                                   struct optee_msg_param *msg_params,
> > > > > > > >                                   size_t num_params,
> > > > > > > > -                                 const struct tee_param *params)
> > > > > > > > +                                 const struct tee_param *params,
> > > > > > > > +                                 bool update_out)
> > > > > > > >  {
> > > > > > > >         size_t n;
> > > > > > > >
> > > > > > > > @@ -238,18 +259,20 @@ static int optee_ffa_to_msg_param(struct optee *optee,
> > > > > > > >
> > > > > > > >                 switch (p->attr) {
> > > > > > > >                 case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
> > > > > > > > +                       if (update_out)
> > > > > > > > +                               break;
> > > > > > > >                         mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE;
> > > > > > > >                         memset(&mp->u, 0, sizeof(mp->u));
> > > > > > > >                         break;
> > > > > > > >                 case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT:
> > > > > > > >                 case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT:
> > > > > > > >                 case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
> > > > > > > > -                       optee_to_msg_param_value(mp, p);
> > > > > > > > +                       optee_to_msg_param_value(mp, p, update_out);
> > > > > > > >                         break;
> > > > > > > >                 case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
> > > > > > > >                 case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
> > > > > > > >                 case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
> > > > > > > > -                       if (to_msg_param_ffa_mem(mp, p))
> > > > > > > > +                       if (to_msg_param_ffa_mem(mp, p, update_out))
> > > > > > > >                                 return -EINVAL;
> > > > > > > >                         break;
> > > > > > > >                 default:
> > > > > > >
> > > > > > > Can we rather handle it as follows to improve code readability and
> > > > > > > maintenance long term? Ditto for all other places.
> > > > > > >
> > > > > > > static int optee_ffa_to_msg_param(struct optee *optee,
> > > > > > >                                   struct optee_msg_param *msg_params,
> > > > > > >                                   size_t num_params,
> > > > > > >                                   const struct tee_param *params,
> > > > > > >                                   bool update_out)
> > > > > > > {
> > > > > > >         size_t n;
> > > > > > >
> > > > > > >         for (n = 0; n < num_params; n++) {
> > > > > > >                 const struct tee_param *p = params + n;
> > > > > > >                 struct optee_msg_param *mp = msg_params + n;
> > > > > > >
> > > > > > >                 if (update_out && (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_NONE ||
> > > > > > >                                    p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT ||
> > > > > > >                                    p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT))
> > > > > > >                         continue;
> > > > > >
> > > > > > You're missing updating the length field for memrefs.
> > > > >
> > > > > Do we need to update the length field for input memrefs when update_out is
> > > > > set? I don't see that happening in your existing patch either.
> > > >
> > > > I'm sorry, I was unclear. The update_out parameter means only the
> > > > output fields should be updated, not the attribute, offsets, IDs, etc.
> > > > That is, the length field for memrefs, and the value fields a, b, c
> > > > for value params. Some of the memrefs aren't translated one-to-one
> > > > with SDP, but the length field can and must be updated.
> > >
> > > Isn't it rather better to add another attribute type to handle the SDP
> > > special case?
> >
> > This isn't special handling, all parameters get the same treatment.
> > When updating a parameter after it has been used, this is all that
> > needs to be done, regardless of whether it's an SDP buffer. The
> > updates we did before this patch were redundant.
> >
> > This patch was introduced in v3 of this patch set, but I don't
> > think it's strictly needed any longer since SDP buffers are allocated
> > differently now. I think it's nice to only update what's needed when
> > translating back a parameter (just as in params_to_user() in
> > drivers/tee/tee_core.c), but if you don't like it, we can drop this
> > patch.
>
> params_to_user() doesn't take any additional parameter like "update_out"
> which is complicating the program flow here. Can't we rather follow
> similar practice for the {to/from}_msg_param() APIs?

I'm afraid something special is needed. handle_rpc_supp_cmd() needs all
the fields to be updated, from_msg_param() prepares input, and the
attribute type may have changed when to_msg_param() is called.

Cheers,
Jens