Hello Julien,
Julien Grall writes:
Hi Volodymyr,
On 18/12/2018 21:11, Volodymyr Babchuk wrote:
From: Volodymyr Babchuk vlad.babchuk@gmail.com
Shared memory is widely used by NW to communicate with TAs in OP-TEE. NW can share part of its own memory with a TA or with the OP-TEE core, either by registering it in OP-TEE or by providing a temporary reference. In any case, information about such memory buffers is sent to OP-TEE as a list of pages. This mechanism is described in optee_msg.h.
The mediator should step in when NW tries to share memory with OP-TEE, for two reasons:
- Do address translation from IPA to PA.
- Pin domain pages till they are mapped into OP-TEE or TA
Looking at the code, I think the pages are mapped while OP-TEE is using them. If so, it should be s/till/while/.
address space, so the domain can't transfer these pages to another domain or balloon them out.
Address translation is done by the translate_noncontig(...) function. It allocates a new buffer from the xenheap and then walks over the guest-provided list of pages, translates the addresses and stores the PAs into the newly allocated buffer. This buffer will be provided to OP-TEE instead of the original buffer from the guest, and will be freed at the end of the standard call.
At the same time, this function pins the pages and stores them in a struct optee_shm_buf object. This object lives for as long as the given SHM buffer is known to OP-TEE. It will be freed after the guest unregisters the shared buffer; at that time the pages will be unpinned.
We don't need to do any special reference counting because OP-TEE tracks the buffer on its side. So the mediator will unpin the pages only when OP-TEE returns successfully from the OPTEE_MSG_CMD_UNREGISTER_SHM call.
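To illustrate the idea, the per-page walk looks roughly like the sketch below. This is not the code from the patch: pin_and_translate() is a made-up helper, and error unwinding plus the chaining through next_page_data are omitted.

    static int translate_one_list_page(struct domain *d, paddr_t list_ipa,
                                       uint64_t *xen_entries)
    {
        /* Layout of one page of the non-contiguous list (see optee_msg.h) */
        struct {
            uint64_t pages_list[PAGELIST_ENTRIES_PER_PAGE];
            uint64_t next_page_data;
        } guest_data;
        unsigned int i;
        int ret;

        /* Copy the guest's page-list page instead of mapping it */
        ret = access_guest_memory_by_ipa(d, list_ipa, &guest_data,
                                         sizeof(guest_data), false);
        if ( ret )
            return ret;

        for ( i = 0; i < PAGELIST_ENTRIES_PER_PAGE; i++ )
        {
            paddr_t pa;

            /* Hypothetical helper: pin the page and translate IPA to PA */
            if ( !pin_and_translate(d, guest_data.pages_list[i], &pa) )
                return -EINVAL;

            xen_entries[i] = pa;
        }

        return 0;
    }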
Signed-off-by: Volodymyr Babchuk vlad.babchuk@gmail.com
Changes from v2:
- Made sure that the guest does not try to register a shared buffer with the same cookie twice (see the sketch after this list)
- Fixed coding style
- Use access_guest_memory_by_ipa() instead of direct memory mapping
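For the cookie check, the idea is roughly the following. This is a sketch only; the per-domain context, list and field names here are illustrative and may not match the patch.

    static struct optee_shm_buf *find_shm_buf(struct optee_domain *ctx,
                                              uint64_t cookie)
    {
        struct optee_shm_buf *buf;

        list_for_each_entry( buf, &ctx->shm_buf_list, list )
            if ( buf->cookie == cookie )
                return buf;

        return NULL;
    }

    /*
     * At registration time the mediator then bails out early:
     *
     *     if ( find_shm_buf(ctx, cookie) )
     *         return ERR_PTR(-EEXIST);
     */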
 xen/arch/arm/tee/optee.c | 274 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 274 insertions(+)
diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 771148e940..cfc3b34df7 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -37,6 +37,10 @@
  */
 #define MAX_RPC_SHMS   MAX_STD_CALLS

+/* Maximum total number of pages that guest can share with OP-TEE */
+#define MAX_TOTAL_SMH_BUF_PG    16384
Please explain in the commit message and code how you came up with this value.
Basically it is taken off the top of my head. I just needed some number. You see, the number of registered shared memory buffers depends solely on the free heap in OP-TEE. But the same heap is used for other purposes, so it is hard to tell how many pages can be shared with OP-TEE. I assumed that 64MiB ought to be enough for anybody.
Probably it is worth making this configurable via the domctl interface.
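Just to spell out the arithmetic behind the current value (the comment below is mine, not in the patch):

    /*
     * With 4 KiB pages:
     *     16384 pages * 4096 bytes/page = 67108864 bytes = 64 MiB
     * i.e. the define is simply "64 MiB worth of shared pages". If it
     * becomes configurable via domctl, this would only be the default.
     */
    #define MAX_TOTAL_SMH_BUF_PG    16384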
+#define MAX_NONCONTIG_ENTRIES 5
If I understand correctly the code below, MAX_NONCONTIG_ENTRIES is basically linked to the number of parameters. The maximum number of parameters is 5. I see in patch #6 you have the following check:
    OPTEE_MSG_GET_ARG_SIZE(num_params) > OPTEE_MSG_NONCONTIG_PAGE_SIZE

It is not entirely obvious how this ensures you have no more than 5 parameters, nor how you came up with this value. Can you please at least provide more documentation in the code explaining the origin of the value?
Sadly, it is not mentioned in the standard. But it is defined locally in OP-TEE, in the "tee_api_defines.h" file, in the following way:
    /* Not specified in the standard */
    #define TEE_NUM_PARAMS 4
The fifth parameter is added to handle the RPC buffer in the same way as the other parameters. Actually, I think I need to rework this, because the only hard limit on the number of parameters is the check mentioned above:
OPTEE_MSG_GET_ARG_SIZE(num_params) > OPTEE_MSG_NONCONTIG_PAGE_SIZE
You might also want a
BUILD_BUG_ON(OPTEE_MSG_GET_ARG_SIZE(MAX_NONCONTIG_ENTRIES) == OPTEE_MSG_NONCONTIG_PAGE_SIZE).
I think that
BUILD_BUG_ON(OPTEE_MSG_GET_ARG_SIZE(MAX_NONCONTIG_ENTRIES) <= OPTEE_MSG_NONCONTIG_PAGE_SIZE).
would be better in this case. But yes, I think I need to revisit this part.
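For what it is worth, something along these lines is what I have in mind. This is only a sketch: I use '>' so that the build fails exactly when arguments with MAX_NONCONTIG_ENTRIES parameters no longer fit into one non-contiguous page, and the wrapper function is just illustrative placement (BUILD_BUG_ON has to live inside a function).

    /* Not specified in the standard, taken from OP-TEE's tee_api_defines.h */
    #define TEE_NUM_PARAMS          4
    /* One extra entry to pass the RPC buffer the same way as other params */
    #define MAX_NONCONTIG_ENTRIES   (TEE_NUM_PARAMS + 1)

    static void check_noncontig_limit(void)
    {
        /* Arguments with MAX_NONCONTIG_ENTRIES params must fit in one page */
        BUILD_BUG_ON(OPTEE_MSG_GET_ARG_SIZE(MAX_NONCONTIG_ENTRIES) >
                     OPTEE_MSG_NONCONTIG_PAGE_SIZE);
    }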
[...]
if ( new >= MAX_TOTAL_SMH_BUF_PG )
return NULL;
The limitation is on the number of pages and is quite high. What would prevent a guest from registering shared memory page by page? If nothing, then I think you can end up with an interesting issue in Xen because of the growing list and memory used.
Are you suggesting to introduce limits both on the number of buffers and on the total number of pages?
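If so, something like the following is what I can do. This is purely a sketch: the per-domain context, field names and MAX_SHM_BUFFERS are made up, and the real code would need atomic or locked updates.

    #define MAX_SHM_BUFFERS    128    /* illustrative value */

    static bool account_shm_buf(struct optee_domain *ctx, unsigned int pages)
    {
        /* Reject the registration if either per-domain limit is exceeded */
        if ( ctx->shm_buf_count + 1 > MAX_SHM_BUFFERS ||
             ctx->shm_buf_pages + pages > MAX_TOTAL_SMH_BUF_PG )
            return false;

        ctx->shm_buf_count++;
        ctx->shm_buf_pages += pages;

        return true;
    }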
[...]
    struct page_info *guest_page;

    struct {
        uint64_t pages_list[PAGELIST_ENTRIES_PER_PAGE];
        uint64_t next_page_data;
    } *pages_data_guest, *pages_data_xen, *pages_data_xen_start;
The structure is the internal layout of OPTEE_MSG_ATTR_NONCONTIG, am I correct? If so, the comment at the beginning of the function should be on top of the structure, with further clarification (i.e. that this is the layout of the memory).
Yes, this is the layout of the memory, which is described in optee_msg.h. I'll add a reference to this file in the comment.
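So it would look roughly like this (wording of the comment is tentative):

    /*
     * Non-contiguous page list layout, as described in optee_msg.h
     * (OPTEE_MSG_ATTR_NONCONTIG): each page holds up to
     * PAGELIST_ENTRIES_PER_PAGE page addresses, and the last entry points
     * to the next page of the list.
     */
    struct {
        uint64_t pages_list[PAGELIST_ENTRIES_PER_PAGE];
        uint64_t next_page_data;
    } *pages_data_guest, *pages_data_xen, *pages_data_xen_start;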
[...]
    page_offset = param->u.tmem.buf_ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);

    /* Size of the user buffer in bytes */
    size = ROUNDUP(param->u.tmem.size + page_offset,
                   OPTEE_MSG_NONCONTIG_PAGE_SIZE);

    num_pages = DIV_ROUND_UP(size, OPTEE_MSG_NONCONTIG_PAGE_SIZE);

    order = get_order_from_bytes(get_pages_list_size(num_pages));

    pages_data_xen_start = alloc_xenheap_pages(order, 0);
By using alloc_xenheap_pages, you may end up allocating more memory than necessary when the order gets bigger. What is the biggest order you expect here?
It depends on MAX_TOTAL_SMH_BUF_PG. One page can contain up to 511 entries, plus a reference to the next one. So at most I will need about 32 pages, which gives order 5-6. I think I can try xzalloc or the domheap there, but it looks like there is no xmem_pool for the domheap, so I would still be forced to use alloc_domheap_pages().
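To put numbers on the worst case (assuming that get_pages_list_size() simply rounds up to whole list pages; the exact helper may differ):

    /* 512 u64 slots per 4 KiB page, one of them is the "next page" link */
    #define ENTRIES_PER_LIST_PAGE    511

    static unsigned int worst_case_list_order(void)
    {
        /* One buffer covering all MAX_TOTAL_SMH_BUF_PG (16384) pages */
        unsigned int list_pages = DIV_ROUND_UP(MAX_TOTAL_SMH_BUF_PG,
                                               ENTRIES_PER_LIST_PAGE); /* 33 */

        /*
         * get_order_from_bytes(33 * 4096) == 6, so alloc_xenheap_pages()
         * would hand out 64 pages while only 33 are actually used.
         */
        return get_order_from_bytes(list_pages * OPTEE_MSG_NONCONTIG_PAGE_SIZE);
    }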
[...]