Hi, I don't know whether this is the right place to discuss this; sorry for bothering you.
OP-TEE OS already supports virtualization, but modifications to the hypervisor are also necessary. However, proprietary hypervisors are closed-source, and some TEE OSes, such as QSEE from Qualcomm, are closed-source as well. So maybe virtio-tee is an alternative solution for the Guest VM to access OP-TEE.
In detail: CA (Guest VM) --> libteec.so (Guest VM) --> tee driver (Guest VM) --> optee_do_call_with_arg() --> invoke_fn() --> virtio-tee driver --> virtio-tee device (Host VM) --> libteec.so (Host VM) --> tee driver (Host VM) --> optee_do_call_with_arg() --> invoke_fn() --> TEE OS.
I think the virtio-tee device must transfer the RPC to the virtio-tee driver in the Guest VM, and then to the tee-supplicant in the Guest VM, in order to load TAs in the Guest VM.
In the Host VM, the tee-supplicant accesses the tee driver through /dev/teepriv0, and the virtio-tee device accesses the tee driver through /dev/teepriv1. So I wonder how the Host VM tee driver can dispatch the RPC from OP-TEE to the correct receiver, the tee-supplicant or the virtio-tee device?
Best Regards, Li Cheng
lchina77 writes:
Hi,
Hi,
I don't know whether this is the right place to discuss this; sorry for bothering you.
OP-TEE OS already supports virtualization, but modifications to the hypervisor are also necessary. However, proprietary hypervisors are closed-source, and some TEE OSes, such as QSEE from Qualcomm, are closed-source as well. So maybe virtio-tee is an alternative solution for the Guest VM to access OP-TEE.
Well, it depends on your security requirements. For example, if you trust your hypervisor, you can run your TEE as an additional VM. In this case you will of course not get the benefits of secure mode, but hypervisor extensions also provide quite a serious degree of isolation.
In detail: CA (Guest VM) --> libteec.so (Guest VM) --> tee driver (Guest VM) --> optee_do_call_with_arg() --> invoke_fn() --> virtio-tee driver --> virtio-tee device (Host VM) --> libteec.so (Host VM) --> tee driver (Host VM) --> optee_do_call_with_arg() --> invoke_fn() --> TEE OS.
Well, this is doable, of course. I believe the KVM guys tried a similar approach. In this case you must trust your Host VM, which, again, may or may not be an issue.
I think the virtio-tee device must transfer the RPC to the virtio-tee driver in the Guest VM, and then to the tee-supplicant in the Guest VM, in order to load TAs in the Guest VM.
Yes, the virtio-tee driver can act as a TEE mediator, as in the Xen hypervisor.
In the Host VM, the tee-supplicant accesses the tee driver through /dev/teepriv0, and the virtio-tee device accesses the tee driver through /dev/teepriv1. So I wonder how the Host VM tee driver can dispatch the RPC from OP-TEE to the correct receiver, the tee-supplicant or the virtio-tee device?
As I said, this is completely doable. It would require some careful design, but I can't see any serious obstacles along the way.
At 2020-11-21 00:52:18, "Volodymyr Babchuk" Volodymyr_Babchuk@epam.com wrote:
lchina77 writes:
Hi,
Hi,
I don't know whether this is the right place to discuss this; sorry for bothering you.
OP-TEE OS already supports virtualization, but modifications to the hypervisor are also necessary. However, proprietary hypervisors are closed-source, and some TEE OSes, such as QSEE from Qualcomm, are closed-source as well. So maybe virtio-tee is an alternative solution for the Guest VM to access OP-TEE.
Well, it depends on your security requirements. For example, if you trust your hypervisor, you can run your TEE as an additional VM. In this case you will of course not get the benefits of secure mode, but hypervisor extensions also provide quite a serious degree of isolation.
In detail: CA (Guest VM) --> libteec.so (Guest VM) --> tee driver (Guest VM) --> optee_do_call_with_arg() --> invoke_fn() --> virtio-tee driver --> virtio-tee device (Host VM) --> libteec.so (Host VM) --> tee driver (Host VM) --> optee_do_call_with_arg() --> invoke_fn() --> TEE OS.
Well, this is doable, of course. I believe the KVM guys tried a similar approach. In this case you must trust your Host VM, which, again, may or may not be an issue.
I think the virtio-tee device must transfer the RPC to the virtio-tee driver in the Guest VM, and then to the tee-supplicant in the Guest VM, in order to load TAs in the Guest VM.
Yes, the virtio-tee driver can act as a TEE mediator, as in the Xen hypervisor.
You mean that the virtio-tee device in the Host VM will act as the TEE mediator, in order to transfer data between the correct Guest VM and OP-TEE, is that right?
In the Host VM, the tee-supplicant accesses the tee driver through /dev/teepriv0, and the virtio-tee device accesses the tee driver through /dev/teepriv1. So I wonder how the Host VM tee driver can dispatch the RPC from OP-TEE to the correct receiver, the tee-supplicant or the virtio-tee device?
As I said, this is completely doable. It would require some careful design, but I can't see any serious obstacles along the way.
Well, the problem is how the Host VM tee driver can dispatch the RPC from OP-TEE to the correct receiver. That is, the Host VM tee driver calls invoke_fn() and gets an RPC from OP-TEE, in which case OPTEE_SMC_RETURN_IS_RPC(res.a0) is true; how then does optee_handle_rpc(ctx, &param, &call_ctx) find the correct handler, the virtio-tee device or the Host VM tee-supplicant?
It seems that only the parameter struct tee_context *ctx could serve as the dispatch condition, but I haven't found a way to use it for that, or to attach more information to it.
Could you please give me some suggestions?
-- Volodymyr Babchuk at EPAM
Hi Li Cheng,
Could you please elaborate the problem you are trying to solve?
Is the issue that it is difficult to integrate an OP-TEE-specific driver into a Hypervisor? You would need that in any case so that the Host VM can access OP-TEE in Secure world through the Hypervisor. In the call sequence you have described, it seems that communication between the Guest VM and OP-TEE will now go via the Host VM. Could you please help me understand how that helps?
Routing Guest-VM-to-TEE data via the Host seems quite opposite to the direction of travel, where there is no trust between the Guest and Host, or between the Hypervisor and Host, as far as address space isolation goes. The Host now gets dibs on every message between the Guest and the TEE.
Virtio (as it stands) either requires the Guest to make its address space visible to the Host or bounce buffers in the Hypervisor. The former does not fly if address space isolation is the security goal (as above). The latter could run into performance issues but I am not an expert on this.
The approach we are working on is to replace a TEE specific driver in the Hypervisor with a driver that is agnostic of the TEE. This is achieved by standardising the role that the Hypervisor plays in communication between a Guest VM and the TEE. So you write the driver once and it works with all TEEs that follow the standard.
Hence my original question i.e. is this the problem you are looking to solve?
Cheers, Achin
From: Tee-dev tee-dev-bounces@lists.linaro.org on behalf of lchina77 lchina77@163.com Date: Friday, 20 November 2020 at 12:53 To: "tee-dev@lists.linaro.org" tee-dev@lists.linaro.org Subject: [Tee-dev] virtio device for OP-TEE
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
Hi, Achin
At 2020-11-21 02:32:47, "Achin Gupta" Achin.Gupta@arm.com wrote:
Hi Li Cheng,
Could you please elaborate the problem you are trying to solve?
Is the issue that it is difficult to integrate an OP-TEE-specific driver into a Hypervisor? You would need that in any case so that the Host VM can access OP-TEE in Secure world through the Hypervisor. In the call sequence you have described, it seems that communication between the Guest VM and OP-TEE will now go via the Host VM. Could you please help me understand how that helps?
In my case, the TEE-specific driver in the proprietary hypervisor ONLY supports the Host VM accessing OP-TEE; the Guest VM cannot. So we propose the virtio solution for the Guest VM to access OP-TEE.
Routing Guest-VM-to-TEE data via the Host seems quite opposite to the direction of travel, where there is no trust between the Guest and Host, or between the Hypervisor and Host, as far as address space isolation goes. The Host now gets dibs on every message between the Guest and the TEE.
Yes, but this is not a serious concern for us, because we are the provider of both the Host VM and the Guest VM, and all the secret data resides in OP-TEE.
Virtio (as it stands) either requires the Guest to make its address space visible to the Host, or bounce buffers in the Hypervisor. The former does not fly if address space isolation is the security goal (as above). The latter could run into performance issues, but I am not an expert on this.
The approach we are working on is to replace a TEE-specific driver in the Hypervisor with a driver that is agnostic of the TEE. This is achieved by standardising the role that the Hypervisor plays in communication between a Guest VM and the TEE. So you write the driver once and it works with all TEEs that follow the standard.
Where does your TEE-agnostic driver run: in the hypervisor, the Host VM, or the Guest VM? If the Guest VM can access OP-TEE with the help of the TEE-agnostic driver, is the address space isolation between the Host VM and the Guest VM still guaranteed?
Hence my original question i.e. is this the problem you are looking to solve?
We need to ensure that the Guest VM can access OP-TEE without depending on the TEE driver in the Hypervisor; this seems to be the same goal as your TEE-agnostic driver.
Hi Li Cheng,
On Sat, Nov 21, 2020 at 11:25:53PM +0800, lchina77 wrote:
Hi, Achin
At 2020-11-21 02:32:47, "Achin Gupta" Achin.Gupta@arm.com wrote:
Hi Li Cheng,
Could you please elaborate the problem you are trying to solve?
Is the issue that it is difficult to integrate an OP-TEE-specific driver into a Hypervisor? You would need that in any case so that the Host VM can access OP-TEE in Secure world through the Hypervisor. In the call sequence you have described, it seems that communication between the Guest VM and OP-TEE will now go via the Host VM. Could you please help me understand how that helps?
In my case, the TEE-specific driver in the proprietary hypervisor ONLY supports the Host VM accessing OP-TEE; the Guest VM cannot. So we propose the virtio solution for the Guest VM to access OP-TEE.
Thanks. I get it now.
Routing Guest-VM-to-TEE data via the Host seems quite opposite to the direction of travel, where there is no trust between the Guest and Host, or between the Hypervisor and Host, as far as address space isolation goes. The Host now gets dibs on every message between the Guest and the TEE.
Yes, but this is not a serious concern for us, because we are the provider of both the Host VM and the Guest VM, and all the secret data resides in OP-TEE.
Fair enough.
Virtio (as it stands) either requires the Guest to make its address space visible to the Host, or bounce buffers in the Hypervisor. The former does not fly if address space isolation is the security goal (as above). The latter could run into performance issues, but I am not an expert on this.
The approach we are working on is to replace a TEE-specific driver in the Hypervisor with a driver that is agnostic of the TEE. This is achieved by standardising the role that the Hypervisor plays in communication between a Guest VM and the TEE. So you write the driver once and it works with all TEEs that follow the standard.
Where does your TEE-agnostic driver run: in the hypervisor, the Host VM, or the Guest VM? If the Guest VM can access OP-TEE with the help of the TEE-agnostic driver, is the address space isolation between the Host VM and the Guest VM still guaranteed?
The TEE-agnostic driver resides in:
1. The Hypervisor in EL2. Its job is to:
   - Enable a Guest VM to share/unshare memory with a TEE
   - Forward SMC calls between a Guest VM and the TEE
2. The Guest VM. Its job is to:
   - Communicate with the driver in the Hypervisor to enable communication and memory management with the TEE as stated above
Address space isolation between the Guest and Host VMs is the Hypervisor's job anyways. The point of the TEE-agnostic driver is that memory management and message forwarding can be done in a generic way in the Hypervisor.
Hence my original question i.e. is this the problem you are looking to solve?
We need to ensure that the Guest VM can access OP-TEE without depending on the TEE driver in the Hypervisor; this seems to be the same goal as your TEE-agnostic driver.
Our goal is to avoid the need to integrate a TEE specific driver in the Hypervisor and in TF-A while allowing any VM to access the TEE.
In your case, it seems that the Hypervisor implements an access control policy where only the Host VM can talk to the TEE. The TEE-agnostic driver will not solve this problem as it could be subject to the same access control by the Hypervisor.
In any case, a more efficient approach would have been to:
1. Share memory between the Guest VM and OP-TEE for the data path.
2. Use the OP-TEE driver in the Host VM to issue SMCs to run OP-TEE i.e. implement the control path.
It looks like that is not possible either due to the restrictions imposed by the Hypervisor.
cheers, Achin
Hi Achin,
Could you please tell me which hypervisor your TEE-agnostic driver works on? And what do you think about a TEE-agnostic and Hypervisor-agnostic solution for the Guest VM to access OP-TEE? There are many hypervisors across all kinds of platforms, some of them closed-source. The performance may be a little poor, but the convenience would be a very big gain.
Best Regards, Li Cheng
At 2020-11-23 18:07:03, "Achin Gupta" achin.gupta@arm.com wrote: