If CRYPTO_DEV_CCP_DD=m and AMDTEE=y, the following error is seen
while building call.c or core.c:
drivers/tee/amdtee/call.o: In function `handle_unload_ta':
call.c:(.text+0x35f): undefined reference to `psp_tee_process_cmd'
drivers/tee/amdtee/core.o: In function `amdtee_driver_init':
core.c:(.init.text+0xf): undefined reference to `psp_check_tee_status'
Fix the config dependency for AMDTEE here.
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Fixes: 757cc3e9ff1d ("tee: add AMD-TEE driver")
Signed-off-by: Hongbo Yao <yaohongbo(a)huawei.com>
Reviewed-by: Rijo Thomas <Rijo-john.Thomas(a)amd.com>
---
drivers/tee/amdtee/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/tee/amdtee/Kconfig b/drivers/tee/amdtee/Kconfig
index 4e32b6413b41..191f9715fa9a 100644
--- a/drivers/tee/amdtee/Kconfig
+++ b/drivers/tee/amdtee/Kconfig
@@ -3,6 +3,6 @@
config AMDTEE
tristate "AMD-TEE"
default m
- depends on CRYPTO_DEV_SP_PSP
+ depends on CRYPTO_DEV_SP_PSP && CRYPTO_DEV_CCP_DD
help
This implements AMD's Trusted Execution Environment (TEE) driver.
--
2.20.1
Hi,
Here's a group of patches to the tee subsystem with cleanups and not so
urgent fixes related to tee shared memory.
- Unused parts of the shared memory object (struct tee_shm) are removed.
- Shared memory ids usable from user space are not assigned to driver
private shared memory objects.
- The TEE_SHM_USER_MAPPED flag is used instead of TEE_SHM_REGISTER to accurately
tell if a shared memory object originates from a user space mapping.
Only unused "features" should be removed with this patch set; there should
be no change in behaviour or breakage with other code.
Thanks,
Jens
Jens Wiklander (5):
tee: remove linked list of struct tee_shm
tee: remove unused tee_shm_priv_alloc()
tee: don't assign shm id for private shms
tee: remove redundant teedev in struct tee_shm
tee: tee_shm_op_mmap(): use TEE_SHM_USER_MAPPED
drivers/tee/tee_core.c | 1 -
drivers/tee/tee_private.h | 3 +-
drivers/tee/tee_shm.c | 85 +++++++++++----------------------------
include/linux/tee_drv.h | 19 +--------
4 files changed, 27 insertions(+), 81 deletions(-)
--
2.17.1
The optee driver uses specific page table types to verify if a memory
region is normal. These types are not defined in nommu systems. Trying
to compile the driver in these systems results in a build error:
linux/drivers/tee/optee/call.c: In function ‘is_normal_memory’:
linux/drivers/tee/optee/call.c:533:26: error: ‘L_PTE_MT_MASK’ undeclared
(first use in this function); did you mean ‘PREEMPT_MASK’?
return (pgprot_val(p) & L_PTE_MT_MASK) == L_PTE_MT_WRITEALLOC;
^~~~~~~~~~~~~
PREEMPT_MASK
linux/drivers/tee/optee/call.c:533:26: note: each undeclared identifier is
reported only once for each function it appears in
linux/drivers/tee/optee/call.c:533:44: error: ‘L_PTE_MT_WRITEALLOC’ undeclared
(first use in this function)
return (pgprot_val(p) & L_PTE_MT_MASK) == L_PTE_MT_WRITEALLOC;
^~~~~~~~~~~~~~~~~~~
Make the optee driver depend on MMU to fix the compilation issue.
Cc: Jens Wiklander <jens.wiklander(a)linaro.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino(a)arm.com>
---
drivers/tee/optee/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/tee/optee/Kconfig b/drivers/tee/optee/Kconfig
index d1ad512e1708..3ca71e3812ed 100644
--- a/drivers/tee/optee/Kconfig
+++ b/drivers/tee/optee/Kconfig
@@ -3,6 +3,7 @@
config OPTEE
tristate "OP-TEE"
depends on HAVE_ARM_SMCCC
+ depends on MMU
help
This implements the OP-TEE Trusted Execution Environment (TEE)
driver.
--
2.24.1
Hello arm-soc maintainers,
Please pull this OP-TEE driver fix with an added dependency on MMU in order
to avoid compile errors on nommu configurations.
Thanks,
Jens
The following changes since commit d1eef1c619749b2a57e514a3fa67d9a516ffa919:
Linux 5.5-rc2 (2019-12-15 15:16:08 -0800)
are available in the Git repository at:
https://git.linaro.org:/people/jens.wiklander/linux-tee.git tags/tee-optee-fix2-for-5.5
for you to fetch changes up to 9e0caab8e0f96f0af7d1dd388e62f44184a75372:
tee: optee: Fix compilation issue with nommu (2020-01-23 10:55:20 +0100)
----------------------------------------------------------------
Fix OP-TEE compile error with nommu
----------------------------------------------------------------
Vincenzo Frascino (1):
tee: optee: Fix compilation issue with nommu
drivers/tee/optee/Kconfig | 1 +
1 file changed, 1 insertion(+)
Hello community,
I want to discuss the next big feature for virtualized OP-TEE. As you
know, OP-TEE in virtualization mode currently does not support access to
real hardware like RPMB partitions, crypto accelerators and such, because
all TEE instances are equal and there is no locking mechanism (among
other problems).
I have written a small paper that discusses different approaches to this
problem, taking RPMB as an example. You can find this paper here:
https://xen-troops.github.io/papers/optee-virt-rpmb.pdf
For convenience of quoting and discussion, I'm posting its text below:
RPMB without virtualization
===========================
OP-TEE does not have direct access to the RPMB device because it is part
of the (e)MMC card, and this card is used mostly by the REE. Fortunately,
the RPMB specification employs HMAC to ensure that only trusted code can
read and write the RPMB partition, so it is perfectly fine to
communicate with RPMB through the Normal World. Here is how it happens:
OP-TEE -> Linux kernel -> Supplicant -> Linux kernel -> RPMB
The Linux kernel provides ioctls to communicate with the RPMB partition on
eMMC. The OP-TEE supplicant receives RPMB request RPCs from OP-TEE and uses
those ioctls to access the RPMB partition.
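For reference, here is a minimal user-space sketch of that ioctl path,
roughly what tee-supplicant does with MMC_IOC_MULTI_CMD from
<linux/mmc/ioctl.h>. It is simplified on purpose: the device node name is
an assumption, the command/response flags are left at zero, error handling
is omitted, and a data-write request would additionally need the
reliable-write bit and an extra result-read step.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/major.h>
#include <linux/mmc/ioctl.h>

#define MMC_WRITE_MULTIPLE_BLOCK 25   /* CMD25 */
#define MMC_READ_MULTIPLE_BLOCK  18   /* CMD18 */

/* Send one 512-byte RPMB request frame and read one response frame back. */
static int rpmb_xfer(int fd, const void *req, void *resp)
{
        struct {
                struct mmc_ioc_multi_cmd multi;
                struct mmc_ioc_cmd cmds[2];
        } mc;

        memset(&mc, 0, sizeof(mc));
        mc.multi.num_of_cmds = 2;

        /* CMD25: write the request frame to the RPMB area */
        mc.cmds[0].opcode = MMC_WRITE_MULTIPLE_BLOCK;
        mc.cmds[0].write_flag = 1;
        mc.cmds[0].blksz = 512;
        mc.cmds[0].blocks = 1;
        mmc_ioc_cmd_set_data(mc.cmds[0], req);

        /* CMD18: read the response frame back */
        mc.cmds[1].opcode = MMC_READ_MULTIPLE_BLOCK;
        mc.cmds[1].blksz = 512;
        mc.cmds[1].blocks = 1;
        mmc_ioc_cmd_set_data(mc.cmds[1], resp);

        return ioctl(fd, MMC_IOC_MULTI_CMD, &mc);
}

/* Usage: int fd = open("/dev/mmcblk0rpmb", O_RDWR); rpmb_xfer(fd, req, resp); */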
It should be noted that during initialization OP-TEE reads the size of the
RPMB partition and the write counter. Then, at runtime, it maintains the
write counter and returns an error if the expected write counter does not
match the one returned by RPMB. As OP-TEE is the only user of the RPMB,
this is perfectly fine.
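For context, the write counter travels in the 512-byte RPMB data frame
defined by the JEDEC eMMC specification. A rough C view of that frame is
shown below; the struct and field names are illustrative, not taken from
OP-TEE, and all multi-byte fields are big-endian on the wire.

#include <stdint.h>

/* 512-byte RPMB data frame (layout per JEDEC eMMC, names illustrative) */
struct rpmb_data_frame {
        uint8_t  stuff_bytes[196];
        uint8_t  key_mac[32];        /* HMAC-SHA256 over the frame contents */
        uint8_t  data[256];
        uint8_t  nonce[16];
        uint32_t write_counter;      /* monotonic counter checked on every write */
        uint16_t address;            /* half-sector (256-byte) address */
        uint16_t block_count;
        uint16_t op_result;
        uint16_t msg_type;           /* request/response type */
} __attribute__((packed));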
RPMB with virtualization
========================
Sharing a single RPMB partition
-------------------------------
Now suppose we want to share a single RPMB partition between multiple
guests. The simplest solution is to provide each guest with its own
RPMB-capable device, but in most cases the platform will have only one
eMMC device, so all guests have to share it in some way. Also, this
device will be physically available to only one guest; let’s call it the
“Driver Domain” or “DomD”. Writes to RPMB should be atomic, but the nature
of the RPMB driver in OP-TEE requires multiple writes if the RPMB device is
not capable of writing a big buffer at once. This leads to a series of
writes, with the write counter increasing for every write request, which
in turn leads to de-synchronization of the write counter value stored in
the different TEE instances. It could also lead to a race condition when
two or more TEE instances submit batches of write/read requests at the
same time.
Communication between OP-TEE instances
--------------------------------------
As other domains can’t work with the eMMC directly, we will need a PV
protocol to access a foreign RPMB partition. The problem is how to divide
the RPMB partition between the multiple guests. We want each OP-TEE
instance to operate on the physical RPMB partition, but on the other hand
we want to ensure maximum isolation between TEE instances. Clearly, we
need one designated TEE instance that is capable of accessing RPMB; this
is the instance associated with DomD. We also need some communication
method between TEE instances. As OP-TEE is scheduled by the Normal World,
we can’t let TEE instances communicate directly, because that way one
guest could lock up another guest.
So we need a secure way to exchange messages between TEE instances which
is still scheduled by the NW. This involves inter-guest communication in
the NW using mechanisms provided by the hypervisor. We do not need to send
actual data, only to signal to another TEE instance that there is a
request in some shared buffer. This is what is called a “doorbell”.
1. optee_domU writes the request to shm
2. optee_domU issues a "doorbell" RPC to its own supplicant (in domU)
3. supplicant in domU uses the standard Xen PV protocol to call the frontend in DomD
4. frontend in DomD calls the pTA of optee_DomD
5. optee_DomD reads the request from shm (written in p.1)
6. optee_DomD issues the RPMB request in the standard way
As you can see, the client TEE instance writes the request to a shared
buffer inside secure memory. Then it issues an RPC request back to its own
supplicant. The supplicant uses a hypervisor-provided mechanism to signal
the frontend server in DomD; in the case of Xen, inter-domain events can
be used. This is a common mechanism for implementing PV drivers in Xen,
and virtio provides similar signaling methods. The PV server in DomD opens
a session to a pTA in the OP-TEE instance that belongs to DomD. This pTA
accesses the previously written shared memory and reads the request from
DomU. Then it can handle the request, i.e. issue the RPMB command. This
provides a secure way to transmit requests from one TEE instance to
another without adding inter-domain locking mechanisms. Obviously, the
same mechanism can be used not only for RPMB requests but for any
inter-domain communication.
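To make steps 3-4 concrete, the server in DomD could reach the RPMB pTA
through the standard GlobalPlatform TEE Client API, along these lines. The
pTA UUID, command ID and parameter layout below are made-up placeholders
for the sake of the sketch, not an existing OP-TEE interface.

#include <stdint.h>
#include <string.h>
#include <tee_client_api.h>

/* Placeholder UUID and command ID for the hypothetical RPMB-proxy pTA. */
#define PTA_RPMB_PROXY_UUID \
        { 0x00000000, 0x0000, 0x0000, \
          { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 } }
#define PTA_RPMB_PROXY_CMD_HANDLE_REQUEST 1

/* Called in DomD when a doorbell arrives from a guest: tell optee_DomD's
 * pTA which guest rang and which shared buffer holds its request. */
static TEEC_Result forward_rpmb_request(uint32_t guest_id, uint32_t request_cookie)
{
        TEEC_UUID uuid = PTA_RPMB_PROXY_UUID;
        TEEC_Context ctx;
        TEEC_Session sess;
        TEEC_Operation op;
        TEEC_Result res;
        uint32_t origin;

        res = TEEC_InitializeContext(NULL, &ctx);
        if (res != TEEC_SUCCESS)
                return res;

        res = TEEC_OpenSession(&ctx, &sess, &uuid, TEEC_LOGIN_PUBLIC,
                               NULL, NULL, &origin);
        if (res != TEEC_SUCCESS)
                goto out_ctx;

        memset(&op, 0, sizeof(op));
        op.paramTypes = TEEC_PARAM_TYPES(TEEC_VALUE_INPUT, TEEC_NONE,
                                         TEEC_NONE, TEEC_NONE);
        op.params[0].value.a = guest_id;        /* which TEE instance asked */
        op.params[0].value.b = request_cookie;  /* identifies the shared buffer */

        /* The pTA reads the request from the secure shared buffer and
         * issues the real RPMB command on behalf of the guest. */
        res = TEEC_InvokeCommand(&sess, PTA_RPMB_PROXY_CMD_HANDLE_REQUEST,
                                 &op, &origin);

        TEEC_CloseSession(&sess);
out_ctx:
        TEEC_FinalizeContext(&ctx);
        return res;
}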
Partitioning RPMB Area
----------------------
The mechanism above answers the question “how to access RPMB from
multiple domains”. But there is another question: “how to share space on
the RPMB partition?”. Currently OP-TEE has a static number of domains
configured at compilation time (the ``CFG_VIRT_GUEST_COUNT`` configuration
option), so we can divide the RPMB partition into ``CFG_VIRT_GUEST_COUNT``
(equal) parts and grant every guest access to one of these. But how can we
make sure that guest A will have access to the partition for guest A, not
the one for guest B? We can’t rely on the VM ID assigned by the
hypervisor, at least because VMs can be restarted and will get a new ID
each time; IDs will also vary depending on VM creation order. Clearly we
need a persistent Virtual Machine ID (a GUID will be fine). But I can’t
see a way for a TEE instance to be sure that it communicates with the same
VM every boot, as the hypervisor could swap GUIDs between “good” and
“rogue” VMs. Actually, the same thing is possible with multiple CAs
accessing the same TA, as there is no mechanism to authenticate a CA. This
should be kept in mind while developing Trusted Applications.
So, as I see it, the RPMB partition should have a partition table
somewhere; the very beginning of the partition should be fine.
This table should hold at least the following values (a possible C layout
is sketched after the list):
- Magic value (might serve as the version number also)
- Total count of the partitions
- Table of partitions, each entry holding:
  - GUID of the VM
  - Offset of the partition in RPMB write blocks
  - Size of the partition in RPMB write blocks
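A possible on-RPMB layout of that table, assuming 16-byte GUIDs; the names
and sizes below are only a sketch for discussion:

#include <stdint.h>

#define RPMB_PT_MAGIC  0x52504d42        /* "RPMB"; doubles as a version marker */

struct rpmb_pt_entry {
        uint8_t  vm_guid[16];            /* persistent GUID of the VM */
        uint32_t offset_blocks;          /* partition start, in RPMB write blocks */
        uint32_t size_blocks;            /* partition size, in RPMB write blocks */
} __attribute__((packed));

struct rpmb_pt_header {
        uint32_t magic;                  /* RPMB_PT_MAGIC */
        uint32_t num_partitions;         /* total count of partitions */
        struct rpmb_pt_entry entries[];  /* one entry per guest */
} __attribute__((packed));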
Extending TEE virtualization API
================================
With the features above, we need to extend the virtualization-related
APIs. The ``OPTEE_SMC_VM_CREATED`` call should be extended with the
following information:
- GUID of the virtual machine
- Flag to indicate that this machine has access to the real RPMB partition
In the future the set of flags can be extended, for example to denote the
ability to access hardware accelerators.
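One possible encoding, purely for illustration, would pass the GUID and a
capability bitmap in otherwise unused SMC argument registers; none of this
is in the current optee_smc.h.

/*
 * Hypothetical extension of OPTEE_SMC_VM_CREATED (sketch only):
 *
 *   a0    OPTEE_SMC_VM_CREATED
 *   a1    hypervisor client ID of the new VM (as today)
 *   a2-a5 128-bit persistent VM GUID, 32 bits per register
 *   a6    capability flags for the new VM
 */
#define OPTEE_SMC_VM_CAP_RPMB   (1U << 0)  /* VM may access the real RPMB */
/* further flags could describe access to crypto accelerators, etc. */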
We also need a pTA for RPMB partition table management, so that the
hypervisor (or a trusted domain) can configure the table, e.g. assign
GUIDs to table entries. It is debatable whether it should also be able to
wipe out partitions to re-assign GUIDs later.
Another pTA, along with some shared memory mechanism, is needed to enable
communication between TEE instances as described in
subsection "Communication between OP-TEE instances".
So only one TEE instance should enable these two pTAs.
Virtual RPMB partition
======================
There is a completely different approach to RPMB available. The next
version of the virtio interface specification provides a virtio-rpmb
interface, which can be used to “forward” RPMB requests to another VM. The
same mechanism can be used to provide software-emulated RPMB partitions
inside some “trusted” domain. This approach requires absolutely no changes
in the OP-TEE code. We need to write a software emulator for RPMB like the
one used in the OP-TEE supplicant, but with a virtio-rpmb interface. Then
every TEE instance will see its own virtual RPMB device and work with it
as usual, provided that the kernel has the virtio-rpmb frontend code,
which will be implemented anyway as part of the virtio specification.
Actually, I believe it is possible to use the hardware RPMB to ensure
replay protection for the emulated RPMB devices; for example, we can store
hashes of the virtual RPMB data in the hardware-backed RPMB device.
Multiple hardware RPMB devices
==============================
If the platform has more than one RPMB partition (for example, NVMe
devices can support up to 8 independent RPMB partitions), then it is
possible to assign each RPMB partition to a different VM. There are two
possible ways.
Hardware partitioning
---------------------
Some platforms allow “giving” a certain VM exclusive access to a certain
device. For example, if the platform has 3 independent MMC controllers,
every controller can be assigned to its own VM, which will then have
exclusive access to that MMC. In this case no changes are needed in either
OP-TEE or the Linux driver; this is a matter of hypervisor configuration.
Virtio-rpmb assisted partitioning
---------------------------------
It is possible that the RPMB devices can’t be exclusively assigned to
VMs; for example, all RPMBs belong to one NVMe drive, or the platform
configuration has tightly bound MMC controllers, so there is no way to
assign them to different VMs.
In this case the virtio-rpmb protocol can be used. We can configure the
hypervisor to provide each VM with its own physical RPMB partition via
virtio-rpmb. This setup is similar to the one described in
section "Virtual RPMB partition", but with real HW instead of emulation.
Comparison of proposed approaches
=================================
The comparison is made in the form of a table; please see it in the
PDF version at https://xen-troops.github.io/papers/optee-virt-rpmb.pdf
--
Volodymyr Babchuk at EPAM
From: Jens Wiklander <jens.wiklander(a)linaro.org>
[ Upstream commit 03212e347f9443e524d6383c6806ac08295c1fb0 ]
Prior to this patch, the struct optee was fully initialized by the time
optee_enumerate_devices() was called in optee_probe(). If
optee_enumerate_devices() returns an error, optee_probe() is supposed to
clean up and free the struct optee completely, but at this late stage it
would need to call optee_remove() instead. This isn't done, and thus the
struct optee is freed prematurely.
With this patch, the call to optee_enumerate_devices() is done after
optee_probe() has returned successfully, and in case
optee_enumerate_devices() fails everything is cleaned up with a call to
optee_remove().
Fixes: c3fa24af9244 ("tee: optee: add TEE bus device enumeration support")
Reviewed-by: Sumit Garg <sumit.garg(a)linaro.org>
Signed-off-by: Jens Wiklander <jens.wiklander(a)linaro.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
drivers/tee/optee/core.c | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c
index 1854a3db7345..b830e0a87fba 100644
--- a/drivers/tee/optee/core.c
+++ b/drivers/tee/optee/core.c
@@ -643,11 +643,6 @@ static struct optee *optee_probe(struct device_node *np)
if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
pr_info("dynamic shared memory is enabled\n");
- rc = optee_enumerate_devices();
- if (rc)
- goto err;
-
- pr_info("initialized driver\n");
return optee;
err:
if (optee) {
@@ -702,9 +697,10 @@ static struct optee *optee_svc;
static int __init optee_driver_init(void)
{
- struct device_node *fw_np;
- struct device_node *np;
- struct optee *optee;
+ struct device_node *fw_np = NULL;
+ struct device_node *np = NULL;
+ struct optee *optee = NULL;
+ int rc = 0;
/* Node is supposed to be below /firmware */
fw_np = of_find_node_by_name(NULL, "firmware");
@@ -723,6 +719,14 @@ static int __init optee_driver_init(void)
if (IS_ERR(optee))
return PTR_ERR(optee);
+ rc = optee_enumerate_devices();
+ if (rc) {
+ optee_remove(optee);
+ return rc;
+ }
+
+ pr_info("initialized driver\n");
+
optee_svc = optee;
return 0;
--
2.20.1
From: Colin Ian King <colin.king(a)canonical.com>
Currently the memory allocation failure checks on drv_data and
amdtee are using IS_ERR rather than checking for a null pointer.
Fix these checks to use the conventional null pointer check.
Addresses-Coverity: ("Dereference null return")
Fixes: 757cc3e9ff1d ("tee: add AMD-TEE driver")
Signed-off-by: Colin Ian King <colin.king(a)canonical.com>
---
drivers/tee/amdtee/core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/tee/amdtee/core.c b/drivers/tee/amdtee/core.c
index 9d0cee1c837f..5fda810c79dc 100644
--- a/drivers/tee/amdtee/core.c
+++ b/drivers/tee/amdtee/core.c
@@ -444,11 +444,11 @@ static int __init amdtee_driver_init(void)
goto err_fail;
drv_data = kzalloc(sizeof(*drv_data), GFP_KERNEL);
- if (IS_ERR(drv_data))
+ if (!drv_data)
return -ENOMEM;
amdtee = kzalloc(sizeof(*amdtee), GFP_KERNEL);
- if (IS_ERR(amdtee)) {
+ if (!amdtee) {
rc = -ENOMEM;
goto err_kfree_drv_data;
}
--
2.24.0