The Trusted Services project provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The FF-A SPs are accessible through the FF-A driver, but that driver doesn't provide a user space interface. The goal of this TEE driver is to make Trusted Services SPs accessible to user space clients.
All TS SPs have the same FF-A UUID; it identifies the RPC protocol used by TS. A TS SP can host one or more services, each identified by its own service UUID. The same type of service cannot be present twice in the same SP. During SP boot each service in the SP gets assigned an interface ID; this is just a short ID to simplify message addressing. There is a 1:1 mapping between TS SPs and TEE devices, i.e. a TEE device gets registered for each Trusted Services compatible FF-A device found on the FF-A bus. Opening the SP corresponds to opening the TEE device and creating a TEE context. A TS SP hosts one or more services; opening a service corresponds to opening a session in the given tee_context.
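As an illustration of the message addressing: the interface ID and the service opcode are packed into the first FF-A direct message register, using the TS_RPC_CTRL_* masks defined in tstee_private.h below (the concrete interface ID and opcode values in this sketch are made-up examples):

#define OPCODE_MASK			GENMASK(15, 0)
#define IFACE_ID_MASK			GENMASK(23, 16)
#define TS_RPC_CTRL_PACK_IFACE_OPCODE(i, o) \
	(FIELD_PREP(IFACE_ID_MASK, (i)) | FIELD_PREP(OPCODE_MASK, (o)))

/*
 * Example: a service that was assigned interface ID 0x05 at SP boot,
 * invoked with opcode 0x0001, packs to 0x00050001 in the control
 * register. The management interface uses the reserved ID 0xff, so the
 * version query (opcode 0x0000) packs to 0x00ff0000.
 */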
Signed-off-by: Balint Dobszay <balint.dobszay@arm.com> ---
Hi All,
This is an initial TEE driver implementation for Trusted Services [1]. Trusted Services is a TrustedFirmware.org project that provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The project hosts the reference implementation of Arm Platform Security Architecture [2] for Arm A-profile devices.
The FF-A Secure Partitions (SPs) are accessible through the FF-A driver in Linux. However, the FF-A driver doesn't have a user space interface so user space clients currently cannot access Trusted Services. The goal of this TEE driver is to bridge this gap and make Trusted Services functionality accessible from user space.
The predecessor of this driver is an out-of-tree kernel module that we have been using for prototyping [3]. An integration of Trusted Services, OP-TEE as an S-EL1 SPMC, TF-A, Linux and this out-of-tree module is available as part of the OP-TEE project; please find the manifest file at [4].
Some key points about the structure of Trusted Services (for more details please see the documentation at [5]):
- A Trusted Services SP can host one or multiple services. All TS SPs have the same FF-A UUID; it identifies the RPC protocol used by TS. The protocol's register ABI definition is available at [6] and is implemented in tstee_private.h.
- A service hosted by an SP is identified by its service UUID. The same type of service can be present in multiple SPs, but not twice in the same SP. During SP boot each service in the SP gets assigned an interface ID; this is just a short ID to simplify message addressing.
- There is a 1:1 mapping between TS SPs and TEE devices, i.e. a TEE device gets registered for each Trusted Services compatible FF-A device found on the FF-A bus.
- Opening the SP corresponds to opening the TEE device and creating a TEE context. A TS SP hosts one or more services; opening a service corresponds to opening a session in the given tee_context. A rough user space sketch of this flow is shown below.
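To make the last point concrete, a rough user space sketch using the generic TEE uapi (linux/tee.h) is shown here. This is an illustration only, not part of the patch: the device node path is an assumption (whichever /dev/teeN this driver registered), the service UUID is a placeholder for whatever service the client wants to reach, and error handling is mostly omitted.

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/tee.h>

/* Open the TEE device backing a TS SP and open a session to one service. */
static int open_ts_service(const char *devname, const uint8_t service_uuid[16],
			   uint32_t *session)
{
	struct tee_ioctl_version_data vers = { };
	struct tee_ioctl_open_session_arg arg = { };
	struct tee_ioctl_buf_data buf = {
		.buf_ptr = (uintptr_t)&arg,
		.buf_len = sizeof(arg),
	};
	int fd;

	fd = open(devname, O_RDWR);	/* e.g. "/dev/tee0" */
	if (fd < 0)
		return -1;

	/* impl_id identifies the driver, impl_caps carries the SP endpoint ID */
	ioctl(fd, TEE_IOC_VERSION, &vers);

	/* The service UUID (not the common FF-A UUID) selects the service */
	memcpy(arg.uuid, service_uuid, sizeof(arg.uuid));
	arg.num_params = 0;
	if (ioctl(fd, TEE_IOC_OPEN_SESSION, &buf))
		return -1;

	*session = arg.session;
	return fd;
}

A request buffer would then be allocated with TEE_IOC_SHM_ALLOC and the call issued with TEE_IOC_INVOKE, with param[0] carrying the shared memory ID and request length as tstee_invoke_func() below expects.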
Regards, Balint
[1] https://www.trustedfirmware.org/projects/trusted-services/
[2] https://www.arm.com/architecture/security-features/platform-security
[3] https://gitlab.arm.com/linux-arm/linux-trusted-services
[4] https://github.com/OP-TEE/manifest/blob/master/fvp-ts.xml
[5] https://trusted-services.readthedocs.io
[6] https://trusted-services.readthedocs.io/en/integration/developer/service-acc...
drivers/tee/Kconfig | 1 + drivers/tee/Makefile | 1 + drivers/tee/tstee/Kconfig | 11 + drivers/tee/tstee/Makefile | 3 + drivers/tee/tstee/core.c | 473 ++++++++++++++++++++++++++++++ drivers/tee/tstee/shm_pool.c | 91 ++++++ drivers/tee/tstee/tstee_private.h | 97 ++++++ include/uapi/linux/tee.h | 1 + 8 files changed, 678 insertions(+) create mode 100644 drivers/tee/tstee/Kconfig create mode 100644 drivers/tee/tstee/Makefile create mode 100644 drivers/tee/tstee/core.c create mode 100644 drivers/tee/tstee/shm_pool.c create mode 100644 drivers/tee/tstee/tstee_private.h
diff --git a/drivers/tee/Kconfig b/drivers/tee/Kconfig index 73a147202e88..61b507c18780 100644 --- a/drivers/tee/Kconfig +++ b/drivers/tee/Kconfig @@ -15,5 +15,6 @@ if TEE
source "drivers/tee/optee/Kconfig" source "drivers/tee/amdtee/Kconfig" +source "drivers/tee/tstee/Kconfig"
endif diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile index 68da044afbfa..5488cba30bd2 100644 --- a/drivers/tee/Makefile +++ b/drivers/tee/Makefile @@ -5,3 +5,4 @@ tee-objs += tee_shm.o tee-objs += tee_shm_pool.o obj-$(CONFIG_OPTEE) += optee/ obj-$(CONFIG_AMDTEE) += amdtee/ +obj-$(CONFIG_ARM_TSTEE) += tstee/ diff --git a/drivers/tee/tstee/Kconfig b/drivers/tee/tstee/Kconfig new file mode 100644 index 000000000000..cd8d32586a77 --- /dev/null +++ b/drivers/tee/tstee/Kconfig @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: GPL-2.0-only +config ARM_TSTEE + tristate "Arm Trusted Services TEE driver" + depends on ARM_FFA_TRANSPORT + default n + help + The Trusted Services project provides a framework for developing and + deploying device Root Of Trust services in FF-A Secure Partitions. + This driver provides an interface to make Trusted Services Secure + Partitions accessible for user space clients, since the FF-A driver + doesn't implement a user space interface directly. diff --git a/drivers/tee/tstee/Makefile b/drivers/tee/tstee/Makefile new file mode 100644 index 000000000000..dd54b1f88652 --- /dev/null +++ b/drivers/tee/tstee/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0-only +arm-tstee-objs := core.o shm_pool.o +obj-$(CONFIG_ARM_TSTEE) = arm-tstee.o diff --git a/drivers/tee/tstee/core.c b/drivers/tee/tstee/core.c new file mode 100644 index 000000000000..c0194638b7da --- /dev/null +++ b/drivers/tee/tstee/core.c @@ -0,0 +1,473 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2023, Arm Limited + */ + +#define DRIVER_NAME "Arm TSTEE" +#define pr_fmt(fmt) DRIVER_NAME ": " fmt + +#include <linux/arm_ffa.h> +#include <linux/err.h> +#include <linux/errno.h> +#include <linux/kernel.h> +#include <linux/limits.h> +#include <linux/list.h> +#include <linux/mm.h> +#include <linux/module.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/stat.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uaccess.h> + +#include "tstee_private.h" + +#define FFA_INVALID_MEM_HANDLE U64_MAX + +static void arg_list_to_ffa_data(const u32 *args, struct ffa_send_direct_data *data) +{ + data->data0 = args[0]; + data->data1 = args[1]; + data->data2 = args[2]; + data->data3 = args[3]; + data->data4 = args[4]; +} + +static void arg_list_from_ffa_data(const struct ffa_send_direct_data *data, u32 *args) +{ + args[0] = lower_32_bits(data->data0); + args[1] = lower_32_bits(data->data1); + args[2] = lower_32_bits(data->data2); + args[3] = lower_32_bits(data->data3); + args[4] = lower_32_bits(data->data4); +} + +static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers) +{ + struct tstee *tstee = tee_get_drvdata(teedev); + struct tee_ioctl_version_data v = { + .impl_id = TEE_IMPL_ID_TSTEE, + .impl_caps = tstee->ffa_dev->vm_id, + .gen_caps = 0, + }; + + *vers = v; +} + +static int tstee_open(struct tee_context *ctx) +{ + struct ts_context_data *ctxdata; + + ctxdata = kzalloc(sizeof(*ctxdata), GFP_KERNEL); + if (!ctxdata) + return -ENOMEM; + + mutex_init(&ctxdata->mutex); + idr_init(&ctxdata->sess_ids); + INIT_LIST_HEAD(&ctxdata->sess_list); + + ctx->data = ctxdata; + + return 0; +} + +static void tstee_release(struct tee_context *ctx) +{ + struct ts_context_data *ctxdata = ctx->data; + struct ts_session *sess, *sess_tmp; + + if (!ctxdata) + return; + + list_for_each_entry_safe(sess, sess_tmp, &ctxdata->sess_list, list_node) { + list_del(&sess->list_node); + idr_remove(&ctxdata->sess_ids, sess->session_id); + 
kfree(sess); + } + + idr_destroy(&ctxdata->sess_ids); + mutex_destroy(&ctxdata->mutex); + + kfree(ctxdata); + ctx->data = NULL; +} + +static struct ts_session *find_session(struct ts_context_data *ctxdata, u32 session_id) +{ + struct ts_session *sess; + + list_for_each_entry(sess, &ctxdata->sess_list, list_node) + if (sess->session_id == session_id) + return sess; + + return NULL; +} + +static int tstee_open_session(struct tee_context *ctx, struct tee_ioctl_open_session_arg *arg, + struct tee_param *param __always_unused) +{ + struct tstee *tstee = tee_get_drvdata(ctx->teedev); + struct ffa_device *ffa_dev = tstee->ffa_dev; + struct ts_context_data *ctxdata = ctx->data; + struct ffa_send_direct_data ffa_data; + struct ts_session *sess = NULL; + u32 ffa_args[5] = {}; + int sess_id; + int rc; + + ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID, + TS_RPC_OP_SERVICE_INFO); + + memcpy((u8 *)(ffa_args + TS_RPC_SERVICE_INFO_UUID0), arg->uuid, UUID_SIZE); + + arg_list_to_ffa_data(ffa_args, &ffa_data); + rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data); + if (rc) + return rc; + + arg_list_from_ffa_data(&ffa_data, ffa_args); + + if (ffa_args[TS_RPC_SERVICE_INFO_RPC_STATUS] != TS_RPC_OK) + return -ENODEV; + + if (ffa_args[TS_RPC_SERVICE_INFO_IFACE] > U8_MAX) + return -EINVAL; + + sess = kzalloc(sizeof(*sess), GFP_KERNEL); + if (!sess) + return -ENOMEM; + + sess_id = idr_alloc(&ctxdata->sess_ids, sess, 1, 0, GFP_KERNEL); + if (sess_id < 0) { + kfree(sess); + return sess_id; + } + + sess->session_id = sess_id; + sess->iface_id = ffa_args[TS_RPC_SERVICE_INFO_IFACE]; + + mutex_lock(&ctxdata->mutex); + list_add(&sess->list_node, &ctxdata->sess_list); + mutex_unlock(&ctxdata->mutex); + + arg->session = sess_id; + arg->ret = 0; + + return 0; +} + +static int tstee_close_session(struct tee_context *ctx, u32 session) +{ + struct ts_context_data *ctxdata = ctx->data; + struct ts_session *sess; + + mutex_lock(&ctxdata->mutex); + sess = find_session(ctxdata, session); + if (sess) + list_del(&sess->list_node); + + mutex_unlock(&ctxdata->mutex); + + if (!sess) + return -EINVAL; + + idr_remove(&ctxdata->sess_ids, sess->session_id); + kfree(sess); + + return 0; +} + +static int tstee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, + struct tee_param *param) +{ + struct tstee *tstee = tee_get_drvdata(ctx->teedev); + struct ffa_device *ffa_dev = tstee->ffa_dev; + struct ts_context_data *ctxdata = ctx->data; + struct ffa_send_direct_data ffa_data; + struct tee_shm *shm = NULL; + struct ts_session *sess; + u32 req_len, ffa_args[5] = {}; + int shm_id, rc; + u8 iface_id; + u64 handle; + u16 opcode; + + mutex_lock(&ctxdata->mutex); + sess = find_session(ctxdata, arg->session); + + /* Do this while holding the mutex to make sure that the session wasn't closed meanwhile */ + if (sess) + iface_id = sess->iface_id; + + mutex_unlock(&ctxdata->mutex); + if (!sess) + return -EINVAL; + + opcode = lower_16_bits(arg->func); + shm_id = lower_32_bits(param[0].u.value.a); + req_len = lower_32_bits(param[0].u.value.b); + + if (shm_id != 0) { + shm = tee_shm_get_from_id(ctx, shm_id); + if (IS_ERR(shm)) + return PTR_ERR(shm); + + if (shm->size < req_len) { + pr_err("request doesn't fit into shared memory buffer\n"); + rc = -EINVAL; + goto out; + } + + handle = shm->sec_world_id; + } else { + handle = FFA_INVALID_MEM_HANDLE; + } + + ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(iface_id, opcode); + ffa_args[TS_RPC_SERVICE_MEM_HANDLE_LSW] = 
lower_32_bits(handle); + ffa_args[TS_RPC_SERVICE_MEM_HANDLE_MSW] = upper_32_bits(handle); + ffa_args[TS_RPC_SERVICE_REQ_LEN] = req_len; + ffa_args[TS_RPC_SERVICE_CLIENT_ID] = 0; + + arg_list_to_ffa_data(ffa_args, &ffa_data); + rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data); + if (rc) + goto out; + + arg_list_from_ffa_data(&ffa_data, ffa_args); + + if (ffa_args[TS_RPC_SERVICE_RPC_STATUS] != TS_RPC_OK) { + pr_err("invoke_func rpc status: %d\n", ffa_args[TS_RPC_SERVICE_RPC_STATUS]); + rc = -EINVAL; + goto out; + } + + arg->ret = ffa_args[TS_RPC_SERVICE_STATUS]; + if (shm && shm->size >= ffa_args[TS_RPC_SERVICE_RESP_LEN]) + param[0].u.value.a = ffa_args[TS_RPC_SERVICE_RESP_LEN]; + +out: + if (shm) + tee_shm_put(shm); + + return rc; +} + +static int tstee_cancel_req(struct tee_context *ctx __always_unused, u32 cancel_id __always_unused, + u32 session __always_unused) +{ + return -EINVAL; +} + +int tstee_shm_register(struct tee_context *ctx, struct tee_shm *shm, struct page **pages, + size_t num_pages, unsigned long start __always_unused) +{ + struct tstee *tstee = tee_get_drvdata(ctx->teedev); + struct ffa_device *ffa_dev = tstee->ffa_dev; + struct ffa_mem_region_attributes mem_attr = { + .receiver = tstee->ffa_dev->vm_id, + .attrs = FFA_MEM_RW, + .flag = 0, + }; + struct ffa_mem_ops_args mem_args = { + .attrs = &mem_attr, + .use_txbuf = true, + .nattrs = 1, + .flags = 0, + }; + struct ffa_send_direct_data ffa_data; + struct sg_table sgt; + u32 ffa_args[5] = {}; + int rc; + + rc = sg_alloc_table_from_pages(&sgt, pages, num_pages, 0, num_pages * PAGE_SIZE, + GFP_KERNEL); + if (rc) + return rc; + + mem_args.sg = sgt.sgl; + rc = ffa_dev->ops->mem_ops->memory_share(&mem_args); + sg_free_table(&sgt); + if (rc) + return rc; + + shm->sec_world_id = mem_args.g_handle; + + ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID, + TS_RPC_OP_RETRIEVE_MEM); + ffa_args[TS_RPC_RETRIEVE_MEM_HANDLE_LSW] = lower_32_bits(shm->sec_world_id); + ffa_args[TS_RPC_RETRIEVE_MEM_HANDLE_MSW] = upper_32_bits(shm->sec_world_id); + ffa_args[TS_RPC_RETRIEVE_MEM_TAG_LSW] = 0; + ffa_args[TS_RPC_RETRIEVE_MEM_TAG_MSW] = 0; + + arg_list_to_ffa_data(ffa_args, &ffa_data); + rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data); + if (rc) { + (void)ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0); + return rc; + } + + arg_list_from_ffa_data(&ffa_data, ffa_args); + + if (ffa_args[TS_RPC_RETRIEVE_MEM_RPC_STATUS] != TS_RPC_OK) { + pr_err("shm_register rpc status: %d\n", ffa_args[TS_RPC_RETRIEVE_MEM_RPC_STATUS]); + ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0); + return -EINVAL; + } + + return 0; +} + +int tstee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm) +{ + struct tstee *tstee = tee_get_drvdata(ctx->teedev); + struct ffa_device *ffa_dev = tstee->ffa_dev; + struct ffa_send_direct_data ffa_data; + u32 ffa_args[5] = {}; + int rc; + + ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID, + TS_RPC_OP_RELINQ_MEM); + ffa_args[TS_RPC_RELINQ_MEM_HANDLE_LSW] = lower_32_bits(shm->sec_world_id); + ffa_args[TS_RPC_RELINQ_MEM_HANDLE_MSW] = upper_32_bits(shm->sec_world_id); + + arg_list_to_ffa_data(ffa_args, &ffa_data); + rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data); + if (rc) + return rc; + arg_list_from_ffa_data(&ffa_data, ffa_args); + + if (ffa_args[TS_RPC_RELINQ_MEM_RPC_STATUS] != TS_RPC_OK) { + pr_err("shm_unregister rpc status: %d\n", ffa_args[TS_RPC_RELINQ_MEM_RPC_STATUS]); + return -EINVAL; + } + + 
rc = ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0); + + return rc; +} + +static const struct tee_driver_ops tstee_ops = { + .get_version = tstee_get_version, + .open = tstee_open, + .release = tstee_release, + .open_session = tstee_open_session, + .close_session = tstee_close_session, + .invoke_func = tstee_invoke_func, + .cancel_req = tstee_cancel_req, + .shm_register = tstee_shm_register, + .shm_unregister = tstee_shm_unregister, +}; + +static const struct tee_desc tstee_desc = { + .name = "tstee-clnt", + .ops = &tstee_ops, + .owner = THIS_MODULE, +}; + +static bool tstee_check_rpc_compatible(struct ffa_device *ffa_dev) +{ + struct ffa_send_direct_data ffa_data; + u32 ffa_args[5] = {}; + + ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID, + TS_RPC_OP_GET_VERSION); + + arg_list_to_ffa_data(ffa_args, &ffa_data); + if (ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data)) + return false; + + arg_list_from_ffa_data(&ffa_data, ffa_args); + + return ffa_args[TS_RPC_GET_VERSION_RESP] == TS_RPC_PROTOCOL_VERSION; +} + +static void tstee_deinit_common(struct tstee *tstee) +{ + tee_device_unregister(tstee->teedev); + if (tstee->pool) + tee_shm_pool_free(tstee->pool); + + kfree(tstee); +} + +static int tstee_probe(struct ffa_device *ffa_dev) +{ + struct tstee *tstee; + int rc; + + ffa_dev->ops->msg_ops->mode_32bit_set(ffa_dev); + + if (!tstee_check_rpc_compatible(ffa_dev)) + return -EINVAL; + + tstee = kzalloc(sizeof(*tstee), GFP_KERNEL); + if (!tstee) + return -ENOMEM; + + tstee->ffa_dev = ffa_dev; + + tstee->pool = tstee_create_shm_pool(); + if (IS_ERR(tstee->pool)) { + rc = PTR_ERR(tstee->pool); + tstee->pool = NULL; + goto err; + } + + tstee->teedev = tee_device_alloc(&tstee_desc, NULL, tstee->pool, tstee); + if (IS_ERR(tstee->teedev)) { + rc = PTR_ERR(tstee->teedev); + tstee->teedev = NULL; + goto err; + } + + rc = tee_device_register(tstee->teedev); + if (rc) + goto err; + + ffa_dev_set_drvdata(ffa_dev, tstee); + + pr_info("driver initialized for endpoint 0x%x\n", ffa_dev->vm_id); + + return 0; + +err: + tstee_deinit_common(tstee); + return rc; +} + +static void tstee_remove(struct ffa_device *ffa_dev) +{ + tstee_deinit_common(ffa_dev->dev.driver_data); +} + +static const struct ffa_device_id tstee_device_ids[] = { + /* TS RPC protocol UUID: bdcd76d7-825e-4751-963b-86d4f84943ac */ + { TS_RPC_UUID }, + {} +}; + +static struct ffa_driver tstee_driver = { + .name = "arm_tstee", + .probe = tstee_probe, + .remove = tstee_remove, + .id_table = tstee_device_ids, +}; + +static int __init mod_init(void) +{ + return ffa_register(&tstee_driver); +} +module_init(mod_init) + +static void __exit mod_exit(void) +{ + ffa_unregister(&tstee_driver); +} +module_exit(mod_exit) + +MODULE_ALIAS("arm-tstee"); +MODULE_AUTHOR("Balint Dobszay balint.dobszay@arm.com"); +MODULE_DESCRIPTION("Arm Trusted Services TEE driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/tee/tstee/shm_pool.c b/drivers/tee/tstee/shm_pool.c new file mode 100644 index 000000000000..518c10dd0735 --- /dev/null +++ b/drivers/tee/tstee/shm_pool.c @@ -0,0 +1,91 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2023, Arm Limited + */ + +#include <linux/arm_ffa.h> +#include <linux/device.h> +#include <linux/dma-buf.h> +#include <linux/genalloc.h> +#include <linux/slab.h> +#include <linux/tee_drv.h> + +#include "tstee_private.h" + +static int pool_op_alloc(struct tee_shm_pool *pool __always_unused, struct tee_shm *shm, + size_t size, size_t align __always_unused) +{ + unsigned int 
order, nr_pages, i; + struct page *page, **pages; + int rc; + + if (size == 0) + return -EINVAL; + + order = get_order(size); + nr_pages = 1 << order; + + page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order); + if (!page) + return -ENOMEM; + + shm->kaddr = page_address(page); + shm->paddr = page_to_phys(page); + shm->size = PAGE_SIZE << order; + + pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL); + if (!pages) { + rc = -ENOMEM; + goto err; + } + + for (i = 0; i < nr_pages; i++) + pages[i] = page + i; + + rc = tstee_shm_register(shm->ctx, shm, pages, nr_pages, (unsigned long)shm->kaddr); + kfree(pages); + if (rc) + goto err; + + return 0; + +err: + __free_pages(page, order); + return rc; +} + +static void pool_op_free(struct tee_shm_pool *pool __always_unused, struct tee_shm *shm) +{ + int rc; + + rc = tstee_shm_unregister(shm->ctx, shm); + if (rc) + pr_err("shm id 0x%llx unregister rc %d\n", shm->sec_world_id, rc); + + free_pages((unsigned long)shm->kaddr, get_order(shm->size)); + shm->kaddr = NULL; +} + +static void pool_op_destroy_pool(struct tee_shm_pool *pool) +{ + kfree(pool); +} + +static const struct tee_shm_pool_ops pool_ops = { + .alloc = pool_op_alloc, + .free = pool_op_free, + .destroy_pool = pool_op_destroy_pool, +}; + +struct tee_shm_pool *tstee_create_shm_pool(void) +{ + struct tee_shm_pool *pool; + + pool = kzalloc(sizeof(*pool), GFP_KERNEL); + if (!pool) + return ERR_PTR(-ENOMEM); + + pool->ops = &pool_ops; + + return pool; +} diff --git a/drivers/tee/tstee/tstee_private.h b/drivers/tee/tstee/tstee_private.h new file mode 100644 index 000000000000..2ea266804ccf --- /dev/null +++ b/drivers/tee/tstee/tstee_private.h @@ -0,0 +1,97 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2023, Arm Limited + */ + +#ifndef TSTEE_PRIVATE_H +#define TSTEE_PRIVATE_H + +#include <linux/arm_ffa.h> +#include <linux/bitops.h> +#include <linux/idr.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uuid.h> + +/* UUID of this protocol */ +#define TS_RPC_UUID UUID_INIT(0xbdcd76d7, 0x825e, 0x4751, \ + 0x96, 0x3b, 0x86, 0xd4, 0xf8, 0x49, 0x43, 0xac) + +/* Protocol version*/ +#define TS_RPC_PROTOCOL_VERSION (1) + +/* Status codes */ +#define TS_RPC_OK (0) + +/* RPC control register */ +#define TS_RPC_CTRL_REG (0) +#define OPCODE_MASK GENMASK(15, 0) +#define IFACE_ID_MASK GENMASK(23, 16) +#define TS_RPC_CTRL_OPCODE(x) ((u16)(FIELD_GET(OPCODE_MASK, (x)))) +#define TS_RPC_CTRL_IFACE_ID(x) ((u8)(FIELD_GET(IFACE_ID_MASK, (x)))) +#define TS_RPC_CTRL_PACK_IFACE_OPCODE(i, o) \ + (FIELD_PREP(IFACE_ID_MASK, (i)) | FIELD_PREP(OPCODE_MASK, (o))) +#define TS_RPC_CTRL_SAP_RC BIT(30) +#define TS_RPC_CTRL_SAP_ERR BIT(31) + +/* Interface ID for RPC management operations */ +#define TS_RPC_MGMT_IFACE_ID (0xff) + +/* Management calls */ +#define TS_RPC_OP_GET_VERSION (0x0000) +#define TS_RPC_GET_VERSION_RESP (1) + +#define TS_RPC_OP_RETRIEVE_MEM (0x0001) +#define TS_RPC_RETRIEVE_MEM_HANDLE_LSW (1) +#define TS_RPC_RETRIEVE_MEM_HANDLE_MSW (2) +#define TS_RPC_RETRIEVE_MEM_TAG_LSW (3) +#define TS_RPC_RETRIEVE_MEM_TAG_MSW (4) +#define TS_RPC_RETRIEVE_MEM_RPC_STATUS (1) + +#define TS_RPC_OP_RELINQ_MEM (0x0002) +#define TS_RPC_RELINQ_MEM_HANDLE_LSW (1) +#define TS_RPC_RELINQ_MEM_HANDLE_MSW (2) +#define TS_RPC_RELINQ_MEM_RPC_STATUS (1) + +#define TS_RPC_OP_SERVICE_INFO (0x0003) +#define TS_RPC_SERVICE_INFO_UUID0 (1) +#define TS_RPC_SERVICE_INFO_UUID1 (2) +#define TS_RPC_SERVICE_INFO_UUID2 (3) +#define TS_RPC_SERVICE_INFO_UUID3 (4) +#define TS_RPC_SERVICE_INFO_RPC_STATUS (1) 
+#define TS_RPC_SERVICE_INFO_IFACE (2) + +/* Service call */ +#define TS_RPC_SERVICE_MEM_HANDLE_LSW (1) +#define TS_RPC_SERVICE_MEM_HANDLE_MSW (2) +#define TS_RPC_SERVICE_REQ_LEN (3) +#define TS_RPC_SERVICE_CLIENT_ID (4) +#define TS_RPC_SERVICE_RPC_STATUS (1) +#define TS_RPC_SERVICE_STATUS (2) +#define TS_RPC_SERVICE_RESP_LEN (3) + +struct tstee { + struct ffa_device *ffa_dev; + struct tee_device *teedev; + struct tee_shm_pool *pool; +}; + +struct ts_session { + struct list_head list_node; + u32 session_id; + u8 iface_id; +}; + +struct ts_context_data { + struct list_head sess_list; + struct idr sess_ids; + /* Serializes access to this struct */ + struct mutex mutex; +}; + +struct tee_shm_pool *tstee_create_shm_pool(void); +int tstee_shm_register(struct tee_context *ctx, struct tee_shm *shm, struct page **pages, + size_t num_pages, unsigned long start); +int tstee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm); + +#endif /* TSTEE_PRIVATE_H */ diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h index 23e57164693c..d0430bee8292 100644 --- a/include/uapi/linux/tee.h +++ b/include/uapi/linux/tee.h @@ -56,6 +56,7 @@ */ #define TEE_IMPL_ID_OPTEE 1 #define TEE_IMPL_ID_AMDTEE 2 +#define TEE_IMPL_ID_TSTEE 3
/* * OP-TEE specific capabilities -- 2.34.1
Hi Balint,
On Wed, Sep 27, 2023 at 05:21:46PM +0200, Balint Dobszay wrote:
The Trusted Services project provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The FF-A SPs are accessible through the FF-A driver, but that driver doesn't provide a user space interface. The goal of this TEE driver is to make Trusted Services SPs accessible to user space clients.
All TS SPs have the same FF-A UUID; it identifies the RPC protocol used by TS. A TS SP can host one or more services, each identified by its own service UUID. The same type of service cannot be present twice in the same SP. During SP boot each service in the SP gets assigned an interface ID; this is just a short ID to simplify message addressing. There is a 1:1 mapping between TS SPs and TEE devices, i.e. a TEE device gets registered for each Trusted Services compatible FF-A device found on the FF-A bus. Opening the SP corresponds to opening the TEE device and creating a TEE context. A TS SP hosts one or more services; opening a service corresponds to opening a session in the given tee_context.
Signed-off-by: Balint Dobszay <balint.dobszay@arm.com>
Hi All,
This is an initial TEE driver implementation for Trusted Services [1]. Trusted Services is a TrustedFirmware.org project that provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The project hosts the reference implementation of Arm Platform Security Architecture [2] for Arm A-profile devices.
The FF-A Secure Partitions (SPs) are accessible through the FF-A driver in Linux. However, the FF-A driver doesn't have a user space interface so user space clients currently cannot access Trusted Services. The goal of this TEE driver is to bridge this gap and make Trusted Services functionality accessible from user space.
The predecessor of this driver is an out-of-tree kernel module that we have been using for prototyping [3]. An integration of Trusted Services, OP-TEE as an S-EL1 SPMC, TF-A, Linux and this out-of-tree module is available as part of the OP-TEE project; please find the manifest file at [4].
Some key points about the structure of Trusted Services (for more details please see the documentation at [5]):
- A Trusted Services SP can host one or multiple services. All TS SPs have the same FF-A UUID; it identifies the RPC protocol used by TS. The protocol's register ABI definition is available at [6] and is implemented in tstee_private.h.
- A service hosted by an SP is identified by its service UUID. The same type of service can be present in multiple SPs, but not twice in the same SP. During SP boot each service in the SP gets assigned an interface ID; this is just a short ID to simplify message addressing.
- There is a 1:1 mapping between TS SPs and TEE devices, i.e. a TEE device gets registered for each Trusted Services compatible FF-A device found on the FF-A bus.
- Opening the SP corresponds to opening the TEE device and creating a TEE context. A TS SP hosts one or more services; opening a service corresponds to opening a session in the given tee_context.
Regards, Balint
[1] https://www.trustedfirmware.org/projects/trusted-services/
[2] https://www.arm.com/architecture/security-features/platform-security
[3] https://gitlab.arm.com/linux-arm/linux-trusted-services
[4] https://github.com/OP-TEE/manifest/blob/master/fvp-ts.xml
[5] https://trusted-services.readthedocs.io
[6] https://trusted-services.readthedocs.io/en/integration/developer/service-acc...
drivers/tee/Kconfig | 1 + drivers/tee/Makefile | 1 + drivers/tee/tstee/Kconfig | 11 + drivers/tee/tstee/Makefile | 3 + drivers/tee/tstee/core.c | 473 ++++++++++++++++++++++++++++++ drivers/tee/tstee/shm_pool.c | 91 ++++++ drivers/tee/tstee/tstee_private.h | 97 ++++++ include/uapi/linux/tee.h | 1 + 8 files changed, 678 insertions(+) create mode 100644 drivers/tee/tstee/Kconfig create mode 100644 drivers/tee/tstee/Makefile create mode 100644 drivers/tee/tstee/core.c create mode 100644 drivers/tee/tstee/shm_pool.c create mode 100644 drivers/tee/tstee/tstee_private.h
diff --git a/drivers/tee/Kconfig b/drivers/tee/Kconfig index 73a147202e88..61b507c18780 100644 --- a/drivers/tee/Kconfig +++ b/drivers/tee/Kconfig @@ -15,5 +15,6 @@ if TEE
source "drivers/tee/optee/Kconfig" source "drivers/tee/amdtee/Kconfig" +source "drivers/tee/tstee/Kconfig"
endif diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile index 68da044afbfa..5488cba30bd2 100644 --- a/drivers/tee/Makefile +++ b/drivers/tee/Makefile @@ -5,3 +5,4 @@ tee-objs += tee_shm.o tee-objs += tee_shm_pool.o obj-$(CONFIG_OPTEE) += optee/ obj-$(CONFIG_AMDTEE) += amdtee/ +obj-$(CONFIG_ARM_TSTEE) += tstee/ diff --git a/drivers/tee/tstee/Kconfig b/drivers/tee/tstee/Kconfig new file mode 100644 index 000000000000..cd8d32586a77 --- /dev/null +++ b/drivers/tee/tstee/Kconfig @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: GPL-2.0-only +config ARM_TSTEE
- tristate "Arm Trusted Services TEE driver"
- depends on ARM_FFA_TRANSPORT
- default n
- help
The Trusted Services project provides a framework for developing and
deploying device Root Of Trust services in FF-A Secure Partitions.
This driver provides an interface to make Trusted Services Secure
Partitions accessible for user space clients, since the FF-A driver
doesn't implement a user space interface directly.
diff --git a/drivers/tee/tstee/Makefile b/drivers/tee/tstee/Makefile new file mode 100644 index 000000000000..dd54b1f88652 --- /dev/null +++ b/drivers/tee/tstee/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0-only +arm-tstee-objs := core.o shm_pool.o +obj-$(CONFIG_ARM_TSTEE) = arm-tstee.o diff --git a/drivers/tee/tstee/core.c b/drivers/tee/tstee/core.c new file mode 100644 index 000000000000..c0194638b7da --- /dev/null +++ b/drivers/tee/tstee/core.c @@ -0,0 +1,473 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
- Copyright (c) 2023, Arm Limited
- */
+#define DRIVER_NAME "Arm TSTEE" +#define pr_fmt(fmt) DRIVER_NAME ": " fmt
+#include <linux/arm_ffa.h> +#include <linux/err.h> +#include <linux/errno.h> +#include <linux/kernel.h> +#include <linux/limits.h> +#include <linux/list.h> +#include <linux/mm.h> +#include <linux/module.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/stat.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uaccess.h>
+#include "tstee_private.h"
+#define FFA_INVALID_MEM_HANDLE U64_MAX
+static void arg_list_to_ffa_data(const u32 *args, struct ffa_send_direct_data *data) +{
- data->data0 = args[0];
- data->data1 = args[1];
- data->data2 = args[2];
- data->data3 = args[3];
- data->data4 = args[4];
+}
+static void arg_list_from_ffa_data(const struct ffa_send_direct_data *data, u32 *args) +{
- args[0] = lower_32_bits(data->data0);
- args[1] = lower_32_bits(data->data1);
- args[2] = lower_32_bits(data->data2);
- args[3] = lower_32_bits(data->data3);
- args[4] = lower_32_bits(data->data4);
+}
+static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers) +{
- struct tstee *tstee = tee_get_drvdata(teedev);
- struct tee_ioctl_version_data v = {
.impl_id = TEE_IMPL_ID_TSTEE,
.impl_caps = tstee->ffa_dev->vm_id,
.gen_caps = 0,
- };
- *vers = v;
+}
+static int tstee_open(struct tee_context *ctx) +{
- struct ts_context_data *ctxdata;
- ctxdata = kzalloc(sizeof(*ctxdata), GFP_KERNEL);
- if (!ctxdata)
return -ENOMEM;
- mutex_init(&ctxdata->mutex);
- idr_init(&ctxdata->sess_ids);
- INIT_LIST_HEAD(&ctxdata->sess_list);
- ctx->data = ctxdata;
- return 0;
+}
+static void tstee_release(struct tee_context *ctx) +{
- struct ts_context_data *ctxdata = ctx->data;
- struct ts_session *sess, *sess_tmp;
- if (!ctxdata)
return;
- list_for_each_entry_safe(sess, sess_tmp, &ctxdata->sess_list, list_node) {
list_del(&sess->list_node);
idr_remove(&ctxdata->sess_ids, sess->session_id);
kfree(sess);
- }
- idr_destroy(&ctxdata->sess_ids);
- mutex_destroy(&ctxdata->mutex);
- kfree(ctxdata);
- ctx->data = NULL;
+}
+static struct ts_session *find_session(struct ts_context_data *ctxdata, u32 session_id) +{
- struct ts_session *sess;
- list_for_each_entry(sess, &ctxdata->sess_list, list_node)
if (sess->session_id == session_id)
return sess;
- return NULL;
+}
+static int tstee_open_session(struct tee_context *ctx, struct tee_ioctl_open_session_arg *arg,
struct tee_param *param __always_unused)
+{
- struct tstee *tstee = tee_get_drvdata(ctx->teedev);
- struct ffa_device *ffa_dev = tstee->ffa_dev;
- struct ts_context_data *ctxdata = ctx->data;
- struct ffa_send_direct_data ffa_data;
- struct ts_session *sess = NULL;
- u32 ffa_args[5] = {};
- int sess_id;
- int rc;
- ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_SERVICE_INFO);
- memcpy((u8 *)(ffa_args + TS_RPC_SERVICE_INFO_UUID0), arg->uuid, UUID_SIZE);
- arg_list_to_ffa_data(ffa_args, &ffa_data);
- rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
- if (rc)
return rc;
- arg_list_from_ffa_data(&ffa_data, ffa_args);
- if (ffa_args[TS_RPC_SERVICE_INFO_RPC_STATUS] != TS_RPC_OK)
return -ENODEV;
- if (ffa_args[TS_RPC_SERVICE_INFO_IFACE] > U8_MAX)
return -EINVAL;
- sess = kzalloc(sizeof(*sess), GFP_KERNEL);
- if (!sess)
return -ENOMEM;
- sess_id = idr_alloc(&ctxdata->sess_ids, sess, 1, 0, GFP_KERNEL);
- if (sess_id < 0) {
kfree(sess);
return sess_id;
- }
- sess->session_id = sess_id;
- sess->iface_id = ffa_args[TS_RPC_SERVICE_INFO_IFACE];
- mutex_lock(&ctxdata->mutex);
- list_add(&sess->list_node, &ctxdata->sess_list);
- mutex_unlock(&ctxdata->mutex);
- arg->session = sess_id;
- arg->ret = 0;
- return 0;
+}
+static int tstee_close_session(struct tee_context *ctx, u32 session) +{
- struct ts_context_data *ctxdata = ctx->data;
- struct ts_session *sess;
- mutex_lock(&ctxdata->mutex);
- sess = find_session(ctxdata, session);
- if (sess)
list_del(&sess->list_node);
- mutex_unlock(&ctxdata->mutex);
- if (!sess)
return -EINVAL;
- idr_remove(&ctxdata->sess_ids, sess->session_id);
- kfree(sess);
- return 0;
+}
+static int tstee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg,
struct tee_param *param)
+{
- struct tstee *tstee = tee_get_drvdata(ctx->teedev);
- struct ffa_device *ffa_dev = tstee->ffa_dev;
- struct ts_context_data *ctxdata = ctx->data;
- struct ffa_send_direct_data ffa_data;
- struct tee_shm *shm = NULL;
- struct ts_session *sess;
- u32 req_len, ffa_args[5] = {};
- int shm_id, rc;
- u8 iface_id;
- u64 handle;
- u16 opcode;
- mutex_lock(&ctxdata->mutex);
- sess = find_session(ctxdata, arg->session);
- /* Do this while holding the mutex to make sure that the session wasn't closed meanwhile */
- if (sess)
iface_id = sess->iface_id;
- mutex_unlock(&ctxdata->mutex);
- if (!sess)
return -EINVAL;
- opcode = lower_16_bits(arg->func);
- shm_id = lower_32_bits(param[0].u.value.a);
- req_len = lower_32_bits(param[0].u.value.b);
- if (shm_id != 0) {
shm = tee_shm_get_from_id(ctx, shm_id);
if (IS_ERR(shm))
return PTR_ERR(shm);
if (shm->size < req_len) {
pr_err("request doesn't fit into shared memory buffer\n");
rc = -EINVAL;
goto out;
}
handle = shm->sec_world_id;
- } else {
handle = FFA_INVALID_MEM_HANDLE;
- }
- ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(iface_id, opcode);
- ffa_args[TS_RPC_SERVICE_MEM_HANDLE_LSW] = lower_32_bits(handle);
- ffa_args[TS_RPC_SERVICE_MEM_HANDLE_MSW] = upper_32_bits(handle);
- ffa_args[TS_RPC_SERVICE_REQ_LEN] = req_len;
- ffa_args[TS_RPC_SERVICE_CLIENT_ID] = 0;
- arg_list_to_ffa_data(ffa_args, &ffa_data);
- rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
- if (rc)
goto out;
- arg_list_from_ffa_data(&ffa_data, ffa_args);
- if (ffa_args[TS_RPC_SERVICE_RPC_STATUS] != TS_RPC_OK) {
pr_err("invoke_func rpc status: %d\n", ffa_args[TS_RPC_SERVICE_RPC_STATUS]);
rc = -EINVAL;
goto out;
- }
- arg->ret = ffa_args[TS_RPC_SERVICE_STATUS];
- if (shm && shm->size >= ffa_args[TS_RPC_SERVICE_RESP_LEN])
param[0].u.value.a = ffa_args[TS_RPC_SERVICE_RESP_LEN];
+out:
- if (shm)
tee_shm_put(shm);
- return rc;
+}
+static int tstee_cancel_req(struct tee_context *ctx __always_unused, u32 cancel_id __always_unused,
u32 session __always_unused)
+{
- return -EINVAL;
+}
+int tstee_shm_register(struct tee_context *ctx, struct tee_shm *shm, struct page **pages,
size_t num_pages, unsigned long start __always_unused)
+{
- struct tstee *tstee = tee_get_drvdata(ctx->teedev);
- struct ffa_device *ffa_dev = tstee->ffa_dev;
- struct ffa_mem_region_attributes mem_attr = {
.receiver = tstee->ffa_dev->vm_id,
.attrs = FFA_MEM_RW,
.flag = 0,
- };
- struct ffa_mem_ops_args mem_args = {
.attrs = &mem_attr,
.use_txbuf = true,
.nattrs = 1,
.flags = 0,
- };
- struct ffa_send_direct_data ffa_data;
- struct sg_table sgt;
- u32 ffa_args[5] = {};
- int rc;
- rc = sg_alloc_table_from_pages(&sgt, pages, num_pages, 0, num_pages * PAGE_SIZE,
GFP_KERNEL);
- if (rc)
return rc;
- mem_args.sg = sgt.sgl;
- rc = ffa_dev->ops->mem_ops->memory_share(&mem_args);
- sg_free_table(&sgt);
- if (rc)
return rc;
- shm->sec_world_id = mem_args.g_handle;
- ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_RETRIEVE_MEM);
- ffa_args[TS_RPC_RETRIEVE_MEM_HANDLE_LSW] = lower_32_bits(shm->sec_world_id);
- ffa_args[TS_RPC_RETRIEVE_MEM_HANDLE_MSW] = upper_32_bits(shm->sec_world_id);
- ffa_args[TS_RPC_RETRIEVE_MEM_TAG_LSW] = 0;
- ffa_args[TS_RPC_RETRIEVE_MEM_TAG_MSW] = 0;
- arg_list_to_ffa_data(ffa_args, &ffa_data);
- rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
- if (rc) {
(void)ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0);
return rc;
- }
- arg_list_from_ffa_data(&ffa_data, ffa_args);
- if (ffa_args[TS_RPC_RETRIEVE_MEM_RPC_STATUS] != TS_RPC_OK) {
pr_err("shm_register rpc status: %d\n", ffa_args[TS_RPC_RETRIEVE_MEM_RPC_STATUS]);
ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0);
return -EINVAL;
- }
- return 0;
+}
+int tstee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm) +{
- struct tstee *tstee = tee_get_drvdata(ctx->teedev);
- struct ffa_device *ffa_dev = tstee->ffa_dev;
- struct ffa_send_direct_data ffa_data;
- u32 ffa_args[5] = {};
- int rc;
- ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_RELINQ_MEM);
- ffa_args[TS_RPC_RELINQ_MEM_HANDLE_LSW] = lower_32_bits(shm->sec_world_id);
- ffa_args[TS_RPC_RELINQ_MEM_HANDLE_MSW] = upper_32_bits(shm->sec_world_id);
- arg_list_to_ffa_data(ffa_args, &ffa_data);
- rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
- if (rc)
return rc;
- arg_list_from_ffa_data(&ffa_data, ffa_args);
- if (ffa_args[TS_RPC_RELINQ_MEM_RPC_STATUS] != TS_RPC_OK) {
pr_err("shm_unregister rpc status: %d\n", ffa_args[TS_RPC_RELINQ_MEM_RPC_STATUS]);
return -EINVAL;
- }
- rc = ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0);
- return rc;
+}
+static const struct tee_driver_ops tstee_ops = {
- .get_version = tstee_get_version,
- .open = tstee_open,
- .release = tstee_release,
- .open_session = tstee_open_session,
- .close_session = tstee_close_session,
- .invoke_func = tstee_invoke_func,
- .cancel_req = tstee_cancel_req,
- .shm_register = tstee_shm_register,
- .shm_unregister = tstee_shm_unregister,
+};
+static const struct tee_desc tstee_desc = {
- .name = "tstee-clnt",
- .ops = &tstee_ops,
- .owner = THIS_MODULE,
+};
+static bool tstee_check_rpc_compatible(struct ffa_device *ffa_dev) +{
- struct ffa_send_direct_data ffa_data;
- u32 ffa_args[5] = {};
- ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_GET_VERSION);
- arg_list_to_ffa_data(ffa_args, &ffa_data);
- if (ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data))
return false;
- arg_list_from_ffa_data(&ffa_data, ffa_args);
- return ffa_args[TS_RPC_GET_VERSION_RESP] == TS_RPC_PROTOCOL_VERSION;
+}
+static void tstee_deinit_common(struct tstee *tstee) +{
- tee_device_unregister(tstee->teedev);
- if (tstee->pool)
tee_shm_pool_free(tstee->pool);
- kfree(tstee);
+}
+static int tstee_probe(struct ffa_device *ffa_dev) +{
- struct tstee *tstee;
- int rc;
- ffa_dev->ops->msg_ops->mode_32bit_set(ffa_dev);
- if (!tstee_check_rpc_compatible(ffa_dev))
return -EINVAL;
- tstee = kzalloc(sizeof(*tstee), GFP_KERNEL);
- if (!tstee)
return -ENOMEM;
- tstee->ffa_dev = ffa_dev;
- tstee->pool = tstee_create_shm_pool();
- if (IS_ERR(tstee->pool)) {
rc = PTR_ERR(tstee->pool);
tstee->pool = NULL;
goto err;
- }
- tstee->teedev = tee_device_alloc(&tstee_desc, NULL, tstee->pool, tstee);
- if (IS_ERR(tstee->teedev)) {
rc = PTR_ERR(tstee->teedev);
tstee->teedev = NULL;
goto err;
- }
- rc = tee_device_register(tstee->teedev);
- if (rc)
goto err;
- ffa_dev_set_drvdata(ffa_dev, tstee);
- pr_info("driver initialized for endpoint 0x%x\n", ffa_dev->vm_id);
- return 0;
+err:
- tstee_deinit_common(tstee);
- return rc;
+}
+static void tstee_remove(struct ffa_device *ffa_dev) +{
- tstee_deinit_common(ffa_dev->dev.driver_data);
+}
+static const struct ffa_device_id tstee_device_ids[] = {
- /* TS RPC protocol UUID: bdcd76d7-825e-4751-963b-86d4f84943ac */
- { TS_RPC_UUID },
- {}
+};
+static struct ffa_driver tstee_driver = {
- .name = "arm_tstee",
- .probe = tstee_probe,
- .remove = tstee_remove,
- .id_table = tstee_device_ids,
+};
+static int __init mod_init(void) +{
- return ffa_register(&tstee_driver);
+} +module_init(mod_init)
+static void __exit mod_exit(void) +{
- ffa_unregister(&tstee_driver);
+} +module_exit(mod_exit)
+MODULE_ALIAS("arm-tstee"); +MODULE_AUTHOR("Balint Dobszay balint.dobszay@arm.com"); +MODULE_DESCRIPTION("Arm Trusted Services TEE driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/tee/tstee/shm_pool.c b/drivers/tee/tstee/shm_pool.c new file mode 100644 index 000000000000..518c10dd0735 --- /dev/null +++ b/drivers/tee/tstee/shm_pool.c @@ -0,0 +1,91 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
- Copyright (c) 2023, Arm Limited
- */
+#include <linux/arm_ffa.h> +#include <linux/device.h> +#include <linux/dma-buf.h> +#include <linux/genalloc.h> +#include <linux/slab.h> +#include <linux/tee_drv.h>
+#include "tstee_private.h"
+static int pool_op_alloc(struct tee_shm_pool *pool __always_unused, struct tee_shm *shm,
size_t size, size_t align __always_unused)
+{
- unsigned int order, nr_pages, i;
- struct page *page, **pages;
- int rc;
- if (size == 0)
return -EINVAL;
- order = get_order(size);
- nr_pages = 1 << order;
- page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
- if (!page)
return -ENOMEM;
- shm->kaddr = page_address(page);
- shm->paddr = page_to_phys(page);
- shm->size = PAGE_SIZE << order;
- pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
- if (!pages) {
rc = -ENOMEM;
goto err;
- }
- for (i = 0; i < nr_pages; i++)
pages[i] = page + i;
- rc = tstee_shm_register(shm->ctx, shm, pages, nr_pages, (unsigned long)shm->kaddr);
- kfree(pages);
- if (rc)
goto err;
- return 0;
+err:
- __free_pages(page, order);
- return rc;
+}
+static void pool_op_free(struct tee_shm_pool *pool __always_unused, struct tee_shm *shm) +{
- int rc;
- rc = tstee_shm_unregister(shm->ctx, shm);
- if (rc)
pr_err("shm id 0x%llx unregister rc %d\n", shm->sec_world_id, rc);
- free_pages((unsigned long)shm->kaddr, get_order(shm->size));
- shm->kaddr = NULL;
+}
+static void pool_op_destroy_pool(struct tee_shm_pool *pool) +{
- kfree(pool);
+}
These pool_op functions look very much like the optee_pool_op functions. Compare this with how the pool_ffa_ops is implemented in the OP-TEE driver. We should consider factoring out the optee_pool_op functions into the TEE subsystem instead so we can avoid reimplementing the same thing.
Cheers, Jens
+static const struct tee_shm_pool_ops pool_ops = {
- .alloc = pool_op_alloc,
- .free = pool_op_free,
- .destroy_pool = pool_op_destroy_pool,
+};
+struct tee_shm_pool *tstee_create_shm_pool(void) +{
- struct tee_shm_pool *pool;
- pool = kzalloc(sizeof(*pool), GFP_KERNEL);
- if (!pool)
return ERR_PTR(-ENOMEM);
- pool->ops = &pool_ops;
- return pool;
+} diff --git a/drivers/tee/tstee/tstee_private.h b/drivers/tee/tstee/tstee_private.h new file mode 100644 index 000000000000..2ea266804ccf --- /dev/null +++ b/drivers/tee/tstee/tstee_private.h @@ -0,0 +1,97 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/*
- Copyright (c) 2023, Arm Limited
- */
+#ifndef TSTEE_PRIVATE_H +#define TSTEE_PRIVATE_H
+#include <linux/arm_ffa.h> +#include <linux/bitops.h> +#include <linux/idr.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uuid.h>
+/* UUID of this protocol */ +#define TS_RPC_UUID UUID_INIT(0xbdcd76d7, 0x825e, 0x4751, \
0x96, 0x3b, 0x86, 0xd4, 0xf8, 0x49, 0x43, 0xac)
+/* Protocol version */ +#define TS_RPC_PROTOCOL_VERSION (1)
+/* Status codes */ +#define TS_RPC_OK (0)
+/* RPC control register */ +#define TS_RPC_CTRL_REG (0) +#define OPCODE_MASK GENMASK(15, 0) +#define IFACE_ID_MASK GENMASK(23, 16) +#define TS_RPC_CTRL_OPCODE(x) ((u16)(FIELD_GET(OPCODE_MASK, (x)))) +#define TS_RPC_CTRL_IFACE_ID(x) ((u8)(FIELD_GET(IFACE_ID_MASK, (x)))) +#define TS_RPC_CTRL_PACK_IFACE_OPCODE(i, o) \
- (FIELD_PREP(IFACE_ID_MASK, (i)) | FIELD_PREP(OPCODE_MASK, (o)))
+#define TS_RPC_CTRL_SAP_RC BIT(30) +#define TS_RPC_CTRL_SAP_ERR BIT(31)
+/* Interface ID for RPC management operations */ +#define TS_RPC_MGMT_IFACE_ID (0xff)
+/* Management calls */ +#define TS_RPC_OP_GET_VERSION (0x0000) +#define TS_RPC_GET_VERSION_RESP (1)
+#define TS_RPC_OP_RETRIEVE_MEM (0x0001) +#define TS_RPC_RETRIEVE_MEM_HANDLE_LSW (1) +#define TS_RPC_RETRIEVE_MEM_HANDLE_MSW (2) +#define TS_RPC_RETRIEVE_MEM_TAG_LSW (3) +#define TS_RPC_RETRIEVE_MEM_TAG_MSW (4) +#define TS_RPC_RETRIEVE_MEM_RPC_STATUS (1)
+#define TS_RPC_OP_RELINQ_MEM (0x0002) +#define TS_RPC_RELINQ_MEM_HANDLE_LSW (1) +#define TS_RPC_RELINQ_MEM_HANDLE_MSW (2) +#define TS_RPC_RELINQ_MEM_RPC_STATUS (1)
+#define TS_RPC_OP_SERVICE_INFO (0x0003) +#define TS_RPC_SERVICE_INFO_UUID0 (1) +#define TS_RPC_SERVICE_INFO_UUID1 (2) +#define TS_RPC_SERVICE_INFO_UUID2 (3) +#define TS_RPC_SERVICE_INFO_UUID3 (4) +#define TS_RPC_SERVICE_INFO_RPC_STATUS (1) +#define TS_RPC_SERVICE_INFO_IFACE (2)
+/* Service call */ +#define TS_RPC_SERVICE_MEM_HANDLE_LSW (1) +#define TS_RPC_SERVICE_MEM_HANDLE_MSW (2) +#define TS_RPC_SERVICE_REQ_LEN (3) +#define TS_RPC_SERVICE_CLIENT_ID (4) +#define TS_RPC_SERVICE_RPC_STATUS (1) +#define TS_RPC_SERVICE_STATUS (2) +#define TS_RPC_SERVICE_RESP_LEN (3)
+struct tstee {
- struct ffa_device *ffa_dev;
- struct tee_device *teedev;
- struct tee_shm_pool *pool;
+};
+struct ts_session {
- struct list_head list_node;
- u32 session_id;
- u8 iface_id;
+};
+struct ts_context_data {
- struct list_head sess_list;
- struct idr sess_ids;
- /* Serializes access to this struct */
- struct mutex mutex;
+};
+struct tee_shm_pool *tstee_create_shm_pool(void); +int tstee_shm_register(struct tee_context *ctx, struct tee_shm *shm, struct page **pages,
size_t num_pages, unsigned long start);
+int tstee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm);
+#endif /* TSTEE_PRIVATE_H */ diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h index 23e57164693c..d0430bee8292 100644 --- a/include/uapi/linux/tee.h +++ b/include/uapi/linux/tee.h @@ -56,6 +56,7 @@ */ #define TEE_IMPL_ID_OPTEE 1 #define TEE_IMPL_ID_AMDTEE 2 +#define TEE_IMPL_ID_TSTEE 3
/*
- OP-TEE specific capabilities
-- 2.34.1
Hi Jens,
On 3 Oct 2023, at 17:30, Jens Wiklander wrote:
On Wed, Sep 27, 2023 at 05:21:46PM +0200, Balint Dobszay wrote:
[snip]
diff --git a/drivers/tee/tstee/shm_pool.c b/drivers/tee/tstee/shm_pool.c new file mode 100644 index 000000000000..518c10dd0735 --- /dev/null +++ b/drivers/tee/tstee/shm_pool.c @@ -0,0 +1,91 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
- Copyright (c) 2023, Arm Limited
- */
+#include <linux/arm_ffa.h> +#include <linux/device.h> +#include <linux/dma-buf.h> +#include <linux/genalloc.h> +#include <linux/slab.h> +#include <linux/tee_drv.h>
+#include "tstee_private.h"
+static int pool_op_alloc(struct tee_shm_pool *pool __always_unused, struct tee_shm *shm,
size_t size, size_t align __always_unused)
+{
- unsigned int order, nr_pages, i;
- struct page *page, **pages;
- int rc;
- if (size == 0)
return -EINVAL;
- order = get_order(size);
- nr_pages = 1 << order;
- page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
- if (!page)
return -ENOMEM;
- shm->kaddr = page_address(page);
- shm->paddr = page_to_phys(page);
- shm->size = PAGE_SIZE << order;
- pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
- if (!pages) {
rc = -ENOMEM;
goto err;
- }
- for (i = 0; i < nr_pages; i++)
pages[i] = page + i;
- rc = tstee_shm_register(shm->ctx, shm, pages, nr_pages, (unsigned long)shm->kaddr);
- kfree(pages);
- if (rc)
goto err;
- return 0;
+err:
- __free_pages(page, order);
- return rc;
+}
+static void pool_op_free(struct tee_shm_pool *pool __always_unused, struct tee_shm *shm) +{
- int rc;
- rc = tstee_shm_unregister(shm->ctx, shm);
- if (rc)
pr_err("shm id 0x%llx unregister rc %d\n", shm->sec_world_id, rc);
- free_pages((unsigned long)shm->kaddr, get_order(shm->size));
- shm->kaddr = NULL;
+}
+static void pool_op_destroy_pool(struct tee_shm_pool *pool) +{
- kfree(pool);
+}
These pool_op functions look very much like the optee_pool_op functions. Compare this with how the pool_ffa_ops is implemented in the OP-TEE driver. We should consider factoring out the optee_pool_op functions into the TEE subsystem instead so we can avoid reimplementing the same thing.
Makes sense, thanks. I'll propose a patch for this in the next version.
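For reference, here is one possible shape for such a shared helper, purely as a sketch: the helper name and signature are hypothetical (this is not an existing TEE subsystem API). The idea is that the page allocation and page-array setup move into the subsystem and each driver only supplies its registration callback.

/* Hypothetical common helper in the TEE subsystem (sketch only). */
int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align,
			     int (*shm_register)(struct tee_context *ctx,
						 struct tee_shm *shm,
						 struct page **pages,
						 size_t num_pages,
						 unsigned long start))
{
	unsigned int order = get_order(size);
	unsigned int nr_pages = 1 << order, i;
	struct page *page, **pages;
	int rc;

	page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
	if (!page)
		return -ENOMEM;

	shm->kaddr = page_address(page);
	shm->paddr = page_to_phys(page);
	shm->size = PAGE_SIZE << order;

	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages) {
		rc = -ENOMEM;
		goto err;
	}

	for (i = 0; i < nr_pages; i++)
		pages[i] = page + i;

	/* Driver-specific part: register the pages with the secure world. */
	rc = shm_register(shm->ctx, shm, pages, nr_pages, (unsigned long)shm->kaddr);
	kfree(pages);
	if (rc)
		goto err;

	return 0;

err:
	__free_pages(page, order);
	return rc;
}

With something like this in place, pool_op_alloc() in both OP-TEE and TSTEE could reduce to a call such as tee_dyn_shm_alloc_helper(shm, size, align, tstee_shm_register), with a matching free helper wrapping the unregister call and free_pages().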
Regards, Balint
Hi Balint,
Thanks for your effort to put up this driver. It's interesting to see how the TEE subsystem can be utilized to address different use-cases.
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
The Trusted Services project provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The FF-A SPs are accessible through the FF-A driver, but that driver doesn't provide a user space interface. The goal of this TEE driver is to make Trusted Services SPs accessible to user space clients.
I am interested in exploring the user space library/applications. Do you have a standard library implementation and some example user-space applications leveraging this driver interface?
All TS SPs have the same FF-A UUID; it identifies the RPC protocol used by TS. A TS SP can host one or more services, each identified by its own service UUID. The same type of service cannot be present twice in the same SP. During SP boot each service in the SP gets assigned an interface ID; this is just a short ID to simplify message addressing. There is a 1:1 mapping between TS SPs and TEE devices, i.e. a TEE device gets registered for each Trusted Services compatible FF-A device found on the FF-A bus. Opening the SP corresponds to opening the TEE device and creating a TEE context. A TS SP hosts one or more services; opening a service corresponds to opening a session in the given tee_context.
Signed-off-by: Balint Dobszay <balint.dobszay@arm.com>
Hi All,
This is an initial TEE driver implementation for Trusted Services [1]. Trusted Services is a TrustedFirmware.org project that provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The project hosts the reference implementation of Arm Platform Security Architecture [2] for Arm A-profile devices.
The FF-A Secure Partitions (SPs) are accessible through the FF-A driver in Linux. However, the FF-A driver doesn't have a user space interface so user space clients currently cannot access Trusted Services. The goal of this TEE driver is to bridge this gap and make Trusted Services functionality accessible from user space.
The predecessor of this driver is an out-of-tree kernel module that we have been using for prototyping [3]. An integration of Trusted Services, OP-TEE as an S-EL1 SPMC, TF-A, Linux and this out-of-tree module is available as part of the OP-TEE project; please find the manifest file at [4].
Some key points about the structure of Trusted Services (for more details please see the documentation at [5]):
- A Trusted Services SP can host one or multiple services. All TS SPs have the same FF-A UUID; it identifies the RPC protocol used by TS. The protocol's register ABI definition is available at [6] and is implemented in tstee_private.h.
- A service hosted by an SP is identified by its service UUID. The same type of service can be present in multiple SPs, but not twice in the same SP. During SP boot each service in the SP gets assigned an interface ID; this is just a short ID to simplify message addressing.
- There is a 1:1 mapping between TS SPs and TEE devices, i.e. a TEE device gets registered for each Trusted Services compatible FF-A device found on the FF-A bus.
- Opening the SP corresponds to opening the TEE device and creating a TEE context. A TS SP hosts one or more services; opening a service corresponds to opening a session in the given tee_context.
Regards, Balint
[1] https://www.trustedfirmware.org/projects/trusted-services/
[2] https://www.arm.com/architecture/security-features/platform-security
[3] https://gitlab.arm.com/linux-arm/linux-trusted-services
[4] https://github.com/OP-TEE/manifest/blob/master/fvp-ts.xml
[5] https://trusted-services.readthedocs.io
[6] https://trusted-services.readthedocs.io/en/integration/developer/service-acc...
drivers/tee/Kconfig | 1 + drivers/tee/Makefile | 1 + drivers/tee/tstee/Kconfig | 11 + drivers/tee/tstee/Makefile | 3 + drivers/tee/tstee/core.c | 473 ++++++++++++++++++++++++++++++ drivers/tee/tstee/shm_pool.c | 91 ++++++ drivers/tee/tstee/tstee_private.h | 97 ++++++ include/uapi/linux/tee.h | 1 + 8 files changed, 678 insertions(+) create mode 100644 drivers/tee/tstee/Kconfig create mode 100644 drivers/tee/tstee/Makefile create mode 100644 drivers/tee/tstee/core.c create mode 100644 drivers/tee/tstee/shm_pool.c create mode 100644 drivers/tee/tstee/tstee_private.h
diff --git a/drivers/tee/Kconfig b/drivers/tee/Kconfig index 73a147202e88..61b507c18780 100644 --- a/drivers/tee/Kconfig +++ b/drivers/tee/Kconfig @@ -15,5 +15,6 @@ if TEE
source "drivers/tee/optee/Kconfig" source "drivers/tee/amdtee/Kconfig" +source "drivers/tee/tstee/Kconfig"
endif diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile index 68da044afbfa..5488cba30bd2 100644 --- a/drivers/tee/Makefile +++ b/drivers/tee/Makefile @@ -5,3 +5,4 @@ tee-objs += tee_shm.o tee-objs += tee_shm_pool.o obj-$(CONFIG_OPTEE) += optee/ obj-$(CONFIG_AMDTEE) += amdtee/ +obj-$(CONFIG_ARM_TSTEE) += tstee/ diff --git a/drivers/tee/tstee/Kconfig b/drivers/tee/tstee/Kconfig new file mode 100644 index 000000000000..cd8d32586a77 --- /dev/null +++ b/drivers/tee/tstee/Kconfig @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: GPL-2.0-only +config ARM_TSTEE
tristate "Arm Trusted Services TEE driver"
depends on ARM_FFA_TRANSPORT
default n
help
The Trusted Services project provides a framework for developing and
deploying device Root Of Trust services in FF-A Secure Partitions.
This driver provides an interface to make Trusted Services Secure
Partitions accessible for user space clients, since the FF-A driver
doesn't implement a user space interface directly.
diff --git a/drivers/tee/tstee/Makefile b/drivers/tee/tstee/Makefile new file mode 100644 index 000000000000..dd54b1f88652 --- /dev/null +++ b/drivers/tee/tstee/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0-only +arm-tstee-objs := core.o shm_pool.o +obj-$(CONFIG_ARM_TSTEE) = arm-tstee.o diff --git a/drivers/tee/tstee/core.c b/drivers/tee/tstee/core.c new file mode 100644 index 000000000000..c0194638b7da --- /dev/null +++ b/drivers/tee/tstee/core.c @@ -0,0 +1,473 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
- Copyright (c) 2023, Arm Limited
- */
+#define DRIVER_NAME "Arm TSTEE" +#define pr_fmt(fmt) DRIVER_NAME ": " fmt
+#include <linux/arm_ffa.h> +#include <linux/err.h> +#include <linux/errno.h> +#include <linux/kernel.h> +#include <linux/limits.h> +#include <linux/list.h> +#include <linux/mm.h> +#include <linux/module.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/stat.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uaccess.h>
+#include "tstee_private.h"
+#define FFA_INVALID_MEM_HANDLE U64_MAX
+static void arg_list_to_ffa_data(const u32 *args, struct ffa_send_direct_data *data) +{
data->data0 = args[0];
data->data1 = args[1];
data->data2 = args[2];
data->data3 = args[3];
data->data4 = args[4];
+}
+static void arg_list_from_ffa_data(const struct ffa_send_direct_data *data, u32 *args) +{
args[0] = lower_32_bits(data->data0);
args[1] = lower_32_bits(data->data1);
args[2] = lower_32_bits(data->data2);
args[3] = lower_32_bits(data->data3);
args[4] = lower_32_bits(data->data4);
+}
+static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers) +{
struct tstee *tstee = tee_get_drvdata(teedev);
struct tee_ioctl_version_data v = {
.impl_id = TEE_IMPL_ID_TSTEE,
.impl_caps = tstee->ffa_dev->vm_id,
.gen_caps = 0,
};
*vers = v;
+}
+static int tstee_open(struct tee_context *ctx) +{
struct ts_context_data *ctxdata;
ctxdata = kzalloc(sizeof(*ctxdata), GFP_KERNEL);
if (!ctxdata)
return -ENOMEM;
mutex_init(&ctxdata->mutex);
idr_init(&ctxdata->sess_ids);
INIT_LIST_HEAD(&ctxdata->sess_list);
ctx->data = ctxdata;
return 0;
+}
+static void tstee_release(struct tee_context *ctx) +{
struct ts_context_data *ctxdata = ctx->data;
struct ts_session *sess, *sess_tmp;
if (!ctxdata)
return;
list_for_each_entry_safe(sess, sess_tmp, &ctxdata->sess_list, list_node) {
list_del(&sess->list_node);
idr_remove(&ctxdata->sess_ids, sess->session_id);
kfree(sess);
}
idr_destroy(&ctxdata->sess_ids);
mutex_destroy(&ctxdata->mutex);
kfree(ctxdata);
ctx->data = NULL;
+}
+static struct ts_session *find_session(struct ts_context_data *ctxdata, u32 session_id) +{
struct ts_session *sess;
list_for_each_entry(sess, &ctxdata->sess_list, list_node)
if (sess->session_id == session_id)
return sess;
return NULL;
+}
+static int tstee_open_session(struct tee_context *ctx, struct tee_ioctl_open_session_arg *arg,
struct tee_param *param __always_unused)
+{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ts_context_data *ctxdata = ctx->data;
struct ffa_send_direct_data ffa_data;
struct ts_session *sess = NULL;
u32 ffa_args[5] = {};
int sess_id;
int rc;
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_SERVICE_INFO);
memcpy((u8 *)(ffa_args + TS_RPC_SERVICE_INFO_UUID0), arg->uuid, UUID_SIZE);
arg_list_to_ffa_data(ffa_args, &ffa_data);
rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
if (rc)
return rc;
arg_list_from_ffa_data(&ffa_data, ffa_args);
if (ffa_args[TS_RPC_SERVICE_INFO_RPC_STATUS] != TS_RPC_OK)
return -ENODEV;
if (ffa_args[TS_RPC_SERVICE_INFO_IFACE] > U8_MAX)
return -EINVAL;
sess = kzalloc(sizeof(*sess), GFP_KERNEL);
if (!sess)
return -ENOMEM;
sess_id = idr_alloc(&ctxdata->sess_ids, sess, 1, 0, GFP_KERNEL);
if (sess_id < 0) {
kfree(sess);
return sess_id;
}
sess->session_id = sess_id;
sess->iface_id = ffa_args[TS_RPC_SERVICE_INFO_IFACE];
mutex_lock(&ctxdata->mutex);
list_add(&sess->list_node, &ctxdata->sess_list);
mutex_unlock(&ctxdata->mutex);
arg->session = sess_id;
arg->ret = 0;
return 0;
+}
+static int tstee_close_session(struct tee_context *ctx, u32 session) +{
struct ts_context_data *ctxdata = ctx->data;
struct ts_session *sess;
mutex_lock(&ctxdata->mutex);
sess = find_session(ctxdata, session);
if (sess)
list_del(&sess->list_node);
mutex_unlock(&ctxdata->mutex);
if (!sess)
return -EINVAL;
idr_remove(&ctxdata->sess_ids, sess->session_id);
kfree(sess);
return 0;
+}
+static int tstee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg,
struct tee_param *param)
+{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ts_context_data *ctxdata = ctx->data;
struct ffa_send_direct_data ffa_data;
struct tee_shm *shm = NULL;
struct ts_session *sess;
u32 req_len, ffa_args[5] = {};
int shm_id, rc;
u8 iface_id;
u64 handle;
u16 opcode;
mutex_lock(&ctxdata->mutex);
sess = find_session(ctxdata, arg->session);
/* Do this while holding the mutex to make sure that the session wasn't closed meanwhile */
if (sess)
iface_id = sess->iface_id;
mutex_unlock(&ctxdata->mutex);
if (!sess)
return -EINVAL;
opcode = lower_16_bits(arg->func);
shm_id = lower_32_bits(param[0].u.value.a);
req_len = lower_32_bits(param[0].u.value.b);
if (shm_id != 0) {
shm = tee_shm_get_from_id(ctx, shm_id);
if (IS_ERR(shm))
return PTR_ERR(shm);
if (shm->size < req_len) {
pr_err("request doesn't fit into shared memory buffer\n");
rc = -EINVAL;
goto out;
}
handle = shm->sec_world_id;
} else {
handle = FFA_INVALID_MEM_HANDLE;
}
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(iface_id, opcode);
ffa_args[TS_RPC_SERVICE_MEM_HANDLE_LSW] = lower_32_bits(handle);
ffa_args[TS_RPC_SERVICE_MEM_HANDLE_MSW] = upper_32_bits(handle);
ffa_args[TS_RPC_SERVICE_REQ_LEN] = req_len;
ffa_args[TS_RPC_SERVICE_CLIENT_ID] = 0;
arg_list_to_ffa_data(ffa_args, &ffa_data);
rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
I haven't dug deeper into the ABI yet, which is something I will look into. But these RPC commands caught my attention. Are these RPC calls blocking in nature? Is there a possibility that these could cause CPU stalls? Do the Linux interrupts remain unhandled until the RPC calls return?
-Sumit
if (rc)
goto out;
arg_list_from_ffa_data(&ffa_data, ffa_args);
if (ffa_args[TS_RPC_SERVICE_RPC_STATUS] != TS_RPC_OK) {
pr_err("invoke_func rpc status: %d\n", ffa_args[TS_RPC_SERVICE_RPC_STATUS]);
rc = -EINVAL;
goto out;
}
arg->ret = ffa_args[TS_RPC_SERVICE_STATUS];
if (shm && shm->size >= ffa_args[TS_RPC_SERVICE_RESP_LEN])
param[0].u.value.a = ffa_args[TS_RPC_SERVICE_RESP_LEN];
+out:
if (shm)
tee_shm_put(shm);
return rc;
+}
+static int tstee_cancel_req(struct tee_context *ctx __always_unused, u32 cancel_id __always_unused,
u32 session __always_unused)
+{
return -EINVAL;
+}
+int tstee_shm_register(struct tee_context *ctx, struct tee_shm *shm, struct page **pages,
size_t num_pages, unsigned long start __always_unused)
+{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ffa_mem_region_attributes mem_attr = {
.receiver = tstee->ffa_dev->vm_id,
.attrs = FFA_MEM_RW,
.flag = 0,
};
struct ffa_mem_ops_args mem_args = {
.attrs = &mem_attr,
.use_txbuf = true,
.nattrs = 1,
.flags = 0,
};
struct ffa_send_direct_data ffa_data;
struct sg_table sgt;
u32 ffa_args[5] = {};
int rc;
rc = sg_alloc_table_from_pages(&sgt, pages, num_pages, 0, num_pages * PAGE_SIZE,
GFP_KERNEL);
if (rc)
return rc;
mem_args.sg = sgt.sgl;
rc = ffa_dev->ops->mem_ops->memory_share(&mem_args);
sg_free_table(&sgt);
if (rc)
return rc;
shm->sec_world_id = mem_args.g_handle;
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_RETRIEVE_MEM);
ffa_args[TS_RPC_RETRIEVE_MEM_HANDLE_LSW] = lower_32_bits(shm->sec_world_id);
ffa_args[TS_RPC_RETRIEVE_MEM_HANDLE_MSW] = upper_32_bits(shm->sec_world_id);
ffa_args[TS_RPC_RETRIEVE_MEM_TAG_LSW] = 0;
ffa_args[TS_RPC_RETRIEVE_MEM_TAG_MSW] = 0;
arg_list_to_ffa_data(ffa_args, &ffa_data);
rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
if (rc) {
(void)ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0);
return rc;
}
arg_list_from_ffa_data(&ffa_data, ffa_args);
if (ffa_args[TS_RPC_RETRIEVE_MEM_RPC_STATUS] != TS_RPC_OK) {
pr_err("shm_register rpc status: %d\n", ffa_args[TS_RPC_RETRIEVE_MEM_RPC_STATUS]);
ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0);
return -EINVAL;
}
return 0;
+}
+int tstee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm) +{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ffa_send_direct_data ffa_data;
u32 ffa_args[5] = {};
int rc;
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_RELINQ_MEM);
ffa_args[TS_RPC_RELINQ_MEM_HANDLE_LSW] = lower_32_bits(shm->sec_world_id);
ffa_args[TS_RPC_RELINQ_MEM_HANDLE_MSW] = upper_32_bits(shm->sec_world_id);
arg_list_to_ffa_data(ffa_args, &ffa_data);
rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
if (rc)
return rc;
arg_list_from_ffa_data(&ffa_data, ffa_args);
if (ffa_args[TS_RPC_RELINQ_MEM_RPC_STATUS] != TS_RPC_OK) {
pr_err("shm_unregister rpc status: %d\n", ffa_args[TS_RPC_RELINQ_MEM_RPC_STATUS]);
return -EINVAL;
}
rc = ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0);
return rc;
+}
+static const struct tee_driver_ops tstee_ops = {
.get_version = tstee_get_version,
.open = tstee_open,
.release = tstee_release,
.open_session = tstee_open_session,
.close_session = tstee_close_session,
.invoke_func = tstee_invoke_func,
.cancel_req = tstee_cancel_req,
.shm_register = tstee_shm_register,
.shm_unregister = tstee_shm_unregister,
+};
+static const struct tee_desc tstee_desc = {
.name = "tstee-clnt",
.ops = &tstee_ops,
.owner = THIS_MODULE,
+};
+static bool tstee_check_rpc_compatible(struct ffa_device *ffa_dev) +{
struct ffa_send_direct_data ffa_data;
u32 ffa_args[5] = {};
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_GET_VERSION);
arg_list_to_ffa_data(ffa_args, &ffa_data);
if (ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data))
return false;
arg_list_from_ffa_data(&ffa_data, ffa_args);
return ffa_args[TS_RPC_GET_VERSION_RESP] == TS_RPC_PROTOCOL_VERSION;
+}
+static void tstee_deinit_common(struct tstee *tstee) +{
tee_device_unregister(tstee->teedev);
if (tstee->pool)
tee_shm_pool_free(tstee->pool);
kfree(tstee);
+}
+static int tstee_probe(struct ffa_device *ffa_dev) +{
struct tstee *tstee;
int rc;
ffa_dev->ops->msg_ops->mode_32bit_set(ffa_dev);
if (!tstee_check_rpc_compatible(ffa_dev))
return -EINVAL;
tstee = kzalloc(sizeof(*tstee), GFP_KERNEL);
if (!tstee)
return -ENOMEM;
tstee->ffa_dev = ffa_dev;
tstee->pool = tstee_create_shm_pool();
if (IS_ERR(tstee->pool)) {
rc = PTR_ERR(tstee->pool);
tstee->pool = NULL;
goto err;
}
tstee->teedev = tee_device_alloc(&tstee_desc, NULL, tstee->pool, tstee);
if (IS_ERR(tstee->teedev)) {
rc = PTR_ERR(tstee->teedev);
tstee->teedev = NULL;
goto err;
}
rc = tee_device_register(tstee->teedev);
if (rc)
goto err;
ffa_dev_set_drvdata(ffa_dev, tstee);
pr_info("driver initialized for endpoint 0x%x\n", ffa_dev->vm_id);
return 0;
+err:
tstee_deinit_common(tstee);
return rc;
+}
+static void tstee_remove(struct ffa_device *ffa_dev) +{
tstee_deinit_common(ffa_dev->dev.driver_data);
+}
+static const struct ffa_device_id tstee_device_ids[] = {
/* TS RPC protocol UUID: bdcd76d7-825e-4751-963b-86d4f84943ac */
{ TS_RPC_UUID },
{}
+};
+static struct ffa_driver tstee_driver = {
.name = "arm_tstee",
.probe = tstee_probe,
.remove = tstee_remove,
.id_table = tstee_device_ids,
+};
+static int __init mod_init(void) +{
return ffa_register(&tstee_driver);
+} +module_init(mod_init)
+static void __exit mod_exit(void) +{
ffa_unregister(&tstee_driver);
+} +module_exit(mod_exit)
+MODULE_ALIAS("arm-tstee"); +MODULE_AUTHOR("Balint Dobszay balint.dobszay@arm.com"); +MODULE_DESCRIPTION("Arm Trusted Services TEE driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/tee/tstee/shm_pool.c b/drivers/tee/tstee/shm_pool.c new file mode 100644 index 000000000000..518c10dd0735 --- /dev/null +++ b/drivers/tee/tstee/shm_pool.c @@ -0,0 +1,91 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
 * Copyright (c) 2023, Arm Limited
 */
+#include <linux/arm_ffa.h> +#include <linux/device.h> +#include <linux/dma-buf.h> +#include <linux/genalloc.h> +#include <linux/slab.h> +#include <linux/tee_drv.h>
+#include "tstee_private.h"
+static int pool_op_alloc(struct tee_shm_pool *pool __always_unused, struct tee_shm *shm,
size_t size, size_t align __always_unused)
+{
unsigned int order, nr_pages, i;
struct page *page, **pages;
int rc;
if (size == 0)
return -EINVAL;
order = get_order(size);
nr_pages = 1 << order;
page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
if (!page)
return -ENOMEM;
shm->kaddr = page_address(page);
shm->paddr = page_to_phys(page);
shm->size = PAGE_SIZE << order;
pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
if (!pages) {
rc = -ENOMEM;
goto err;
}
for (i = 0; i < nr_pages; i++)
pages[i] = page + i;
rc = tstee_shm_register(shm->ctx, shm, pages, nr_pages, (unsigned long)shm->kaddr);
kfree(pages);
if (rc)
goto err;
return 0;
+err:
__free_pages(page, order);
return rc;
+}
+static void pool_op_free(struct tee_shm_pool *pool __always_unused, struct tee_shm *shm) +{
int rc;
rc = tstee_shm_unregister(shm->ctx, shm);
if (rc)
pr_err("shm id 0x%llx unregister rc %d\n", shm->sec_world_id, rc);
free_pages((unsigned long)shm->kaddr, get_order(shm->size));
shm->kaddr = NULL;
+}
+static void pool_op_destroy_pool(struct tee_shm_pool *pool) +{
kfree(pool);
+}
+static const struct tee_shm_pool_ops pool_ops = {
.alloc = pool_op_alloc,
.free = pool_op_free,
.destroy_pool = pool_op_destroy_pool,
+};
+struct tee_shm_pool *tstee_create_shm_pool(void) +{
struct tee_shm_pool *pool;
pool = kzalloc(sizeof(*pool), GFP_KERNEL);
if (!pool)
return ERR_PTR(-ENOMEM);
pool->ops = &pool_ops;
return pool;
+} diff --git a/drivers/tee/tstee/tstee_private.h b/drivers/tee/tstee/tstee_private.h new file mode 100644 index 000000000000..2ea266804ccf --- /dev/null +++ b/drivers/tee/tstee/tstee_private.h @@ -0,0 +1,97 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/*
 * Copyright (c) 2023, Arm Limited
 */
+#ifndef TSTEE_PRIVATE_H +#define TSTEE_PRIVATE_H
+#include <linux/arm_ffa.h> +#include <linux/bitops.h> +#include <linux/idr.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uuid.h>
+/* UUID of this protocol */ +#define TS_RPC_UUID UUID_INIT(0xbdcd76d7, 0x825e, 0x4751, \
0x96, 0x3b, 0x86, 0xd4, 0xf8, 0x49, 0x43, 0xac)
+/* Protocol version */ +#define TS_RPC_PROTOCOL_VERSION (1)
+/* Status codes */ +#define TS_RPC_OK (0)
+/* RPC control register */ +#define TS_RPC_CTRL_REG (0) +#define OPCODE_MASK GENMASK(15, 0) +#define IFACE_ID_MASK GENMASK(23, 16) +#define TS_RPC_CTRL_OPCODE(x) ((u16)(FIELD_GET(OPCODE_MASK, (x)))) +#define TS_RPC_CTRL_IFACE_ID(x) ((u8)(FIELD_GET(IFACE_ID_MASK, (x)))) +#define TS_RPC_CTRL_PACK_IFACE_OPCODE(i, o) \
(FIELD_PREP(IFACE_ID_MASK, (i)) | FIELD_PREP(OPCODE_MASK, (o)))
+#define TS_RPC_CTRL_SAP_RC BIT(30) +#define TS_RPC_CTRL_SAP_ERR BIT(31)
+/* Interface ID for RPC management operations */ +#define TS_RPC_MGMT_IFACE_ID (0xff)
+/* Management calls */ +#define TS_RPC_OP_GET_VERSION (0x0000) +#define TS_RPC_GET_VERSION_RESP (1)
+#define TS_RPC_OP_RETRIEVE_MEM (0x0001) +#define TS_RPC_RETRIEVE_MEM_HANDLE_LSW (1) +#define TS_RPC_RETRIEVE_MEM_HANDLE_MSW (2) +#define TS_RPC_RETRIEVE_MEM_TAG_LSW (3) +#define TS_RPC_RETRIEVE_MEM_TAG_MSW (4) +#define TS_RPC_RETRIEVE_MEM_RPC_STATUS (1)
+#define TS_RPC_OP_RELINQ_MEM (0x0002) +#define TS_RPC_RELINQ_MEM_HANDLE_LSW (1) +#define TS_RPC_RELINQ_MEM_HANDLE_MSW (2) +#define TS_RPC_RELINQ_MEM_RPC_STATUS (1)
+#define TS_RPC_OP_SERVICE_INFO (0x0003) +#define TS_RPC_SERVICE_INFO_UUID0 (1) +#define TS_RPC_SERVICE_INFO_UUID1 (2) +#define TS_RPC_SERVICE_INFO_UUID2 (3) +#define TS_RPC_SERVICE_INFO_UUID3 (4) +#define TS_RPC_SERVICE_INFO_RPC_STATUS (1) +#define TS_RPC_SERVICE_INFO_IFACE (2)
+/* Service call */ +#define TS_RPC_SERVICE_MEM_HANDLE_LSW (1) +#define TS_RPC_SERVICE_MEM_HANDLE_MSW (2) +#define TS_RPC_SERVICE_REQ_LEN (3) +#define TS_RPC_SERVICE_CLIENT_ID (4) +#define TS_RPC_SERVICE_RPC_STATUS (1) +#define TS_RPC_SERVICE_STATUS (2) +#define TS_RPC_SERVICE_RESP_LEN (3)
+struct tstee {
struct ffa_device *ffa_dev;
struct tee_device *teedev;
struct tee_shm_pool *pool;
+};
+struct ts_session {
struct list_head list_node;
u32 session_id;
u8 iface_id;
+};
+struct ts_context_data {
struct list_head sess_list;
struct idr sess_ids;
/* Serializes access to this struct */
struct mutex mutex;
+};
+struct tee_shm_pool *tstee_create_shm_pool(void); +int tstee_shm_register(struct tee_context *ctx, struct tee_shm *shm, struct page **pages,
size_t num_pages, unsigned long start);
+int tstee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm);
+#endif /* TSTEE_PRIVATE_H */ diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h index 23e57164693c..d0430bee8292 100644 --- a/include/uapi/linux/tee.h +++ b/include/uapi/linux/tee.h @@ -56,6 +56,7 @@ */ #define TEE_IMPL_ID_OPTEE 1 #define TEE_IMPL_ID_AMDTEE 2 +#define TEE_IMPL_ID_TSTEE 3
/*
 * OP-TEE specific capabilities
-- 2.34.1
Hi Sumit,
On 3 Oct 2023, at 17:42, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
The Trusted Services project provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The FF-A SPs are accessible through the FF-A driver, but this doesn't provide a user space interface. The goal of this TEE driver is to make Trusted Services SPs accessible for user space clients.
I am interested in exploring the user space library/applications. Do you have a standard library implementation and some example user-space applications leveraging this driver interface?
Yes we have a library reference implementation in Trusted Services using this driver called libts [1]. There are some test applications that rely on this library, e.g. ts-service-test [2]. Also, the Parsec project can use Trusted Services as backend through libts [3] (note, the version of TS currently integrated in Parsec is not recent, thus the libts in there still uses an earlier version of this driver).
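For illustration only, here is a minimal, hypothetical sketch (not part of libts or this patch) of a client driving this interface directly through the generic TEE uAPI (include/uapi/linux/tee.h) instead of going through libts. The device path, service UUID and opcode are placeholders, and the single value parameter follows the convention used by tstee_invoke_func() in this patch (value.a = shared memory ID, or 0 for none; value.b = request length):

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/tee.h>

/* Hypothetical sketch: open a session to a TS service and invoke one opcode. */
static int ts_invoke_once(const char *devname, const uint8_t uuid[TEE_IOCTL_UUID_LEN],
			  uint16_t opcode)
{
	struct tee_ioctl_version_data vers = { 0 };
	struct tee_ioctl_open_session_arg sess_arg = { 0 };
	struct tee_ioctl_close_session_arg close_arg = { 0 };
	struct tee_ioctl_buf_data buf_data = { 0 };
	/* Room for one invoke argument struct followed by one parameter */
	uint64_t inv_buf[(sizeof(struct tee_ioctl_invoke_arg) +
			  sizeof(struct tee_ioctl_param) + 7) / 8] = { 0 };
	struct tee_ioctl_invoke_arg *inv = (struct tee_ioctl_invoke_arg *)inv_buf;
	struct tee_ioctl_param *param = inv->params;
	int rc = -1;
	int fd = open(devname, O_RDWR);

	if (fd < 0)
		return -1;

	/* Check that this TEE device is registered by the TS TEE driver */
	if (ioctl(fd, TEE_IOC_VERSION, &vers) || vers.impl_id != TEE_IMPL_ID_TSTEE)
		goto out;

	/* Open a session towards the service identified by its service UUID */
	memcpy(sess_arg.uuid, uuid, TEE_IOCTL_UUID_LEN);
	buf_data.buf_ptr = (uintptr_t)&sess_arg;
	buf_data.buf_len = sizeof(sess_arg);
	if (ioctl(fd, TEE_IOC_OPEN_SESSION, &buf_data) || sess_arg.ret)
		goto out;

	/* Invoke the opcode with no shared memory buffer attached */
	inv->func = opcode;
	inv->session = sess_arg.session;
	inv->num_params = 1;
	param->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT;
	param->a = 0;	/* shm ID; 0 maps to FFA_INVALID_MEM_HANDLE in the driver */
	param->b = 0;	/* request length */
	buf_data.buf_ptr = (uintptr_t)inv_buf;
	buf_data.buf_len = sizeof(inv_buf);
	if (!ioctl(fd, TEE_IOC_INVOKE, &buf_data))
		rc = inv->ret;	/* service status; with a buffer, param->a returns the response length */

	close_arg.session = sess_arg.session;
	ioctl(fd, TEE_IOC_CLOSE_SESSION, &close_arg);
out:
	close(fd);
	return rc;
}

A real client would typically also allocate a shared buffer with TEE_IOC_SHM_ALLOC, place the request in it, and pass the buffer's ID and the request length in the value parameter instead of zeros.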
[snip]
I haven't dug deeper into the ABI yet, which is something I will look into. But these RPC commands caught my attention. Are these RPC calls blocking in nature? Is there a possibility that these could cause CPU stalls? Do the Linux interrupts remain unhandled until the RPC calls return?
Yes, that is correct. We did indeed encounter CPU stalls; our solution was to enable preemption of S-EL0 SPs in OP-TEE [4], which solved the issue.
Regards, Balint
[1] https://git.trustedfirmware.org/TS/trusted-services.git/tree/deployments/lib... [2] https://git.trustedfirmware.org/TS/trusted-services.git/tree/deployments/ts-... [3] https://github.com/parallaxsecond/parsec/tree/3bcd732b92009612109517a6c90756... [4] https://github.com/OP-TEE/optee_os/pull/6002
On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote:
Hi Sumit,
On 3 Oct 2023, at 17:42, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
The Trusted Services project provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The FF-A SPs are accessible through the FF-A driver, but this doesn't provide a user space interface. The goal of this TEE driver is to make Trusted Services SPs accessible for user space clients.
I am interested in exploring the user space library/applications. Do you have a standard library implementation and some example user-space applications leveraging this driver interface?
Yes we have a library reference implementation in Trusted Services using this driver called libts [1]. There are some test applications that rely on this library, e.g. ts-service-test [2]. Also, the Parsec project can use Trusted Services as backend through libts [3] (note, the version of TS currently integrated in Parsec is not recent, thus the libts in there still uses an earlier version of this driver).
Although I am currently exploring the user-space interface, I would also like to give it a hands-on try. Is it possible to run this Trusted Services setup on QEMU? If so, can you share instructions on how to test it?
[snip]
I haven't dug deeper into the ABI yet, which is something I will look into. But these RPC commands caught my attention. Are these RPC calls blocking in nature? Is there a possibility that these could cause CPU stalls? Do the Linux interrupts remain unhandled until the RPC calls return?
Yes, that is correct. We did indeed encounter CPU stalls; our solution was to enable preemption of S-EL0 SPs in OP-TEE [4], which solved the issue.
I would have preferred to unite FFA_INTERRUPT and OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT since underneath both are using FFA ABI.
Jens,
Can we change OP-TEE to use FFA_INTERRUPT as well when using FFA ABI?
-Sumit
On Fri, Oct 13, 2023 at 1:38 PM Sumit Garg sumit.garg@linaro.org wrote:
On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote:
Hi Sumit,
On 3 Oct 2023, at 17:42, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
The Trusted Services project provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The FF-A SPs are accessible through the FF-A driver, but this doesn't provide a user space interface. The goal of this TEE driver is to make Trusted Services SPs accessible for user space clients.
I am interested in exploring the user space library/applications. Do you have a standard library implementation and some example user-space applications leveraging this driver interface?
Yes we have a library reference implementation in Trusted Services using this driver called libts [1]. There are some test applications that rely on this library, e.g. ts-service-test [2]. Also, the Parsec project can use Trusted Services as backend through libts [3] (note, the version of TS currently integrated in Parsec is not recent, thus the libts in there still uses an earlier version of this driver).
Although I am currently exploring the user-space interface, I would also like to give it a hands-on try. Is it possible to run this Trusted Services setup on QEMU? If so, can you share instructions on how to test it?
[snip]
I haven't dug deeper into the ABI yet, which is something I will look into. But these RPC commands caught my attention. Are these RPC calls blocking in nature? Is there a possibility that these could cause CPU stalls? Do the Linux interrupts remain unhandled until the RPC calls return?
Yes, that is correct. We did indeed encounter CPU stalls; our solution was to enable preemption of S-EL0 SPs in OP-TEE [4], which solved the issue.
I would have preferred to unite FFA_INTERRUPT and OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT since underneath both are using FFA ABI.
Jens,
Can we change OP-TEE to use FFA_INTERRUPT as well when using FFA ABI?
No, OP-TEE uses managed exit. Among other advantages, it allows resuming execution on a different CPU.
Cheers, Jens
On Mon, 16 Oct 2023 at 13:27, Jens Wiklander jens.wiklander@linaro.org wrote:
On Fri, Oct 13, 2023 at 1:38 PM Sumit Garg sumit.garg@linaro.org wrote:
On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote:
Hi Sumit,
On 3 Oct 2023, at 17:42, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
The Trusted Services project provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The FF-A SPs are accessible through the FF-A driver, but this doesn't provide a user space interface. The goal of this TEE driver is to make Trusted Services SPs accessible for user space clients.
I am interested in exploring the user space library/applications. Do you have a standard library implementation and some example user-space applications leveraging this driver interface?
Yes we have a library reference implementation in Trusted Services using this driver called libts [1]. There are some test applications that rely on this library, e.g. ts-service-test [2]. Also, the Parsec project can use Trusted Services as backend through libts [3] (note, the version of TS currently integrated in Parsec is not recent, thus the libts in there still uses an earlier version of this driver).
Although I am currently exploring the user-space interface, I would also like to give it a hands-on try. Is it possible to run this Trusted Services setup on QEMU? If so, can you share instructions on how to test it?
[snip]
I haven't dug deeper into the ABI yet, which is something I will look into. But these RPC commands caught my attention. Are these RPC calls blocking in nature? Is there a possibility that these could cause CPU stalls? Do the Linux interrupts remain unhandled until the RPC calls return?
Yes, that is correct. We did indeed encounter CPU stalls; our solution was to enable preemption of S-EL0 SPs in OP-TEE [4], which solved the issue.
I would have preferred to unite FFA_INTERRUPT and OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT since underneath both are using FFA ABI.
Jens,
Can we change OP-TEE to use FFA_INTERRUPT as well when using FFA ABI?
No, OP-TEE uses managed exit. Among other advantages, it allows resuming execution on a different CPU.
I suppose that should be the case with FFA_INTERRUPT too. OP-TEE should be able to resume SPs on different CPUs as well, right?
-Sumit
On Tue, Oct 17, 2023 at 1:14 PM Sumit Garg sumit.garg@linaro.org wrote:
On Mon, 16 Oct 2023 at 13:27, Jens Wiklander jens.wiklander@linaro.org wrote:
On Fri, Oct 13, 2023 at 1:38 PM Sumit Garg sumit.garg@linaro.org wrote:
On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote:
Hi Sumit,
On 3 Oct 2023, at 17:42, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
The Trusted Services project provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The FF-A SPs are accessible through the FF-A driver, but this doesn't provide a user space interface. The goal of this TEE driver is to make Trusted Services SPs accessible for user space clients.
I am interested in exploring the user space library/applications. Do you have a standard library implementation and some example user-space applications leveraging this driver interface?
Yes we have a library reference implementation in Trusted Services using this driver called libts [1]. There are some test applications that rely on this library, e.g. ts-service-test [2]. Also, the Parsec project can use Trusted Services as backend through libts [3] (note, the version of TS currently integrated in Parsec is not recent, thus the libts in there still uses an earlier version of this driver).
Although I am currently exploring the user-space interface, I would also like to give it a hands-on try. Is it possible to run this Trusted Services setup on QEMU? If so, can you share instructions on how to test it?
[snip]
I haven't dug deeper into the ABI yet, which is something I will look into. But these RPC commands caught my attention. Are these RPC calls blocking in nature? Is there a possibility that these could cause CPU stalls? Do the Linux interrupts remain unhandled until the RPC calls return?
Yes, that is correct. We did indeed encounter CPU stalls; our solution was to enable preemption of S-EL0 SPs in OP-TEE [4], which solved the issue.
I would have preferred to unite FFA_INTERRUPT and OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT since underneath both are using FFA ABI.
Jens,
Can we change OP-TEE to use FFA_INTERRUPT as well when using FFA ABI?
No, OP-TEE uses managed exit. Among other advantages, it allows resuming execution on a different CPU.
I suppose that should be the case with FFA_INTERRUPT too. OP-TEE should be able to resume SPs on different CPUs as well, right?
Possibly, but I leave that to Balint and company to sort out if that's desired or not.
Cheers, Jens
On 19 Oct 2023, at 16:16, Jens Wiklander wrote:
On Tue, Oct 17, 2023 at 1:14 PM Sumit Garg sumit.garg@linaro.org wrote:
On Mon, 16 Oct 2023 at 13:27, Jens Wiklander jens.wiklander@linaro.org wrote:
On Fri, Oct 13, 2023 at 1:38 PM Sumit Garg sumit.garg@linaro.org wrote:
On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote:
On 3 Oct 2023, at 17:42, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
I haven't dug deeper into the ABI yet, which is something I will look into. But these RPC commands caught my attention. Are these RPC calls blocking in nature? Is there a possibility that these could cause CPU stalls? Do the Linux interrupts remain unhandled until the RPC calls return?
Yes, that is correct. We did indeed encounter CPU stalls; our solution was to enable preemption of S-EL0 SPs in OP-TEE [4], which solved the issue.
I would have preferred to unite FFA_INTERRUPT and OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT since underneath both are using FFA ABI.
Jens,
Can we change OP-TEE to use FFA_INTERRUPT as well when using FFA ABI?
No, OP-TEE uses managed exit. Among other advantages, it allows resuming execution on a different CPU.
I suppose that should be the case with FFA_INTERRUPT too. OP-TEE should be able to resume SPs on different CPUs as well, right?
Possibly, but I leave that to Balint and company to sort out if that's desired or not.
FF-A mandates that S-EL0 SPs have a single execution context, run only on a single PE in the system at any point in time, and are capable of migrating. Also, FF-A allows resuming an S-EL0 SP on a different CPU after it gets preempted by an NS interrupt. I think OP-TEE as S-EL1 SPMC does support this, but I don't have a setup yet that would explicitly test this scenario.
Managed exit is only available for S-EL1 SPs.
Regards, Balint
On Fri, 20 Oct 2023 at 19:29, Balint Dobszay balint.dobszay@arm.com wrote:
On 19 Oct 2023, at 16:16, Jens Wiklander wrote:
On Tue, Oct 17, 2023 at 1:14 PM Sumit Garg sumit.garg@linaro.org wrote:
On Mon, 16 Oct 2023 at 13:27, Jens Wiklander jens.wiklander@linaro.org wrote:
On Fri, Oct 13, 2023 at 1:38 PM Sumit Garg sumit.garg@linaro.org wrote:
On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote:
On 3 Oct 2023, at 17:42, Sumit Garg wrote: > On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
Yes, that is correct. We did indeed encounter CPU stalls; our solution was to enable preemption of S-EL0 SPs in OP-TEE [4], which solved the issue.
I would have preferred to unite FFA_INTERRUPT and OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT since underneath both are using FFA ABI.
Jens,
Can we change OP-TEE to use FFA_INTERRUPT as well when using FFA ABI?
No, OP-TEE uses managed exit. Among other advantages, it allows resuming execution on a different CPU.
I suppose that should be the case with FFA_INTERRUPT too. OP-TEE should be able to resume SPs on different CPUs as well, right?
Possibly, but I leave that to Balint and company to sort out if that's desired or not.
FF-A mandates that S-EL0 SPs have a single execution context, run only on a single PE in the system at any point of time and are capable of migrating. Also, FF-A allows resuming a S-EL0 SP on a different CPU after it gets preempted by a NS interrupt. I think OP-TEE as S-EL1 SPMC does support this, but I don't have a setup yet that would explicitly test this scenario.
You can try adding a loop of a few minutes within a secure partition and see if the Linux scheduler reschedules it on a different CPU. I suppose you need to keep the system loaded with other normal world apps too.
Managed exit is only available for S-EL1 SPs.
So does that mean OP-TEE can use FF-A constructs like FFA_INTERRUPT for managed exit instead of the custom OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT?
-Sumit
On Thu, Oct 26, 2023 at 11:36 AM Sumit Garg sumit.garg@linaro.org wrote:
On Fri, 20 Oct 2023 at 19:29, Balint Dobszay balint.dobszay@arm.com wrote:
On 19 Oct 2023, at 16:16, Jens Wiklander wrote:
On Tue, Oct 17, 2023 at 1:14 PM Sumit Garg sumit.garg@linaro.org wrote:
On Mon, 16 Oct 2023 at 13:27, Jens Wiklander jens.wiklander@linaro.org wrote:
On Fri, Oct 13, 2023 at 1:38 PM Sumit Garg sumit.garg@linaro.org wrote:
On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote: > On 3 Oct 2023, at 17:42, Sumit Garg wrote: >> On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
I would have preferred to unite FFA_INTERRUPT and OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT since underneath both are using FFA ABI.
Jens,
Can we change OP-TEE to use FFA_INTERRUPT as well when using FFA ABI?
No, OP-TEE uses managed exit. Among other advantages, it allows resuming execution on a different CPU.
I suppose that should be the case with FFA_INTERRUPT too. OP-TEE should be able to resume SPs on different CPUs as well, right?
Possibly, but I leave that to Balint and company to sort out if that's desired or not.
FF-A mandates that S-EL0 SPs have a single execution context, run only on a single PE in the system at any point of time and are capable of migrating. Also, FF-A allows resuming a S-EL0 SP on a different CPU after it gets preempted by a NS interrupt. I think OP-TEE as S-EL1 SPMC does support this, but I don't have a setup yet that would explicitly test this scenario.
You can try to add a few minutes loop within a secure partition and see if the Linux scheduler reschedules on a different CPUs. I suppose you need to keep the system loaded with other normal world apps too.
Managed exit is only available for S-EL1 SPs.
So does that mean OP-TEE can use FF-A constructs like (FFA_INTERRUPT) for managed exit instead of custom OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT?
No, and to be clear, OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT isn't a custom solution; it's fully within the specification. Returning via FFA_INTERRUPT is a simplified model where the SP has less control over the CPU. I imagine that it wouldn't play very nicely with spinlocks.
Cheers, Jens
On Thu, 26 Oct 2023 at 16:06, Jens Wiklander jens.wiklander@linaro.org wrote:
On Thu, Oct 26, 2023 at 11:36 AM Sumit Garg sumit.garg@linaro.org wrote:
On Fri, 20 Oct 2023 at 19:29, Balint Dobszay balint.dobszay@arm.com wrote:
On 19 Oct 2023, at 16:16, Jens Wiklander wrote:
On Tue, Oct 17, 2023 at 1:14 PM Sumit Garg sumit.garg@linaro.org wrote:
On Mon, 16 Oct 2023 at 13:27, Jens Wiklander jens.wiklander@linaro.org wrote:
On Fri, Oct 13, 2023 at 1:38 PM Sumit Garg sumit.garg@linaro.org wrote: > On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote: >> On 3 Oct 2023, at 17:42, Sumit Garg wrote: >>> On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
No, OP-TEE uses managed exit. Among other advantages, it allows resuming execution on a different CPU.
I suppose that should be the case with FFA_INTERRUPT too. OP-TEE should be able to resume SPs on different CPUs as well, right?
Possibly, but I leave that to Balint and company to sort out if that's desired or not.
FF-A mandates that S-EL0 SPs have a single execution context, run only on a single PE in the system at any point of time and are capable of migrating. Also, FF-A allows resuming a S-EL0 SP on a different CPU after it gets preempted by a NS interrupt. I think OP-TEE as S-EL1 SPMC does support this, but I don't have a setup yet that would explicitly test this scenario.
You can try to add a few minutes loop within a secure partition and see if the Linux scheduler reschedules on a different CPUs. I suppose you need to keep the system loaded with other normal world apps too.
Managed exit is only available for S-EL1 SPs.
So does that mean OP-TEE can use FF-A constructs like (FFA_INTERRUPT) for managed exit instead of custom OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT?
No, and to be clear OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT isn't a custom solution, it's fully within the specification.
I am not talking about it being out of specification. It's rather that if the base FF-A layer already provides a feature, then we shouldn't need to reimplement it in every SP's (OP-TEE or other trusted OS) communication stack.
Returning via FFA_INTERRUPT is a simplified model where the SP has less control over the CPU.
I would like to understand which exact bits you are referring to. From Linux point of view, I don't see any difference.
I imagine that it wouldn't play very nicely with spinlocks.
Aren't interrupts disabled while spinlock is being held?
-Sumit
On Thu, Oct 26, 2023 at 1:33 PM Sumit Garg sumit.garg@linaro.org wrote:
On Thu, 26 Oct 2023 at 16:06, Jens Wiklander jens.wiklander@linaro.org wrote:
On Thu, Oct 26, 2023 at 11:36 AM Sumit Garg sumit.garg@linaro.org wrote:
On Fri, 20 Oct 2023 at 19:29, Balint Dobszay balint.dobszay@arm.com wrote:
On 19 Oct 2023, at 16:16, Jens Wiklander wrote:
On Tue, Oct 17, 2023 at 1:14 PM Sumit Garg sumit.garg@linaro.org wrote:
On Mon, 16 Oct 2023 at 13:27, Jens Wiklander jens.wiklander@linaro.org wrote: > On Fri, Oct 13, 2023 at 1:38 PM Sumit Garg sumit.garg@linaro.org wrote: >> On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote: >>> On 3 Oct 2023, at 17:42, Sumit Garg wrote: >>>> On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
I suppose that should be the case with FFA_INTERRUPT too. OP-TEE should be able to resume SPs on different CPUs as well, right?
Possibly, but I leave that to Balint and company to sort out if that's desired or not.
FF-A mandates that S-EL0 SPs have a single execution context, run only on a single PE in the system at any point of time and are capable of migrating. Also, FF-A allows resuming a S-EL0 SP on a different CPU after it gets preempted by a NS interrupt. I think OP-TEE as S-EL1 SPMC does support this, but I don't have a setup yet that would explicitly test this scenario.
You can try to add a few minutes loop within a secure partition and see if the Linux scheduler reschedules on a different CPUs. I suppose you need to keep the system loaded with other normal world apps too.
Managed exit is only available for S-EL1 SPs.
So does that mean OP-TEE can use FF-A constructs like (FFA_INTERRUPT) for managed exit instead of custom OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT?
No, and to be clear OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT isn't a custom solution, it's fully within the specification.
I am not talking about it being out of specification. It's rather that if the base FF-A layer provides you with a feature then we shouldn't need to reimplement in every SP (OP-TEE or other trusted OS) communication stack.
Let's take a look at the code we have in the OP-TEE driver for this:

	case OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT:
		/* Interrupt delivered by now */
		break;
The implementation is pretty small. We're of course piggy-backing on the rest of the RPC code, but that would still be needed even if the FF-A framework could handle the managed exits. From OP-TEE's point of view, there's nothing to gain.
Returning via FFA_INTERRUPT is a simplified model where the SP has less control over the CPU.
I would like to understand which exact bits you are referring to. From Linux point of view, I don't see any difference.
Given how the ABI is defined we can't use FFA_INTERRUPT for managed exit. There's a vCPU field that is used by the SPMC, and there's no way for OP-TEE to suspend/resume a thread to let another thread use the CPU.
I imagine that it wouldn't play very nicely with spinlocks.
Aren't interrupts disabled while spinlock is being held?
Yes, but without managed exit that doesn't matter much since suspend/resume are transparent to the SP.
Cheers, Jens
-Sumit
Cheers, Jens
-Sumit
On Thu, 26 Oct 2023 at 19:57, Jens Wiklander jens.wiklander@linaro.org wrote:
On Thu, Oct 26, 2023 at 1:33 PM Sumit Garg sumit.garg@linaro.org wrote:
On Thu, 26 Oct 2023 at 16:06, Jens Wiklander jens.wiklander@linaro.org wrote:
On Thu, Oct 26, 2023 at 11:36 AM Sumit Garg sumit.garg@linaro.org wrote:
On Fri, 20 Oct 2023 at 19:29, Balint Dobszay balint.dobszay@arm.com wrote:
On 19 Oct 2023, at 16:16, Jens Wiklander wrote:
On Tue, Oct 17, 2023 at 1:14 PM Sumit Garg sumit.garg@linaro.org wrote: > On Mon, 16 Oct 2023 at 13:27, Jens Wiklander jens.wiklander@linaro.org wrote: >> On Fri, Oct 13, 2023 at 1:38 PM Sumit Garg sumit.garg@linaro.org wrote: >>> On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote: >>>> On 3 Oct 2023, at 17:42, Sumit Garg wrote: >>>>> On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
>>>>>> +static int tstee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, >>>>>> + struct tee_param *param) >>>>>> +{ >>>>>> + struct tstee *tstee = tee_get_drvdata(ctx->teedev); >>>>>> + struct ffa_device *ffa_dev = tstee->ffa_dev; >>>>>> + struct ts_context_data *ctxdata = ctx->data; >>>>>> + struct ffa_send_direct_data ffa_data; >>>>>> + struct tee_shm *shm = NULL; >>>>>> + struct ts_session *sess; >>>>>> + u32 req_len, ffa_args[5] = {}; >>>>>> + int shm_id, rc; >>>>>> + u8 iface_id; >>>>>> + u64 handle; >>>>>> + u16 opcode; >>>>>> + >>>>>> + mutex_lock(&ctxdata->mutex); >>>>>> + sess = find_session(ctxdata, arg->session); >>>>>> + >>>>>> + /* Do this while holding the mutex to make sure that the session wasn't closed meanwhile */ >>>>>> + if (sess) >>>>>> + iface_id = sess->iface_id; >>>>>> + >>>>>> + mutex_unlock(&ctxdata->mutex); >>>>>> + if (!sess) >>>>>> + return -EINVAL; >>>>>> + >>>>>> + opcode = lower_16_bits(arg->func); >>>>>> + shm_id = lower_32_bits(param[0].u.value.a); >>>>>> + req_len = lower_32_bits(param[0].u.value.b); >>>>>> + >>>>>> + if (shm_id != 0) { >>>>>> + shm = tee_shm_get_from_id(ctx, shm_id); >>>>>> + if (IS_ERR(shm)) >>>>>> + return PTR_ERR(shm); >>>>>> + >>>>>> + if (shm->size < req_len) { >>>>>> + pr_err("request doesn't fit into shared memory buffer\n"); >>>>>> + rc = -EINVAL; >>>>>> + goto out; >>>>>> + } >>>>>> + >>>>>> + handle = shm->sec_world_id; >>>>>> + } else { >>>>>> + handle = FFA_INVALID_MEM_HANDLE; >>>>>> + } >>>>>> + >>>>>> + ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(iface_id, opcode); >>>>>> + ffa_args[TS_RPC_SERVICE_MEM_HANDLE_LSW] = lower_32_bits(handle); >>>>>> + ffa_args[TS_RPC_SERVICE_MEM_HANDLE_MSW] = upper_32_bits(handle); >>>>>> + ffa_args[TS_RPC_SERVICE_REQ_LEN] = req_len; >>>>>> + ffa_args[TS_RPC_SERVICE_CLIENT_ID] = 0; >>>>>> + >>>>>> + arg_list_to_ffa_data(ffa_args, &ffa_data); >>>>>> + rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data); >>>>> >>>>> I haven't dug deeper into the ABI yet, which is something I will look >>>>> into. But these RPC commands caught my attention. Are these RPC calls >>>>> blocking in nature? Is there a possibility that these could cause CPU >>>>> stalls? Do the Linux interrupts remain unhandled until the RPC calls >>>>> return? >>>> >>>> Yes, that is correct. We did encounter CPU stalls indeed, our solution >>>> was to enable preemption of S-EL0 SPs in OP-TEE [3] which solved the >>>> issue. >>> >>> I would have preferred to unite FFA_INTERRUPT and >>> OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT since underneath both are >>> using FFA ABI. >>> >>> Jens, >>> >>> Can we change OP-TEE to use FFA_INTERRUPT as well when using FFA ABI? >> >> No, OP-TEE uses managed exit. Among other advantages, it allows >> resuming execution on a different CPU. >> > > I suppose that should be the case with FFA_INTERRUPT too. OP-TEE > should be able to resume SPs on different CPUs as well, right?
Possibly, but I leave that to Balint and company to sort out if that's desired or not.
FF-A mandates that S-EL0 SPs have a single execution context, run on only a single PE in the system at any point in time, and are capable of migrating. Also, FF-A allows resuming a S-EL0 SP on a different CPU after it gets preempted by a NS interrupt. I think OP-TEE as the S-EL1 SPMC does support this, but I don't have a setup yet that would explicitly test this scenario.
You can try adding a loop that runs for a few minutes within a secure partition and see if the Linux scheduler reschedules it on different CPUs. I suppose you need to keep the system loaded with other normal world apps too.
Managed exit is only available for S-EL1 SPs.
So does that mean OP-TEE can use FF-A constructs like FFA_INTERRUPT for managed exit instead of the custom OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT?
No, and to be clear OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT isn't a custom solution, it's fully within the specification.
I am not talking about it being out of specification. It's rather that if the base FF-A layer provides you with a feature, then we shouldn't need to reimplement it in every SP's (OP-TEE or other trusted OS) communication stack.
Let's take a look at the code we have in the OP-TEE driver for this:

    case OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT:
        /* Interrupt delivered by now */
        break;
The implementation is pretty small. We're of course piggy-backing on the rest of the RPC code, but that would still be needed even if the FF-A framework could handle the managed exits. From OP-TEE's point of view, there's nothing to gain.
Returning via FFA_INTERRUPT is a simplified model where the SP has less control over the CPU.
I would like to understand which exact bits you are referring to. From Linux point of view, I don't see any difference.
Given how the ABI is defined we can't use FFA_INTERRUPT for managed exit.
Okay I see that ABI issue. We have to be backwards compatible and that will cause more pain rather than reusing FFA_INTERRUPT here. I think I can live with that in mind.
-Sumit
There's a vCPU field that is used by the SPMC, and there's no way for OP-TEE to suspend/resume a thread to let another thread use the CPU.
I imagine that it wouldn't play very nicely with spinlocks.
Aren't interrupts disabled while spinlock is being held?
Yes, but without managed exit that doesn't matter much since suspend/resume are transparent to the SP.
Cheers, Jens
-Sumit
Cheers, Jens
-Sumit
On 26 Oct 2023, at 11:36, Sumit Garg wrote:
On Fri, 20 Oct 2023 at 19:29, Balint Dobszay balint.dobszay@arm.com wrote:
On 19 Oct 2023, at 16:16, Jens Wiklander wrote:
On Tue, Oct 17, 2023 at 1:14 PM Sumit Garg sumit.garg@linaro.org wrote:
On Mon, 16 Oct 2023 at 13:27, Jens Wiklander jens.wiklander@linaro.org wrote:
On Fri, Oct 13, 2023 at 1:38 PM Sumit Garg sumit.garg@linaro.org wrote:
On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote: > On 3 Oct 2023, at 17:42, Sumit Garg wrote: >> On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
>>> +static int tstee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, >>> + struct tee_param *param) >>> +{ >>> + struct tstee *tstee = tee_get_drvdata(ctx->teedev); >>> + struct ffa_device *ffa_dev = tstee->ffa_dev; >>> + struct ts_context_data *ctxdata = ctx->data; >>> + struct ffa_send_direct_data ffa_data; >>> + struct tee_shm *shm = NULL; >>> + struct ts_session *sess; >>> + u32 req_len, ffa_args[5] = {}; >>> + int shm_id, rc; >>> + u8 iface_id; >>> + u64 handle; >>> + u16 opcode; >>> + >>> + mutex_lock(&ctxdata->mutex); >>> + sess = find_session(ctxdata, arg->session); >>> + >>> + /* Do this while holding the mutex to make sure that the session wasn't closed meanwhile */ >>> + if (sess) >>> + iface_id = sess->iface_id; >>> + >>> + mutex_unlock(&ctxdata->mutex); >>> + if (!sess) >>> + return -EINVAL; >>> + >>> + opcode = lower_16_bits(arg->func); >>> + shm_id = lower_32_bits(param[0].u.value.a); >>> + req_len = lower_32_bits(param[0].u.value.b); >>> + >>> + if (shm_id != 0) { >>> + shm = tee_shm_get_from_id(ctx, shm_id); >>> + if (IS_ERR(shm)) >>> + return PTR_ERR(shm); >>> + >>> + if (shm->size < req_len) { >>> + pr_err("request doesn't fit into shared memory buffer\n"); >>> + rc = -EINVAL; >>> + goto out; >>> + } >>> + >>> + handle = shm->sec_world_id; >>> + } else { >>> + handle = FFA_INVALID_MEM_HANDLE; >>> + } >>> + >>> + ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(iface_id, opcode); >>> + ffa_args[TS_RPC_SERVICE_MEM_HANDLE_LSW] = lower_32_bits(handle); >>> + ffa_args[TS_RPC_SERVICE_MEM_HANDLE_MSW] = upper_32_bits(handle); >>> + ffa_args[TS_RPC_SERVICE_REQ_LEN] = req_len; >>> + ffa_args[TS_RPC_SERVICE_CLIENT_ID] = 0; >>> + >>> + arg_list_to_ffa_data(ffa_args, &ffa_data); >>> + rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data); >> >> I haven't dug deeper into the ABI yet, which is something I will look >> into. But these RPC commands caught my attention. Are these RPC calls >> blocking in nature? Is there a possibility that these could cause CPU >> stalls? Do the Linux interrupts remain unhandled until the RPC calls >> return? > > Yes, that is correct. We did encounter CPU stalls indeed, our solution > was to enable preemption of S-EL0 SPs in OP-TEE [3] which solved the > issue.
I would have preferred to unite FFA_INTERRUPT and OPTEE_FFA_YIELDING_CALL_RETURN_INTERRUPT since underneath both are using FFA ABI.
Jens,
Can we change OP-TEE to use FFA_INTERRUPT as well when using FFA ABI?
No, OP-TEE uses managed exit. Among other advantages, it allows resuming execution on a different CPU.
I suppose that should be the case with FFA_INTERRUPT too. OP-TEE should be able to resume SPs on different CPUs as well, right?
Possibly, but I leave that to Balint and company to sort out if that's desired or not.
FF-A mandates that S-EL0 SPs have a single execution context, run on only a single PE in the system at any point in time, and are capable of migrating. Also, FF-A allows resuming a S-EL0 SP on a different CPU after it gets preempted by a NS interrupt. I think OP-TEE as the S-EL1 SPMC does support this, but I don't have a setup yet that would explicitly test this scenario.
You can try adding a loop that runs for a few minutes within a secure partition and see if the Linux scheduler reschedules it on different CPUs. I suppose you need to keep the system loaded with other normal world apps too.
Thanks, I'll give it a try.
Regards, Balint
On 13 Oct 2023, at 13:38, Sumit Garg wrote:
On Tue, 10 Oct 2023 at 21:11, Balint Dobszay balint.dobszay@arm.com wrote:
On 3 Oct 2023, at 17:42, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
The Trusted Services project provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The FF-A SPs are accessible through the FF-A driver, but this doesn't provide a user space interface. The goal of this TEE driver is to make Trusted Services SPs accessible for user space clients.
I am interested in exploring the user space library/applications. Do you have a standard library implementation and some example user-space applications leveraging this driver interface?
Yes, we have a reference library implementation in Trusted Services that uses this driver, called libts [1]. There are some test applications that rely on this library, e.g. ts-service-test [2]. Also, the Parsec project can use Trusted Services as a backend through libts [3] (note, the version of TS currently integrated in Parsec is not recent, thus the libts in there still uses an earlier version of this driver).
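For readers who want to poke at the interface without libts, a rough sketch of driving the invoke path directly through the generic TEE uAPI could look like the following. The opcode, request length and buffer size are made up; the value-parameter layout follows the convention used by tstee_invoke_func() in this patch (shm id and request length in, response length out):

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/tee.h>

/* Rough sketch only, error handling is minimal */
int tstee_invoke_example(int fd, const uint8_t uuid[TEE_IOCTL_UUID_LEN])
{
    struct tee_ioctl_shm_alloc_data shm = { .size = 4096 };
    struct tee_ioctl_open_session_arg open_arg = { };
    union {
        struct tee_ioctl_invoke_arg arg;
        uint8_t buf[sizeof(struct tee_ioctl_invoke_arg) +
                    sizeof(struct tee_ioctl_param)];
    } inv = { };
    struct tee_ioctl_param *param = (struct tee_ioctl_param *)(&inv.arg + 1);
    struct tee_ioctl_buf_data bd;
    void *va;
    int shm_fd;

    /* Allocate a buffer from the driver's shm pool and map it */
    shm_fd = ioctl(fd, TEE_IOC_SHM_ALLOC, &shm);
    if (shm_fd < 0)
        return -1;
    va = mmap(NULL, shm.size, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
    if (va == MAP_FAILED)
        return -1;

    /* Open a session to the service identified by its service UUID */
    memcpy(open_arg.uuid, uuid, TEE_IOCTL_UUID_LEN);
    bd.buf_ptr = (uintptr_t)&open_arg;
    bd.buf_len = sizeof(open_arg);
    if (ioctl(fd, TEE_IOC_OPEN_SESSION, &bd) || open_arg.ret)
        return -1;

    /* ... serialize the service request into va here ... */

    inv.arg.func = 0x0001;               /* made-up service opcode */
    inv.arg.session = open_arg.session;
    inv.arg.num_params = 1;
    param->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT;
    param->a = shm.id;                   /* shared memory id */
    param->b = 128;                      /* made-up request length */
    bd.buf_ptr = (uintptr_t)&inv;
    bd.buf_len = sizeof(inv.buf);
    if (ioctl(fd, TEE_IOC_INVOKE, &bd))
        return -1;

    /* param->a now holds the response length, inv.arg.ret the service status */
    return (int)inv.arg.ret;
}

The actual opcode and request serialization would of course come from the given service's protocol definition; libts hides these details.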
Although I am currently exploring the user-space interface, I would like to give it a hands-on try. Is it possible to run this Trusted Services setup on QEMU? If yes, can you share instructions on how to test it?
Currently we only support the FVP Base platform. Please find the instructions for setting up the build environment and running the tests at [1]. Note, our upstream end-to-end integration currently relies on the out-of-tree version of this driver, so Trusted Services looks for the version number of the driver in that repository. To make it compatible with this in-tree version, please use the branch at [2].
Regards, Balint
[1] https://developer.trustedfirmware.org/w/trusted-services/running_ts_on_fvp [2] https://git.trustedfirmware.org/TS/trusted-services.git/log/?h=topics/bd/tst...
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
The Trusted Services project provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The FF-A SPs are accessible through the FF-A driver, but this doesn't provide a user space interface. The goal of this TEE driver is to make Trusted Services SPs accessible for user space clients.
All TS SPs have the same FF-A UUID, it identifies the RPC protocol used by TS. A TS SP can host one or more services, a service is identified by its service UUID. The same type of service cannot be present twice in the same SP. During SP boot each service in the SP gets assigned an interface ID, this is just a short ID to simplify message addressing. There is 1:1 mapping between TS SPs and TEE devices, i.e. a TEE device gets registered for each Trusted Services compatible FF-A device found on the FF-A bus. Opening the SP corresponds to opening the TEE dev and creating a TEE context. A TS SP hosts one or more services, opening a service corresponds to opening a session in the given tee_context.
Signed-off-by: Balint Dobszay balint.dobszay@arm.com
Hi All,
This is an initial TEE driver implementation for Trusted Services [1]. Trusted Services is a TrustedFirmware.org project that provides a framework for developing and deploying device Root Of Trust services in FF-A Secure Partitions. The project hosts the reference implementation of Arm Platform Security Architecture [2] for Arm A-profile devices.
The FF-A Secure Partitions (SPs) are accessible through the FF-A driver in Linux. However, the FF-A driver doesn't have a user space interface so user space clients currently cannot access Trusted Services. The goal of this TEE driver is to bridge this gap and make Trusted Services functionality accessible from user space.
The predecessor of this driver is an out-of-tree kernel module that we have been using for prototyping [3]. An integration of Trusted Services, OP-TEE as a S-EL1 SPMC, TF-A, Linux and this out-of-tree module is available as part of the OP-TEE project, please find the manifest file at [4].
Some key points about the structure of Trusted Services (for more details please see the documentation at [5]):
- A Trusted Services SP can host one or multiple services. All TS SPs have the same FF-A UUID, it identifies the RPC protocol used by TS. The protocol's register ABI definition is available at [6] and is implemented in tstee_private.h.
- A service hosted by an SP is identified by its service UUID. The same type of service can be present in multiple SPs, but not twice in the same SP. During SP boot each service in the SP gets assigned an interface ID, this is just a short ID to simplify message addressing.
- There is 1:1 mapping between TS SPs and TEE devices, i.e. a TEE device gets registered for each Trusted Services compatible FF-A device found on the FF-A bus.
- Opening the SP corresponds to opening the TEE dev and creating a TEE context. A TS SP hosts one or more services, opening a service corresponds to opening a session in the given tee_context.
Regards, Balint
[1] https://www.trustedfirmware.org/projects/trusted-services/ [2] https://www.arm.com/architecture/security-features/platform-security [3] https://gitlab.arm.com/linux-arm/linux-trusted-services [4] https://github.com/OP-TEE/manifest/blob/master/fvp-ts.xml [5] https://trusted-services.readthedocs.io [6] https://trusted-services.readthedocs.io/en/integration/developer/service-acc...
drivers/tee/Kconfig | 1 + drivers/tee/Makefile | 1 + drivers/tee/tstee/Kconfig | 11 + drivers/tee/tstee/Makefile | 3 + drivers/tee/tstee/core.c | 473 ++++++++++++++++++++++++++++++ drivers/tee/tstee/shm_pool.c | 91 ++++++ drivers/tee/tstee/tstee_private.h | 97 ++++++ include/uapi/linux/tee.h | 1 + 8 files changed, 678 insertions(+) create mode 100644 drivers/tee/tstee/Kconfig create mode 100644 drivers/tee/tstee/Makefile create mode 100644 drivers/tee/tstee/core.c create mode 100644 drivers/tee/tstee/shm_pool.c create mode 100644 drivers/tee/tstee/tstee_private.h
diff --git a/drivers/tee/Kconfig b/drivers/tee/Kconfig index 73a147202e88..61b507c18780 100644 --- a/drivers/tee/Kconfig +++ b/drivers/tee/Kconfig @@ -15,5 +15,6 @@ if TEE
source "drivers/tee/optee/Kconfig" source "drivers/tee/amdtee/Kconfig" +source "drivers/tee/tstee/Kconfig"
endif diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile index 68da044afbfa..5488cba30bd2 100644 --- a/drivers/tee/Makefile +++ b/drivers/tee/Makefile @@ -5,3 +5,4 @@ tee-objs += tee_shm.o tee-objs += tee_shm_pool.o obj-$(CONFIG_OPTEE) += optee/ obj-$(CONFIG_AMDTEE) += amdtee/ +obj-$(CONFIG_ARM_TSTEE) += tstee/ diff --git a/drivers/tee/tstee/Kconfig b/drivers/tee/tstee/Kconfig new file mode 100644 index 000000000000..cd8d32586a77 --- /dev/null +++ b/drivers/tee/tstee/Kconfig @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: GPL-2.0-only +config ARM_TSTEE
tristate "Arm Trusted Services TEE driver"
depends on ARM_FFA_TRANSPORT
default n
help
The Trusted Services project provides a framework for developing and
deploying device Root Of Trust services in FF-A Secure Partitions.
This driver provides an interface to make Trusted Services Secure
Partitions accessible for user space clients, since the FF-A driver
doesn't implement a user space interface directly.
diff --git a/drivers/tee/tstee/Makefile b/drivers/tee/tstee/Makefile new file mode 100644 index 000000000000..dd54b1f88652 --- /dev/null +++ b/drivers/tee/tstee/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0-only +arm-tstee-objs := core.o shm_pool.o +obj-$(CONFIG_ARM_TSTEE) = arm-tstee.o diff --git a/drivers/tee/tstee/core.c b/drivers/tee/tstee/core.c new file mode 100644 index 000000000000..c0194638b7da --- /dev/null +++ b/drivers/tee/tstee/core.c @@ -0,0 +1,473 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
 * Copyright (c) 2023, Arm Limited
 */
+#define DRIVER_NAME "Arm TSTEE" +#define pr_fmt(fmt) DRIVER_NAME ": " fmt
+#include <linux/arm_ffa.h> +#include <linux/err.h> +#include <linux/errno.h> +#include <linux/kernel.h> +#include <linux/limits.h> +#include <linux/list.h> +#include <linux/mm.h> +#include <linux/module.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/stat.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uaccess.h>
+#include "tstee_private.h"
+#define FFA_INVALID_MEM_HANDLE U64_MAX
+static void arg_list_to_ffa_data(const u32 *args, struct ffa_send_direct_data *data) +{
data->data0 = args[0];
data->data1 = args[1];
data->data2 = args[2];
data->data3 = args[3];
data->data4 = args[4];
+}
+static void arg_list_from_ffa_data(const struct ffa_send_direct_data *data, u32 *args) +{
args[0] = lower_32_bits(data->data0);
args[1] = lower_32_bits(data->data1);
args[2] = lower_32_bits(data->data2);
args[3] = lower_32_bits(data->data3);
args[4] = lower_32_bits(data->data4);
+}
+static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers) +{
struct tstee *tstee = tee_get_drvdata(teedev);
struct tee_ioctl_version_data v = {
.impl_id = TEE_IMPL_ID_TSTEE,
.impl_caps = tstee->ffa_dev->vm_id,
So while exploring the user-space interface, I observed an anomaly here. The ".impl_caps" refers to "Implementation specific capabilities" meant to support backwards compatibility of a particular TEE implementation. But here I observe you are using it instead for endpoint_id. Can you provide the reasoning behind it? Also, do you plan to support multiple endpoints via this driver?
.gen_caps = 0,
Do you plan to support TEE_GEN_CAP_REG_MEM? IOW, do you expect user-space to leverage TEE_IOC_SHM_REGISTER? I can see this driver already provides tstee_shm_register() and tstee_shm_unregister() callbacks.
};
*vers = v;
+}
+static int tstee_open(struct tee_context *ctx) +{
struct ts_context_data *ctxdata;
ctxdata = kzalloc(sizeof(*ctxdata), GFP_KERNEL);
if (!ctxdata)
return -ENOMEM;
mutex_init(&ctxdata->mutex);
idr_init(&ctxdata->sess_ids);
INIT_LIST_HEAD(&ctxdata->sess_list);
ctx->data = ctxdata;
return 0;
+}
+static void tstee_release(struct tee_context *ctx) +{
struct ts_context_data *ctxdata = ctx->data;
struct ts_session *sess, *sess_tmp;
if (!ctxdata)
return;
list_for_each_entry_safe(sess, sess_tmp, &ctxdata->sess_list, list_node) {
list_del(&sess->list_node);
idr_remove(&ctxdata->sess_ids, sess->session_id);
kfree(sess);
}
idr_destroy(&ctxdata->sess_ids);
mutex_destroy(&ctxdata->mutex);
kfree(ctxdata);
ctx->data = NULL;
+}
+static struct ts_session *find_session(struct ts_context_data *ctxdata, u32 session_id) +{
struct ts_session *sess;
list_for_each_entry(sess, &ctxdata->sess_list, list_node)
if (sess->session_id == session_id)
return sess;
return NULL;
+}
+static int tstee_open_session(struct tee_context *ctx, struct tee_ioctl_open_session_arg *arg,
struct tee_param *param __always_unused)
+{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ts_context_data *ctxdata = ctx->data;
struct ffa_send_direct_data ffa_data;
struct ts_session *sess = NULL;
u32 ffa_args[5] = {};
Please don't use magic numbers.
int sess_id;
int rc;
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_SERVICE_INFO);
memcpy((u8 *)(ffa_args + TS_RPC_SERVICE_INFO_UUID0), arg->uuid, UUID_SIZE);
arg_list_to_ffa_data(ffa_args, &ffa_data);
rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
if (rc)
return rc;
arg_list_from_ffa_data(&ffa_data, ffa_args);
if (ffa_args[TS_RPC_SERVICE_INFO_RPC_STATUS] != TS_RPC_OK)
return -ENODEV;
if (ffa_args[TS_RPC_SERVICE_INFO_IFACE] > U8_MAX)
return -EINVAL;
sess = kzalloc(sizeof(*sess), GFP_KERNEL);
if (!sess)
return -ENOMEM;
sess_id = idr_alloc(&ctxdata->sess_ids, sess, 1, 0, GFP_KERNEL);
if (sess_id < 0) {
kfree(sess);
return sess_id;
}
sess->session_id = sess_id;
sess->iface_id = ffa_args[TS_RPC_SERVICE_INFO_IFACE];
mutex_lock(&ctxdata->mutex);
list_add(&sess->list_node, &ctxdata->sess_list);
mutex_unlock(&ctxdata->mutex);
arg->session = sess_id;
arg->ret = 0;
return 0;
+}
+static int tstee_close_session(struct tee_context *ctx, u32 session) +{
struct ts_context_data *ctxdata = ctx->data;
struct ts_session *sess;
mutex_lock(&ctxdata->mutex);
sess = find_session(ctxdata, session);
if (sess)
list_del(&sess->list_node);
mutex_unlock(&ctxdata->mutex);
if (!sess)
return -EINVAL;
idr_remove(&ctxdata->sess_ids, sess->session_id);
kfree(sess);
return 0;
+}
+static int tstee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg,
struct tee_param *param)
+{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ts_context_data *ctxdata = ctx->data;
struct ffa_send_direct_data ffa_data;
struct tee_shm *shm = NULL;
struct ts_session *sess;
u32 req_len, ffa_args[5] = {};
ditto.
-Sumit
int shm_id, rc;
u8 iface_id;
u64 handle;
u16 opcode;
mutex_lock(&ctxdata->mutex);
sess = find_session(ctxdata, arg->session);
/* Do this while holding the mutex to make sure that the session wasn't closed meanwhile */
if (sess)
iface_id = sess->iface_id;
mutex_unlock(&ctxdata->mutex);
if (!sess)
return -EINVAL;
opcode = lower_16_bits(arg->func);
shm_id = lower_32_bits(param[0].u.value.a);
req_len = lower_32_bits(param[0].u.value.b);
if (shm_id != 0) {
shm = tee_shm_get_from_id(ctx, shm_id);
if (IS_ERR(shm))
return PTR_ERR(shm);
if (shm->size < req_len) {
pr_err("request doesn't fit into shared memory buffer\n");
rc = -EINVAL;
goto out;
}
handle = shm->sec_world_id;
} else {
handle = FFA_INVALID_MEM_HANDLE;
}
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(iface_id, opcode);
ffa_args[TS_RPC_SERVICE_MEM_HANDLE_LSW] = lower_32_bits(handle);
ffa_args[TS_RPC_SERVICE_MEM_HANDLE_MSW] = upper_32_bits(handle);
ffa_args[TS_RPC_SERVICE_REQ_LEN] = req_len;
ffa_args[TS_RPC_SERVICE_CLIENT_ID] = 0;
arg_list_to_ffa_data(ffa_args, &ffa_data);
rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
if (rc)
goto out;
arg_list_from_ffa_data(&ffa_data, ffa_args);
if (ffa_args[TS_RPC_SERVICE_RPC_STATUS] != TS_RPC_OK) {
pr_err("invoke_func rpc status: %d\n", ffa_args[TS_RPC_SERVICE_RPC_STATUS]);
rc = -EINVAL;
goto out;
}
arg->ret = ffa_args[TS_RPC_SERVICE_STATUS];
if (shm && shm->size >= ffa_args[TS_RPC_SERVICE_RESP_LEN])
param[0].u.value.a = ffa_args[TS_RPC_SERVICE_RESP_LEN];
+out:
if (shm)
tee_shm_put(shm);
return rc;
+}
+static int tstee_cancel_req(struct tee_context *ctx __always_unused, u32 cancel_id __always_unused,
u32 session __always_unused)
+{
return -EINVAL;
+}
+int tstee_shm_register(struct tee_context *ctx, struct tee_shm *shm, struct page **pages,
size_t num_pages, unsigned long start __always_unused)
+{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ffa_mem_region_attributes mem_attr = {
.receiver = tstee->ffa_dev->vm_id,
.attrs = FFA_MEM_RW,
.flag = 0,
};
struct ffa_mem_ops_args mem_args = {
.attrs = &mem_attr,
.use_txbuf = true,
.nattrs = 1,
.flags = 0,
};
struct ffa_send_direct_data ffa_data;
struct sg_table sgt;
u32 ffa_args[5] = {};
int rc;
rc = sg_alloc_table_from_pages(&sgt, pages, num_pages, 0, num_pages * PAGE_SIZE,
GFP_KERNEL);
if (rc)
return rc;
mem_args.sg = sgt.sgl;
rc = ffa_dev->ops->mem_ops->memory_share(&mem_args);
sg_free_table(&sgt);
if (rc)
return rc;
shm->sec_world_id = mem_args.g_handle;
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_RETRIEVE_MEM);
ffa_args[TS_RPC_RETRIEVE_MEM_HANDLE_LSW] = lower_32_bits(shm->sec_world_id);
ffa_args[TS_RPC_RETRIEVE_MEM_HANDLE_MSW] = upper_32_bits(shm->sec_world_id);
ffa_args[TS_RPC_RETRIEVE_MEM_TAG_LSW] = 0;
ffa_args[TS_RPC_RETRIEVE_MEM_TAG_MSW] = 0;
arg_list_to_ffa_data(ffa_args, &ffa_data);
rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
if (rc) {
(void)ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0);
return rc;
}
arg_list_from_ffa_data(&ffa_data, ffa_args);
if (ffa_args[TS_RPC_RETRIEVE_MEM_RPC_STATUS] != TS_RPC_OK) {
pr_err("shm_register rpc status: %d\n", ffa_args[TS_RPC_RETRIEVE_MEM_RPC_STATUS]);
ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0);
return -EINVAL;
}
return 0;
+}
+int tstee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm) +{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ffa_send_direct_data ffa_data;
u32 ffa_args[5] = {};
int rc;
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_RELINQ_MEM);
ffa_args[TS_RPC_RELINQ_MEM_HANDLE_LSW] = lower_32_bits(shm->sec_world_id);
ffa_args[TS_RPC_RELINQ_MEM_HANDLE_MSW] = upper_32_bits(shm->sec_world_id);
arg_list_to_ffa_data(ffa_args, &ffa_data);
rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
if (rc)
return rc;
arg_list_from_ffa_data(&ffa_data, ffa_args);
if (ffa_args[TS_RPC_RELINQ_MEM_RPC_STATUS] != TS_RPC_OK) {
pr_err("shm_unregister rpc status: %d\n", ffa_args[TS_RPC_RELINQ_MEM_RPC_STATUS]);
return -EINVAL;
}
rc = ffa_dev->ops->mem_ops->memory_reclaim(shm->sec_world_id, 0);
return rc;
+}
+static const struct tee_driver_ops tstee_ops = {
.get_version = tstee_get_version,
.open = tstee_open,
.release = tstee_release,
.open_session = tstee_open_session,
.close_session = tstee_close_session,
.invoke_func = tstee_invoke_func,
.cancel_req = tstee_cancel_req,
.shm_register = tstee_shm_register,
.shm_unregister = tstee_shm_unregister,
+};
+static const struct tee_desc tstee_desc = {
.name = "tstee-clnt",
.ops = &tstee_ops,
.owner = THIS_MODULE,
+};
+static bool tstee_check_rpc_compatible(struct ffa_device *ffa_dev) +{
struct ffa_send_direct_data ffa_data;
u32 ffa_args[5] = {};
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_GET_VERSION);
arg_list_to_ffa_data(ffa_args, &ffa_data);
if (ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data))
return false;
arg_list_from_ffa_data(&ffa_data, ffa_args);
return ffa_args[TS_RPC_GET_VERSION_RESP] == TS_RPC_PROTOCOL_VERSION;
+}
+static void tstee_deinit_common(struct tstee *tstee) +{
tee_device_unregister(tstee->teedev);
if (tstee->pool)
tee_shm_pool_free(tstee->pool);
kfree(tstee);
+}
+static int tstee_probe(struct ffa_device *ffa_dev) +{
struct tstee *tstee;
int rc;
ffa_dev->ops->msg_ops->mode_32bit_set(ffa_dev);
if (!tstee_check_rpc_compatible(ffa_dev))
return -EINVAL;
tstee = kzalloc(sizeof(*tstee), GFP_KERNEL);
if (!tstee)
return -ENOMEM;
tstee->ffa_dev = ffa_dev;
tstee->pool = tstee_create_shm_pool();
if (IS_ERR(tstee->pool)) {
rc = PTR_ERR(tstee->pool);
tstee->pool = NULL;
goto err;
}
tstee->teedev = tee_device_alloc(&tstee_desc, NULL, tstee->pool, tstee);
if (IS_ERR(tstee->teedev)) {
rc = PTR_ERR(tstee->teedev);
tstee->teedev = NULL;
goto err;
}
rc = tee_device_register(tstee->teedev);
if (rc)
goto err;
ffa_dev_set_drvdata(ffa_dev, tstee);
pr_info("driver initialized for endpoint 0x%x\n", ffa_dev->vm_id);
return 0;
+err:
tstee_deinit_common(tstee);
return rc;
+}
+static void tstee_remove(struct ffa_device *ffa_dev) +{
tstee_deinit_common(ffa_dev->dev.driver_data);
+}
+static const struct ffa_device_id tstee_device_ids[] = {
/* TS RPC protocol UUID: bdcd76d7-825e-4751-963b-86d4f84943ac */
{ TS_RPC_UUID },
{}
+};
+static struct ffa_driver tstee_driver = {
.name = "arm_tstee",
.probe = tstee_probe,
.remove = tstee_remove,
.id_table = tstee_device_ids,
+};
+static int __init mod_init(void) +{
return ffa_register(&tstee_driver);
+} +module_init(mod_init)
+static void __exit mod_exit(void) +{
ffa_unregister(&tstee_driver);
+} +module_exit(mod_exit)
+MODULE_ALIAS("arm-tstee"); +MODULE_AUTHOR("Balint Dobszay balint.dobszay@arm.com"); +MODULE_DESCRIPTION("Arm Trusted Services TEE driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/tee/tstee/shm_pool.c b/drivers/tee/tstee/shm_pool.c new file mode 100644 index 000000000000..518c10dd0735 --- /dev/null +++ b/drivers/tee/tstee/shm_pool.c @@ -0,0 +1,91 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
 * Copyright (c) 2023, Arm Limited
 */
+#include <linux/arm_ffa.h> +#include <linux/device.h> +#include <linux/dma-buf.h> +#include <linux/genalloc.h> +#include <linux/slab.h> +#include <linux/tee_drv.h>
+#include "tstee_private.h"
+static int pool_op_alloc(struct tee_shm_pool *pool __always_unused, struct tee_shm *shm,
size_t size, size_t align __always_unused)
+{
unsigned int order, nr_pages, i;
struct page *page, **pages;
int rc;
if (size == 0)
return -EINVAL;
order = get_order(size);
nr_pages = 1 << order;
page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
if (!page)
return -ENOMEM;
shm->kaddr = page_address(page);
shm->paddr = page_to_phys(page);
shm->size = PAGE_SIZE << order;
pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
if (!pages) {
rc = -ENOMEM;
goto err;
}
for (i = 0; i < nr_pages; i++)
pages[i] = page + i;
rc = tstee_shm_register(shm->ctx, shm, pages, nr_pages, (unsigned long)shm->kaddr);
kfree(pages);
if (rc)
goto err;
return 0;
+err:
__free_pages(page, order);
return rc;
+}
+static void pool_op_free(struct tee_shm_pool *pool __always_unused, struct tee_shm *shm) +{
int rc;
rc = tstee_shm_unregister(shm->ctx, shm);
if (rc)
pr_err("shm id 0x%llx unregister rc %d\n", shm->sec_world_id, rc);
free_pages((unsigned long)shm->kaddr, get_order(shm->size));
shm->kaddr = NULL;
+}
+static void pool_op_destroy_pool(struct tee_shm_pool *pool) +{
kfree(pool);
+}
+static const struct tee_shm_pool_ops pool_ops = {
.alloc = pool_op_alloc,
.free = pool_op_free,
.destroy_pool = pool_op_destroy_pool,
+};
+struct tee_shm_pool *tstee_create_shm_pool(void) +{
struct tee_shm_pool *pool;
pool = kzalloc(sizeof(*pool), GFP_KERNEL);
if (!pool)
return ERR_PTR(-ENOMEM);
pool->ops = &pool_ops;
return pool;
+} diff --git a/drivers/tee/tstee/tstee_private.h b/drivers/tee/tstee/tstee_private.h new file mode 100644 index 000000000000..2ea266804ccf --- /dev/null +++ b/drivers/tee/tstee/tstee_private.h @@ -0,0 +1,97 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/*
 * Copyright (c) 2023, Arm Limited
 */
+#ifndef TSTEE_PRIVATE_H +#define TSTEE_PRIVATE_H
+#include <linux/arm_ffa.h> +#include <linux/bitops.h> +#include <linux/idr.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uuid.h>
+/* UUID of this protocol */ +#define TS_RPC_UUID UUID_INIT(0xbdcd76d7, 0x825e, 0x4751, \
0x96, 0x3b, 0x86, 0xd4, 0xf8, 0x49, 0x43, 0xac)
+/* Protocol version*/ +#define TS_RPC_PROTOCOL_VERSION (1)
+/* Status codes */ +#define TS_RPC_OK (0)
+/* RPC control register */ +#define TS_RPC_CTRL_REG (0) +#define OPCODE_MASK GENMASK(15, 0) +#define IFACE_ID_MASK GENMASK(23, 16) +#define TS_RPC_CTRL_OPCODE(x) ((u16)(FIELD_GET(OPCODE_MASK, (x)))) +#define TS_RPC_CTRL_IFACE_ID(x) ((u8)(FIELD_GET(IFACE_ID_MASK, (x)))) +#define TS_RPC_CTRL_PACK_IFACE_OPCODE(i, o) \
(FIELD_PREP(IFACE_ID_MASK, (i)) | FIELD_PREP(OPCODE_MASK, (o)))
+#define TS_RPC_CTRL_SAP_RC BIT(30) +#define TS_RPC_CTRL_SAP_ERR BIT(31)
+/* Interface ID for RPC management operations */ +#define TS_RPC_MGMT_IFACE_ID (0xff)
+/* Management calls */ +#define TS_RPC_OP_GET_VERSION (0x0000) +#define TS_RPC_GET_VERSION_RESP (1)
+#define TS_RPC_OP_RETRIEVE_MEM (0x0001) +#define TS_RPC_RETRIEVE_MEM_HANDLE_LSW (1) +#define TS_RPC_RETRIEVE_MEM_HANDLE_MSW (2) +#define TS_RPC_RETRIEVE_MEM_TAG_LSW (3) +#define TS_RPC_RETRIEVE_MEM_TAG_MSW (4) +#define TS_RPC_RETRIEVE_MEM_RPC_STATUS (1)
+#define TS_RPC_OP_RELINQ_MEM (0x0002) +#define TS_RPC_RELINQ_MEM_HANDLE_LSW (1) +#define TS_RPC_RELINQ_MEM_HANDLE_MSW (2) +#define TS_RPC_RELINQ_MEM_RPC_STATUS (1)
+#define TS_RPC_OP_SERVICE_INFO (0x0003) +#define TS_RPC_SERVICE_INFO_UUID0 (1) +#define TS_RPC_SERVICE_INFO_UUID1 (2) +#define TS_RPC_SERVICE_INFO_UUID2 (3) +#define TS_RPC_SERVICE_INFO_UUID3 (4) +#define TS_RPC_SERVICE_INFO_RPC_STATUS (1) +#define TS_RPC_SERVICE_INFO_IFACE (2)
+/* Service call */ +#define TS_RPC_SERVICE_MEM_HANDLE_LSW (1) +#define TS_RPC_SERVICE_MEM_HANDLE_MSW (2) +#define TS_RPC_SERVICE_REQ_LEN (3) +#define TS_RPC_SERVICE_CLIENT_ID (4) +#define TS_RPC_SERVICE_RPC_STATUS (1) +#define TS_RPC_SERVICE_STATUS (2) +#define TS_RPC_SERVICE_RESP_LEN (3)
+struct tstee {
struct ffa_device *ffa_dev;
struct tee_device *teedev;
struct tee_shm_pool *pool;
+};
+struct ts_session {
struct list_head list_node;
u32 session_id;
u8 iface_id;
+};
+struct ts_context_data {
struct list_head sess_list;
struct idr sess_ids;
/* Serializes access to this struct */
struct mutex mutex;
+};
+struct tee_shm_pool *tstee_create_shm_pool(void); +int tstee_shm_register(struct tee_context *ctx, struct tee_shm *shm, struct page **pages,
size_t num_pages, unsigned long start);
+int tstee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm);
+#endif /* TSTEE_PRIVATE_H */ diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h index 23e57164693c..d0430bee8292 100644 --- a/include/uapi/linux/tee.h +++ b/include/uapi/linux/tee.h @@ -56,6 +56,7 @@ */ #define TEE_IMPL_ID_OPTEE 1 #define TEE_IMPL_ID_AMDTEE 2 +#define TEE_IMPL_ID_TSTEE 3
/*
 * OP-TEE specific capabilities
-- 2.34.1
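As a worked example of the RPC control register encoding defined in tstee_private.h above:

/*
 * iface_id lives in bits 23:16 of the control register, the opcode in
 * bits 15:0, so e.g. the service info management call packs as:
 *
 *   TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID, TS_RPC_OP_SERVICE_INFO)
 *     = FIELD_PREP(GENMASK(23, 16), 0xff) | FIELD_PREP(GENMASK(15, 0), 0x0003)
 *     = 0x00ff0003
 */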
On 13 Oct 2023, at 14:47, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
diff --git a/drivers/tee/tstee/core.c b/drivers/tee/tstee/core.c new file mode 100644 index 000000000000..c0194638b7da --- /dev/null +++ b/drivers/tee/tstee/core.c @@ -0,0 +1,473 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
 * Copyright (c) 2023, Arm Limited
 */
+#define DRIVER_NAME "Arm TSTEE" +#define pr_fmt(fmt) DRIVER_NAME ": " fmt
+#include <linux/arm_ffa.h> +#include <linux/err.h> +#include <linux/errno.h> +#include <linux/kernel.h> +#include <linux/limits.h> +#include <linux/list.h> +#include <linux/mm.h> +#include <linux/module.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/stat.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uaccess.h>
+#include "tstee_private.h"
+#define FFA_INVALID_MEM_HANDLE U64_MAX
+static void arg_list_to_ffa_data(const u32 *args, struct ffa_send_direct_data *data) +{
data->data0 = args[0];
data->data1 = args[1];
data->data2 = args[2];
data->data3 = args[3];
data->data4 = args[4];
+}
+static void arg_list_from_ffa_data(const struct ffa_send_direct_data *data, u32 *args) +{
args[0] = lower_32_bits(data->data0);
args[1] = lower_32_bits(data->data1);
args[2] = lower_32_bits(data->data2);
args[3] = lower_32_bits(data->data3);
args[4] = lower_32_bits(data->data4);
+}
+static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers) +{
struct tstee *tstee = tee_get_drvdata(teedev);
struct tee_ioctl_version_data v = {
.impl_id = TEE_IMPL_ID_TSTEE,
.impl_caps = tstee->ffa_dev->vm_id,
So while exploring the user-space interface, I observed an anomaly here. The ".impl_caps" refers to "Implementation specific capabilities" meant to support backwards compatibility of a particular TEE implementation. But here I observe you are using it instead for endpoint_id. Can you provide the reasoning behind it? Also, do you plan to support multiple endpoints via this driver?
The mapping between Trusted Services SPs and TEE devices is 1:1, i.e. each /dev/tee<n> device represents exactly one FF-A endpoint. To answer your second question, each instance of the driver represents a single endpoint; multiple endpoints are supported by having multiple instances of the driver. Therefore we always return a single endpoint ID in this field.
The reason behind the usage of this field is that somehow user space has to be able to discover which TEE device represents which FF-A endpoint. This is required when a client wants to invoke an SP with a specific endpoint ID, e.g. in a system where the SPs' endpoint IDs are static and known by the client.
I understand this is not a conventional usage of this field, I couldn't find a better way to "fit" this information into the ABI. My understanding is this solution shouldn't cause any issues for other TEEs, since this implementation specific field should only be parsed by a client if it has already matched on the TEE implementation ID. Please correct me if this assumption is wrong, or if you have any suggestions on other ways to expose the endpoint ID to user space.
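As an illustration of the proposed usage, a client could read the endpoint ID back roughly like this (the device path below is an assumption):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/tee.h>

/* Print the FF-A endpoint ID behind a single TEE device node */
int main(void)
{
    struct tee_ioctl_version_data vers = { };
    int fd = open("/dev/tee0", O_RDWR);

    if (fd < 0 || ioctl(fd, TEE_IOC_VERSION, &vers))
        return 1;

    if (vers.impl_id == TEE_IMPL_ID_TSTEE)
        printf("TS SP endpoint ID: 0x%x\n", vers.impl_caps);

    close(fd);
    return 0;
}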
.gen_caps = 0,
Do you plan to support TEE_GEN_CAP_REG_MEM? IOW, do you expect user-space to leverage TEE_IOC_SHM_REGISTER? I can see this driver already provides tstee_shm_register() and tstee_shm_unregister() callbacks.
We didn't have a use case yet that would require this feature. I will give it some more thought as to whether it's likely to be needed in the future.
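For context, if the driver did advertise TEE_GEN_CAP_REG_MEM, user space registration would go through the standard uAPI roughly like this (the helper below is only a sketch):

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/tee.h>

/* Hypothetical usage if TEE_GEN_CAP_REG_MEM were advertised by this driver */
int register_client_buffer(int fd, size_t len)
{
    struct tee_ioctl_shm_register_data data = { };
    void *buf;

    /* Registered buffers must be page aligned */
    if (posix_memalign(&buf, sysconf(_SC_PAGESIZE), len))
        return -1;

    data.addr = (uintptr_t)buf;
    data.length = len;

    /* On success an fd for the registered shm is returned and data.id holds its id */
    return ioctl(fd, TEE_IOC_SHM_REGISTER, &data);
}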
};
*vers = v;
+}
+static int tstee_open(struct tee_context *ctx) +{
struct ts_context_data *ctxdata;
ctxdata = kzalloc(sizeof(*ctxdata), GFP_KERNEL);
if (!ctxdata)
return -ENOMEM;
mutex_init(&ctxdata->mutex);
idr_init(&ctxdata->sess_ids);
INIT_LIST_HEAD(&ctxdata->sess_list);
ctx->data = ctxdata;
return 0;
+}
+static void tstee_release(struct tee_context *ctx) +{
struct ts_context_data *ctxdata = ctx->data;
struct ts_session *sess, *sess_tmp;
if (!ctxdata)
return;
list_for_each_entry_safe(sess, sess_tmp, &ctxdata->sess_list, list_node) {
list_del(&sess->list_node);
idr_remove(&ctxdata->sess_ids, sess->session_id);
kfree(sess);
}
idr_destroy(&ctxdata->sess_ids);
mutex_destroy(&ctxdata->mutex);
kfree(ctxdata);
ctx->data = NULL;
+}
+static struct ts_session *find_session(struct ts_context_data *ctxdata, u32 session_id) +{
struct ts_session *sess;
list_for_each_entry(sess, &ctxdata->sess_list, list_node)
if (sess->session_id == session_id)
return sess;
return NULL;
+}
+static int tstee_open_session(struct tee_context *ctx, struct tee_ioctl_open_session_arg *arg,
struct tee_param *param __always_unused)
+{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ts_context_data *ctxdata = ctx->data;
struct ffa_send_direct_data ffa_data;
struct ts_session *sess = NULL;
u32 ffa_args[5] = {};
Please don't use magic numbers.
Ack, will fix in the next version.
int sess_id;
int rc;
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_SERVICE_INFO);
memcpy((u8 *)(ffa_args + TS_RPC_SERVICE_INFO_UUID0), arg->uuid, UUID_SIZE);
arg_list_to_ffa_data(ffa_args, &ffa_data);
rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
if (rc)
return rc;
arg_list_from_ffa_data(&ffa_data, ffa_args);
if (ffa_args[TS_RPC_SERVICE_INFO_RPC_STATUS] != TS_RPC_OK)
return -ENODEV;
if (ffa_args[TS_RPC_SERVICE_INFO_IFACE] > U8_MAX)
return -EINVAL;
sess = kzalloc(sizeof(*sess), GFP_KERNEL);
if (!sess)
return -ENOMEM;
sess_id = idr_alloc(&ctxdata->sess_ids, sess, 1, 0, GFP_KERNEL);
if (sess_id < 0) {
kfree(sess);
return sess_id;
}
sess->session_id = sess_id;
sess->iface_id = ffa_args[TS_RPC_SERVICE_INFO_IFACE];
mutex_lock(&ctxdata->mutex);
list_add(&sess->list_node, &ctxdata->sess_list);
mutex_unlock(&ctxdata->mutex);
arg->session = sess_id;
arg->ret = 0;
return 0;
+}
+static int tstee_close_session(struct tee_context *ctx, u32 session) +{
struct ts_context_data *ctxdata = ctx->data;
struct ts_session *sess;
mutex_lock(&ctxdata->mutex);
sess = find_session(ctxdata, session);
if (sess)
list_del(&sess->list_node);
mutex_unlock(&ctxdata->mutex);
if (!sess)
return -EINVAL;
idr_remove(&ctxdata->sess_ids, sess->session_id);
kfree(sess);
return 0;
+}
+static int tstee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg,
struct tee_param *param)
+{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ts_context_data *ctxdata = ctx->data;
struct ffa_send_direct_data ffa_data;
struct tee_shm *shm = NULL;
struct ts_session *sess;
u32 req_len, ffa_args[5] = {};
ditto.
Ack.
Regards, Balint
On Fri, Oct 20, 2023 at 3:52 PM Balint Dobszay balint.dobszay@arm.com wrote:
On 13 Oct 2023, at 14:47, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
diff --git a/drivers/tee/tstee/core.c b/drivers/tee/tstee/core.c new file mode 100644 index 000000000000..c0194638b7da --- /dev/null +++ b/drivers/tee/tstee/core.c @@ -0,0 +1,473 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
 * Copyright (c) 2023, Arm Limited
 */
+#define DRIVER_NAME "Arm TSTEE" +#define pr_fmt(fmt) DRIVER_NAME ": " fmt
+#include <linux/arm_ffa.h> +#include <linux/err.h> +#include <linux/errno.h> +#include <linux/kernel.h> +#include <linux/limits.h> +#include <linux/list.h> +#include <linux/mm.h> +#include <linux/module.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/stat.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uaccess.h>
+#include "tstee_private.h"
+#define FFA_INVALID_MEM_HANDLE U64_MAX
+static void arg_list_to_ffa_data(const u32 *args, struct ffa_send_direct_data *data) +{
data->data0 = args[0];
data->data1 = args[1];
data->data2 = args[2];
data->data3 = args[3];
data->data4 = args[4];
+}
+static void arg_list_from_ffa_data(const struct ffa_send_direct_data *data, u32 *args) +{
args[0] = lower_32_bits(data->data0);
args[1] = lower_32_bits(data->data1);
args[2] = lower_32_bits(data->data2);
args[3] = lower_32_bits(data->data3);
args[4] = lower_32_bits(data->data4);
+}
+static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers) +{
struct tstee *tstee = tee_get_drvdata(teedev);
struct tee_ioctl_version_data v = {
.impl_id = TEE_IMPL_ID_TSTEE,
.impl_caps = tstee->ffa_dev->vm_id,
So while exploring the user-space interface, I observed an anomaly here. The ".impl_caps" refers to "Implementation specific capabilities" meant to support backwards compatibility of a particular TEE implementation. But here I observe you are using it instead for endpoint_id. Can you provide the reasoning behind it? Also, do you plan to support multiple endpoints via this driver?
The mapping between Trusted Services SPs and TEE devices is 1:1, i.e. each /dev/tee<n> device represents exactly one FF-A endpoint. To answer your second question, each instance of the driver represents a single endpoint; multiple endpoints are supported by having multiple instances of the driver. Therefore we always return a single endpoint ID in this field.
The reason behind the usage of this field is that somehow user space has to be able to discover which TEE device represents which FF-A endpoint. This is required when a client wants to invoke an SP with a specific endpoint ID, e.g. in a system where the SPs' endpoint IDs are static and known by the client.
I understand this is not a conventional usage of this field, I couldn't find a better way to "fit" this information into the ABI. My understanding is this solution shouldn't cause any issues for other TEEs, since this implementation specific field should only be parsed by a client if it has already matched on the TEE implementation ID. Please correct me if this assumption is wrong, or if you have any suggestions on other ways to expose the endpoint ID to user space.
impl_caps is a u32 and an SP ID can never use more than 16 bits. How about making this explicit so it's clear that the upper bits are still free for other usage?
Cheers, Jens
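One possible way to make that explicit, sketched for illustration only (the names below are not part of the patch):

/* Hypothetical impl_caps layout: endpoint ID in the low 16 bits, rest reserved */
#define TSTEE_IMPL_CAPS_EP_ID_MASK   GENMASK(15, 0)
#define TSTEE_IMPL_CAPS_EP_ID(caps)  FIELD_GET(TSTEE_IMPL_CAPS_EP_ID_MASK, (caps))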
.gen_caps = 0,
Do you plan to support TEE_GEN_CAP_REG_MEM? IOW, do you expect user-space to leverage TEE_IOC_SHM_REGISTER? I can see this driver already provides tstee_shm_register() and tstee_shm_unregister() callbacks.
We didn't have a use case yet that would require this feature. I will give it some more thought as to whether it's likely to be needed in the future.
};
*vers = v;
+}
+static int tstee_open(struct tee_context *ctx) +{
struct ts_context_data *ctxdata;
ctxdata = kzalloc(sizeof(*ctxdata), GFP_KERNEL);
if (!ctxdata)
return -ENOMEM;
mutex_init(&ctxdata->mutex);
idr_init(&ctxdata->sess_ids);
INIT_LIST_HEAD(&ctxdata->sess_list);
ctx->data = ctxdata;
return 0;
+}
+static void tstee_release(struct tee_context *ctx) +{
struct ts_context_data *ctxdata = ctx->data;
struct ts_session *sess, *sess_tmp;
if (!ctxdata)
return;
list_for_each_entry_safe(sess, sess_tmp, &ctxdata->sess_list, list_node) {
list_del(&sess->list_node);
idr_remove(&ctxdata->sess_ids, sess->session_id);
kfree(sess);
}
idr_destroy(&ctxdata->sess_ids);
mutex_destroy(&ctxdata->mutex);
kfree(ctxdata);
ctx->data = NULL;
+}
+static struct ts_session *find_session(struct ts_context_data *ctxdata, u32 session_id) +{
struct ts_session *sess;
list_for_each_entry(sess, &ctxdata->sess_list, list_node)
if (sess->session_id == session_id)
return sess;
return NULL;
+}
+static int tstee_open_session(struct tee_context *ctx, struct tee_ioctl_open_session_arg *arg,
struct tee_param *param __always_unused)
+{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ts_context_data *ctxdata = ctx->data;
struct ffa_send_direct_data ffa_data;
struct ts_session *sess = NULL;
u32 ffa_args[5] = {};
Please don't use magic numbers.
Ack, will fix in the next version.
int sess_id;
int rc;
ffa_args[TS_RPC_CTRL_REG] = TS_RPC_CTRL_PACK_IFACE_OPCODE(TS_RPC_MGMT_IFACE_ID,
TS_RPC_OP_SERVICE_INFO);
memcpy((u8 *)(ffa_args + TS_RPC_SERVICE_INFO_UUID0), arg->uuid, UUID_SIZE);
arg_list_to_ffa_data(ffa_args, &ffa_data);
rc = ffa_dev->ops->msg_ops->sync_send_receive(ffa_dev, &ffa_data);
if (rc)
return rc;
arg_list_from_ffa_data(&ffa_data, ffa_args);
if (ffa_args[TS_RPC_SERVICE_INFO_RPC_STATUS] != TS_RPC_OK)
return -ENODEV;
if (ffa_args[TS_RPC_SERVICE_INFO_IFACE] > U8_MAX)
return -EINVAL;
sess = kzalloc(sizeof(*sess), GFP_KERNEL);
if (!sess)
return -ENOMEM;
sess_id = idr_alloc(&ctxdata->sess_ids, sess, 1, 0, GFP_KERNEL);
if (sess_id < 0) {
kfree(sess);
return sess_id;
}
sess->session_id = sess_id;
sess->iface_id = ffa_args[TS_RPC_SERVICE_INFO_IFACE];
mutex_lock(&ctxdata->mutex);
list_add(&sess->list_node, &ctxdata->sess_list);
mutex_unlock(&ctxdata->mutex);
arg->session = sess_id;
arg->ret = 0;
return 0;
+}
+static int tstee_close_session(struct tee_context *ctx, u32 session) +{
struct ts_context_data *ctxdata = ctx->data;
struct ts_session *sess;
mutex_lock(&ctxdata->mutex);
sess = find_session(ctxdata, session);
if (sess)
list_del(&sess->list_node);
mutex_unlock(&ctxdata->mutex);
if (!sess)
return -EINVAL;
idr_remove(&ctxdata->sess_ids, sess->session_id);
kfree(sess);
return 0;
+}
+static int tstee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg,
struct tee_param *param)
+{
struct tstee *tstee = tee_get_drvdata(ctx->teedev);
struct ffa_device *ffa_dev = tstee->ffa_dev;
struct ts_context_data *ctxdata = ctx->data;
struct ffa_send_direct_data ffa_data;
struct tee_shm *shm = NULL;
struct ts_session *sess;
u32 req_len, ffa_args[5] = {};
ditto.
Ack.
Regards, Balint
On 24 Oct 2023, at 8:22, Jens Wiklander wrote:
On Fri, Oct 20, 2023 at 3:52 PM Balint Dobszay balint.dobszay@arm.com wrote:
On 13 Oct 2023, at 14:47, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
diff --git a/drivers/tee/tstee/core.c b/drivers/tee/tstee/core.c new file mode 100644 index 000000000000..c0194638b7da --- /dev/null +++ b/drivers/tee/tstee/core.c @@ -0,0 +1,473 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
 * Copyright (c) 2023, Arm Limited
 */
+#define DRIVER_NAME "Arm TSTEE" +#define pr_fmt(fmt) DRIVER_NAME ": " fmt
+#include <linux/arm_ffa.h> +#include <linux/err.h> +#include <linux/errno.h> +#include <linux/kernel.h> +#include <linux/limits.h> +#include <linux/list.h> +#include <linux/mm.h> +#include <linux/module.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/stat.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uaccess.h>
+#include "tstee_private.h"
+#define FFA_INVALID_MEM_HANDLE U64_MAX
+static void arg_list_to_ffa_data(const u32 *args, struct ffa_send_direct_data *data) +{
data->data0 = args[0];
data->data1 = args[1];
data->data2 = args[2];
data->data3 = args[3];
data->data4 = args[4];
+}
+static void arg_list_from_ffa_data(const struct ffa_send_direct_data *data, u32 *args) +{
args[0] = lower_32_bits(data->data0);
args[1] = lower_32_bits(data->data1);
args[2] = lower_32_bits(data->data2);
args[3] = lower_32_bits(data->data3);
args[4] = lower_32_bits(data->data4);
+}
+static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers) +{
struct tstee *tstee = tee_get_drvdata(teedev);
struct tee_ioctl_version_data v = {
.impl_id = TEE_IMPL_ID_TSTEE,
.impl_caps = tstee->ffa_dev->vm_id,
So while exploring the user-space interface, I observed an anomaly here. The ".impl_caps" refers to "Implementation specific capabilities" meant to support backwards compatibility of a particular TEE implementation. But here I observe you are using it instead for endpoint_id. Can you provide the reasoning behind it? Also, do you plan to support multiple endpoints via this driver?
The mapping between Trusted Services SPs and TEE devices is 1:1, i.e. each /dev/tee<n> device represents exactly one FF-A endpoint. To answer your second question, each instance of the driver represents a single endpoint; multiple endpoints are supported by having multiple instances of the driver. Therefore we always return a single endpoint ID in this field.
The reason behind the usage of this field is that somehow user space has to be able to discover which TEE device represents which FF-A endpoint. This is required when a client wants to invoke an SP with a specific endpoint ID, e.g. in a system where the SPs' endpoint IDs are static and known by the client.
I understand this is not a conventional usage of this field, I couldn't find a better way to "fit" this information into the ABI. My understanding is this solution shouldn't cause any issues for other TEEs, since this implementation specific field should only be parsed by a client if it has already matched on the TEE implementation ID. Please correct me if this assumption is wrong, or if you have any suggestions on other ways to expose the endpoint ID to user space.
impl_caps is a u32 and an SP ID can never use more than 16 bits. How about making this explicit so it's clear that the upper bits are still free for other usage?
That's fine with me, if Sumit agrees too?
Regards, Balint
On Fri, 20 Oct 2023 at 19:22, Balint Dobszay balint.dobszay@arm.com wrote:
On 13 Oct 2023, at 14:47, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
diff --git a/drivers/tee/tstee/core.c b/drivers/tee/tstee/core.c new file mode 100644 index 000000000000..c0194638b7da --- /dev/null +++ b/drivers/tee/tstee/core.c @@ -0,0 +1,473 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
 * Copyright (c) 2023, Arm Limited
 */
+#define DRIVER_NAME "Arm TSTEE" +#define pr_fmt(fmt) DRIVER_NAME ": " fmt
+#include <linux/arm_ffa.h> +#include <linux/err.h> +#include <linux/errno.h> +#include <linux/kernel.h> +#include <linux/limits.h> +#include <linux/list.h> +#include <linux/mm.h> +#include <linux/module.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/stat.h> +#include <linux/tee_drv.h> +#include <linux/types.h> +#include <linux/uaccess.h>
+#include "tstee_private.h"
+#define FFA_INVALID_MEM_HANDLE U64_MAX
+static void arg_list_to_ffa_data(const u32 *args, struct ffa_send_direct_data *data) +{
data->data0 = args[0];
data->data1 = args[1];
data->data2 = args[2];
data->data3 = args[3];
data->data4 = args[4];
+}
+static void arg_list_from_ffa_data(const struct ffa_send_direct_data *data, u32 *args) +{
args[0] = lower_32_bits(data->data0);
args[1] = lower_32_bits(data->data1);
args[2] = lower_32_bits(data->data2);
args[3] = lower_32_bits(data->data3);
args[4] = lower_32_bits(data->data4);
+}
+static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers) +{
struct tstee *tstee = tee_get_drvdata(teedev);
struct tee_ioctl_version_data v = {
.impl_id = TEE_IMPL_ID_TSTEE,
.impl_caps = tstee->ffa_dev->vm_id,
So while exploring the user-space interface, I observed an anomaly here. The ".impl_caps" refers to "Implementation specific capabilities" meant to support backwards compatibility of a particular TEE implementation. But here I observe you are using it instead for endpoint_id. Can you provide the reasoning behind it? Also, do you plan to support multiple endpoints via this driver?
The mapping between Trusted Services SPs and TEE devices is 1:1, i.e. each /dev/tee<n> device represents exactly one FF-A endpoint. To answer your second question, each instance of the driver represents a single endpoint; multiple endpoints are supported by having multiple instances of the driver.
I don't follow you here. How are multiple instances of the "arm_tstee" driver going to work? Also, I can only see a single FF-A based device, TS_RPC_UUID, being probed here.
Do you have any working example for this?
Therefore we always return a single endpoint ID in this field.
The reason behind the usage of this field is that somehow user space has to be able to discover which TEE device represents which FF-A endpoint. This is required when a client wants to invoke an SP with a specific endpoint ID, e.g. in a system where the SPs' endpoint IDs are static and known by the client.
I understand this is not a conventional usage of this field, I couldn't find a better way to "fit" this information into the ABI. My understanding is this solution shouldn't cause any issues for other TEEs, since this implementation specific field should only be parsed by a client if it has already matched on the TEE implementation ID. Please correct me if this assumption is wrong, or if you have any suggestions on other ways to expose the endpoint ID to user space.
When you say "each /dev/tee<n> device represents exactly one FF-A endpoint", then "impl_id" should be sufficient to represent the endpoint ID too. But I am still confused about how you are going to support multiple endpoints.
Looking at this driver TEE_IMPL_ID_TSTEE represents an underlying FF-A based TS_RPC_UUID device.
-Sumit
On 26 Oct 2023, at 8:52, Sumit Garg wrote:
On Fri, 20 Oct 2023 at 19:22, Balint Dobszay balint.dobszay@arm.com wrote:
On 13 Oct 2023, at 14:47, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
+static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers)
+{
+	struct tstee *tstee = tee_get_drvdata(teedev);
+	struct tee_ioctl_version_data v = {
+		.impl_id = TEE_IMPL_ID_TSTEE,
+		.impl_caps = tstee->ffa_dev->vm_id,
So while exploring the user-space interface, I observed an anomaly here. The ".impl_caps" refers to "Implementation specific capabilities" meant to support backwards compatibility of a particular TEE implementation. But here I observe you are using it instead for endpoint_id. Can you provide the reasoning behind it? Also, do you plan to support multiple endpoints via this driver?
The mapping between Trusted Services SPs and TEE devices is 1:1, i.e. each /dev/tee<n> device represents exactly one FF-A endpoint. To answer your second question, each instance of the driver represents a single endpoint, multiple endpoints are supported by having multiple instances of the driver.
I don't follow you here. How are multiple instances of the "arm_tstee" driver going to work? Also, I can only see a single FF-A based device (TS_RPC_UUID) being probed here.
My terminology regarding "multiple instances of the driver" was incorrect, apologies. What I meant was that tstee_probe() gets called multiple times, thus allocating multiple instances of "struct tstee".
All of the Trusted Services SPs use the same FF-A UUID (TS_RPC_UUID). These will show up on the FF-A bus in Linux as multiple devices, having the same FF-A UUID, but different endpoint IDs, e.g.
Name          FF-A EP ID  FF-A UUID
------------  ----------  ------------------------------------
OP-TEE        0x8001      486178e0-e7f8-11e3-bc5e-0002a5d5c51b
TS ITS SP     0x8002      bdcd76d7-825e-4751-963b-86d4f84943ac
TS Crypto SP  0x8003      bdcd76d7-825e-4751-963b-86d4f84943ac
TS Attest SP  0x8004      bdcd76d7-825e-4751-963b-86d4f84943ac
Each of the Trusted Services FF-A devices will match the single UUID present in tstee_device_ids[], so tstee_probe() will be called multiple times, each time with a different FF-A device as argument. In the probe function a new TEE device is allocated, along with a new "struct tstee" which represents the connection between a particular ffa_device and tee_device.
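As a rough sketch of that matching behaviour (illustrative only, not the posted core.c; the UUID value is the TS RPC UUID from the table above, and the allocation details are omitted):

#include <linux/arm_ffa.h>
#include <linux/module.h>

/* Single entry: every TS SP advertises the TS RPC protocol UUID. */
static const struct ffa_device_id tstee_device_ids[] = {
	{ UUID_INIT(0xbdcd76d7, 0x825e, 0x4751,
		    0x96, 0x3b, 0x86, 0xd4, 0xf8, 0x49, 0x43, 0xac) },
	{}
};

static int tstee_probe(struct ffa_device *ffa_dev)
{
	/*
	 * Called once per matching endpoint on the FF-A bus, each time
	 * with a different ffa_dev->vm_id. The real driver allocates a
	 * struct tstee and registers a TEE device here (omitted).
	 */
	dev_info(&ffa_dev->dev, "TS SP at FF-A endpoint 0x%x\n",
		 ffa_dev->vm_id);
	return 0;
}

static struct ffa_driver tstee_driver = {
	.name = "arm_tstee",
	.probe = tstee_probe,
	.id_table = tstee_device_ids,
};
module_ffa_driver(tstee_driver);

MODULE_LICENSE("GPL");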
I think a similar scenario would be if we had e.g. Hafnium as S-EL2 SPMC and two OP-TEE instances as S-EL1 SPs running under Hafnium. In this case the FF-A devices representing the OP-TEE instances would have the same FF-A UUID but different endpoint IDs. The OP-TEE driver in Linux would register two TEE devices for the two OP-TEE SPs.
Do you have any working example for this?
The end-to-end integration for FVP I mentioned earlier does build, boot and test multiple endpoints.
Therefore we always return a single endpoint ID in this field.
The reason behind the usage of this field is that somehow user space has to be able to discover which TEE device represents which FF-A endpoint. This is required when a client wants to invoke an SP with a specific endpoint ID, e.g. in a system where the SPs' endpoint IDs are static and known by the client.
I understand this is not a conventional usage of this field, I couldn't find a better way to "fit" this information into the ABI. My understanding is this solution shouldn't cause any issues for other TEEs, since this implementation specific field should only be parsed by a client if it has already matched on the TEE implementation ID. Please correct me if this assumption is wrong, or if you have any suggestions on other ways to expose the endpoint ID to user space.
When you say "each /dev/tee<n> device represents exactly one FF-A endpoint" then "impl_id" should be sufficient to represent endpoint ID too. But I am still confused regarding how you are going to support multiple endpoints?
Hopefully my answer above covers this question too.
Looking at this driver TEE_IMPL_ID_TSTEE represents an underlying FF-A based TS_RPC_UUID device.
Yes, that is correct. The FF-A UUID here acts as a protocol identifier, multiple FF-A endpoints can implement the same protocol, therefore can have the same UUID. The mechanism to discover what services a particular TS SP provides is not defined by FF-A, in this case it's done by the TS RPC protocol which is one layer above FF-A.
Regards, Balint
Hi Balint,
Thanks for the detailed insights.
On Tue, 31 Oct 2023 at 00:05, Balint Dobszay balint.dobszay@arm.com wrote:
On 26 Oct 2023, at 8:52, Sumit Garg wrote:
On Fri, 20 Oct 2023 at 19:22, Balint Dobszay balint.dobszay@arm.com wrote:
On 13 Oct 2023, at 14:47, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
+static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers)
+{
+	struct tstee *tstee = tee_get_drvdata(teedev);
+	struct tee_ioctl_version_data v = {
+		.impl_id = TEE_IMPL_ID_TSTEE,
+		.impl_caps = tstee->ffa_dev->vm_id,
So while exploring the user-space interface, I observed an anomaly here. The ".impl_caps" refers to "Implementation specific capabilities" meant to support backwards compatibility of a particular TEE implementation. But here I observe you are using it instead for endpoint_id. Can you provide the reasoning behind it? Also, do you plan to support multiple endpoints via this driver?
The mapping between Trusted Services SPs and TEE devices is 1:1, i.e. each /dev/tee<n> device represents exactly one FF-A endpoint. To answer your second question, each instance of the driver represents a single endpoint, multiple endpoints are supported by having multiple instances of the driver.
I don't follow you here. How are multiple instances of the "arm_tstee" driver going to work? Also, I can only see a single FF-A based device (TS_RPC_UUID) being probed here.
My terminology regarding "multiple instances of the driver" was incorrect, apologies. What I meant was that tstee_probe() gets called multiple times, thus allocating multiple instances of "struct tstee".
All of the Trusted Services SPs use the same FF-A UUID (TS_RPC_UUID). These will show up on the FF-A bus in Linux as multiple devices, having the same FF-A UUID, but different endpoint IDs, e.g.
Name          FF-A EP ID  FF-A UUID
------------  ----------  ------------------------------------
OP-TEE        0x8001      486178e0-e7f8-11e3-bc5e-0002a5d5c51b
TS ITS SP     0x8002      bdcd76d7-825e-4751-963b-86d4f84943ac
TS Crypto SP  0x8003      bdcd76d7-825e-4751-963b-86d4f84943ac
TS Attest SP  0x8004      bdcd76d7-825e-4751-963b-86d4f84943ac
Each of the Trusted Services FF-A devices will match the single UUID present in tstee_device_ids[], so tstee_probe() will be called multiple times, each time with a different FF-A device as argument. In the probe function a new TEE device is allocated, along with a new "struct tstee" which represents the connection between a particular ffa_device and tee_device.
I can see now how it works but allocation of a tee_device for every ffa_device seems like overkill here. I suppose standalone secure partitions are somewhat analogous to trusted applications. I don't think we really need separate TEE devices in order to communicate with different secure partitions.
Based on whatever I have understood as of now regarding this interface, it should work with single TEE device as follows:
- During tstee_probe(), create a list of FF-A devices (representing SPs) and create a single TEE device representing TSTEE RPC protocol.
- During open session IOCTL, pass endpoint ID as an argument and check if corresponding FF-A device is present in the list. If present then you can return session ID derived from the endpoint ID to the user-space client.
- For all further invoke commands IOCTL, you can retrieve endpoint ID from session ID and then talk to corresponding FF-A device.
IMO, this way it would be more scalable and efficient usage of a TEE device.
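For what it's worth, a very rough sketch of how the open-session step of this proposal could look. This is hypothetical code, not part of the posted driver: tstee_find_ffa_dev() and the placement of the endpoint ID in the first value parameter are assumptions.

#include <linux/arm_ffa.h>
#include <linux/tee_drv.h>

struct ffa_device *tstee_find_ffa_dev(u32 ep_id);	/* hypothetical lookup in probe-time list */

static int tstee_open_session(struct tee_context *ctx,
			      struct tee_ioctl_open_session_arg *arg,
			      struct tee_param *param)
{
	u32 ep_id = (u32)param[0].u.value.a;	/* assumed: EP ID as first value param */

	/* num_params/attr validation omitted for brevity */
	if (!tstee_find_ffa_dev(ep_id))
		return -EINVAL;

	arg->session = ep_id;	/* session ID derived from the endpoint ID */
	arg->ret = 0;
	return 0;
}

The memory-sharing limitation discussed later in the thread is what ultimately makes this shape problematic.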
On a side note, I would suggest you document Trusted Services TEE for the next patch revision.
I think a similar scenario would be if we had e.g. Hafnium as S-EL2 SPMC and two OP-TEE instances as S-EL1 SPs running under Hafnium. In this case the FF-A devices representing the OP-TEE instances would have the same FF-A UUID but different endpoint IDs. The OP-TEE driver in Linux would register two TEE devices for the two OP-TEE SPs.
Do you have any particular use-case where two or more OP-TEE instances will be required by a single VM (Linux)? Those different TEE devices are intended to represent different TEE implementations (following unique communication protocol stack).
-Sumit
Do you have any working example for this?
The end-to-end integration for FVP I mentioned earlier does build, boot and test multiple endpoints.
Therefore we always return a single endpoint ID in this field.
The reason behind the usage of this field is that somehow user space has to be able to discover which TEE device represents which FF-A endpoint. This is required when a client wants to invoke an SP with a specific endpoint ID, e.g. in a system where the SPs' endpoint IDs are static and known by the client.
I understand this is not a conventional usage of this field, I couldn't find a better way to "fit" this information into the ABI. My understanding is this solution shouldn't cause any issues for other TEEs, since this implementation specific field should only be parsed by a client if it has already matched on the TEE implementation ID. Please correct me if this assumption is wrong, or if you have any suggestions on other ways to expose the endpoint ID to user space.
When you say "each /dev/tee<n> device represents exactly one FF-A endpoint" then "impl_id" should be sufficient to represent endpoint ID too. But I am still confused regarding how you are going to support multiple endpoints?
Hopefully my answer above covers this question too.
Looking at this driver TEE_IMPL_ID_TSTEE represents an underlying FF-A based TS_RPC_UUID device.
Yes, that is correct. The FF-A UUID here acts as a protocol identifier, multiple FF-A endpoints can implement the same protocol, therefore can have the same UUID. The mechanism to discover what services a particular TS SP provides is not defined by FF-A, in this case it's done by the TS RPC protocol which is one layer above FF-A.
Regards, Balint
Hi Sumit,
Apologies for the delay.
On 31 Oct 2023, at 12:39, Sumit Garg wrote:
On Tue, 31 Oct 2023 at 00:05, Balint Dobszay balint.dobszay@arm.com wrote:
On 26 Oct 2023, at 8:52, Sumit Garg wrote:
On Fri, 20 Oct 2023 at 19:22, Balint Dobszay balint.dobszay@arm.com wrote:
On 13 Oct 2023, at 14:47, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
+static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers)
+{
+	struct tstee *tstee = tee_get_drvdata(teedev);
+	struct tee_ioctl_version_data v = {
+		.impl_id = TEE_IMPL_ID_TSTEE,
+		.impl_caps = tstee->ffa_dev->vm_id,
So while exploring the user-space interface, I observed an anomaly here. The ".impl_caps" refers to "Implementation specific capabilities" meant to support backwards compatibility of a particular TEE implementation. But here I observe you are using it instead for endpoint_id. Can you provide the reasoning behind it? Also, do you plan to support multiple endpoints via this driver?
The mapping between Trusted Services SPs and TEE devices is 1:1, i.e. each /dev/tee<n> device represents exactly one FF-A endpoint. To answer your second question, each instance of the driver represents a single endpoint, multiple endpoints are supported by having multiple instances of the driver.
I don't follow you here. How are multiple instances of the "arm_tstee" driver going to work? Also, I can only see a single FF-A based device (TS_RPC_UUID) being probed here.
My terminology regarding "multiple instances of the driver" was incorrect, apologies. What I meant was that tstee_probe() gets called multiple times, thus allocating multiple instances of "struct tstee".
All of the Trusted Services SPs use the same FF-A UUID (TS_RPC_UUID). These will show up on the FF-A bus in Linux as multiple devices, having the same FF-A UUID, but different endpoint IDs, e.g.
Name          FF-A EP ID  FF-A UUID
------------  ----------  ------------------------------------
OP-TEE        0x8001      486178e0-e7f8-11e3-bc5e-0002a5d5c51b
TS ITS SP     0x8002      bdcd76d7-825e-4751-963b-86d4f84943ac
TS Crypto SP  0x8003      bdcd76d7-825e-4751-963b-86d4f84943ac
TS Attest SP  0x8004      bdcd76d7-825e-4751-963b-86d4f84943ac
Each of the Trusted Services FF-A devices will match the single UUID present in tstee_device_ids[], so tstee_probe() will be called multiple times, each time with a different FF-A device as argument. In the probe function a new TEE device is allocated, along with a new "struct tstee" which represents the connection between a particular ffa_device and tee_device.
I can see now how it works but allocation of a tee_device for every ffa_device seems an overkill here. I suppose standalone secure partitions are somewhat analogous to trusted applications. I don't think we really need separate TEE devices in order to communicate with different secure partitions.
Makes sense, I was thinking about this too. In an earlier version of this driver I implemented a similar approach to what you've suggested, code available at [1]. However, there were some places where the single TEE device solution had shortcomings, which is why I ended up with the current version. I think from a layering point of view an SP is more similar to a TEE. In Trusted Services an SP can host one or multiple services, each one is identified by a service UUID and defines service specific operations and parameters. IMO these services are analogous to TAs, and the collection of services accessible through a common RPC protocol is analogous to a TEE.
Based on whatever I have understood as of now regarding this interface, it should work with single TEE device as follows:
- During tstee_probe(), create a list of FF-A devices (representing
SPs) and create a single TEE device representing TSTEE RPC protocol.
This is fine, basically this is what I've implemented in [1].
- During open session IOCTL, pass endpoint ID as an argument and check
if corresponding FF-A device is present in the list. If present then you can return session ID derived from the endpoint ID to the user-space client.
There is an issue here: unless static endpoint IDs are used, user space has to be able to discover the endpoint IDs assigned to this driver first (i.e. currently using tee_ioctl_version_data.impl_caps that we've been discussing in the other thread). If there is only a single TEE device for all SPs, the impl_caps solution cannot be used. In [1] the workaround I implemented for this was to have a special case in the invoke_func() op that doesn't actually do a call to SWd, but just gathers the list of available endpoint IDs and returns them to user space. I think this is probably not in line with the intended usage of this ioctl, but couldn't find a better way.
Also, the current solution maps well to the TEE subsystem concepts of a context and a session. The context represents a connection to an SP; one or more services can be opened in the SP, represented by sessions in that context. In the future we plan to introduce some form of login/authentication mechanism for services in TS SPs where the related fields in the open_session ioctl's argument would be useful. However, if a session would represent a whole SP, a new solution would be needed - specific to this driver - for representing the services inside the SP and passing the login/authentication arguments in the ioctl.
- For all further invoke commands IOCTL, you can retrieve endpoint ID
from session ID and then talk to corresponding FF-A device.
There is an issue with the memory sharing operations. The SHM alloc ioctl doesn't have a session ID field in its input argument struct, i.e. the memory share operation is "context specific", not "session specific". If a single TEE device is representing all TS SPs, when invoking an SHM alloc ioctl on that device there is no way to convey information about the destination endpoint ID to the driver.
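For reference, the TEE_IOC_SHM_ALLOC argument as defined in the existing UAPI (include/uapi/linux/tee.h at the time of writing) - note that it carries no session or endpoint information, only the tee_context implied by the file descriptor:

struct tee_ioctl_shm_alloc_data {
	__u64 size;	/* [in/out] requested/allocated size */
	__u32 flags;	/* [in/out] flags to/from allocation */
	__s32 id;	/* [out] identifier of the shared memory */
};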
In [1] the workaround I implemented for this was that only a single session can be opened in a context, i.e.:
- Open the single TEE device, available endpoint IDs not known yet.
- Use the "special case" in invoke_func() to get the list of available endpoint IDs.
- Open a session in the existing context, pass the selected endpoint ID as an argument. This operation binds the context to this session, when doing a memory share in this context the destination endpoint ID will be the endpoint ID of the single session existing in this context.
- Opening a second session in the same context is not allowed. In order to communicate with another endpoint the TEE device has to be opened again, thus creating a new context.
This works, but regarding the single session per context limitation, I think it's more of a workaround than a proper solution.
IMO, this way it would be more scalable and efficient usage of a TEE device.
Yes, on one hand I see your concern and agree with your suggestion. On the other hand because of the issues I've described above the single TEE device solution IMHO is less ergonomic, i.e. requires more workarounds and special cases than my current proposal. Also, based on my experience so far the expected maximum number of SPs in a system is not huge (usually less than 10, my guess is a few 10s at the very maximum in the future).
On a side note, I would suggest you document Trusted Services TEE for the next patch revision.
Sure, will do.
I think a similar scenario would be if we had e.g. Hafnium as S-EL2 SPMC and two OP-TEE instances as S-EL1 SPs running under Hafnium. In this case the FF-A devices representing the OP-TEE instances would have the same FF-A UUID but different endpoint IDs. The OP-TEE driver in Linux would register two TEE devices for the two OP-TEE SPs.
Do you have any particular use-case where two or more OP-TEE instances will be required by a single VM (Linux)? Those different TEE devices are intended to represent different TEE implementations (following unique communication protocol stack).
No, this was just a theoretical example. I think it is feasible from a technical point of view, but probably not a useful setup in real life.
Regards, Balint
[1] https://gitlab.arm.com/linux-arm/linux-trusted-services/-/blob/tee-v1.1.2/co...
On Tue, 14 Nov 2023 at 20:02, Balint Dobszay balint.dobszay@arm.com wrote:
Hi Sumit,
Apologies for the delay.
On 31 Oct 2023, at 12:39, Sumit Garg wrote:
On Tue, 31 Oct 2023 at 00:05, Balint Dobszay balint.dobszay@arm.com wrote:
On 26 Oct 2023, at 8:52, Sumit Garg wrote:
On Fri, 20 Oct 2023 at 19:22, Balint Dobszay balint.dobszay@arm.com wrote:
On 13 Oct 2023, at 14:47, Sumit Garg wrote:
On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
> +static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers)
> +{
> +	struct tstee *tstee = tee_get_drvdata(teedev);
> +	struct tee_ioctl_version_data v = {
> +		.impl_id = TEE_IMPL_ID_TSTEE,
> +		.impl_caps = tstee->ffa_dev->vm_id,
So while exploring the user-space interface, I observed an anomaly here. The ".impl_caps" refers to "Implementation specific capabilities" meant to support backwards compatibility of a particular TEE implementation. But here I observe you are using it instead for endpoint_id. Can you provide the reasoning behind it? Also, do you plan to support multiple endpoints via this driver?
The mapping between Trusted Services SPs and TEE devices is 1:1, i.e. each /dev/tee<n> device represents exactly one FF-A endpoint. To answer your second question, each instance of the driver represents a single endpoint, multiple endpoints are supported by having multiple instances of the driver.
I don't follow you here. How are multiple instances of the "arm_tstee" driver going to work? Also, I can only see a single FF-A based device (TS_RPC_UUID) being probed here.
My terminology regarding "multiple instances of the driver" was incorrect, apologies. What I meant was that tstee_probe() gets called multiple times, thus allocating multiple instances of "struct tstee".
All of the Trusted Services SPs use the same FF-A UUID (TS_RPC_UUID). These will show up on the FF-A bus in Linux as multiple devices, having the same FF-A UUID, but different endpoint IDs, e.g.
Name          FF-A EP ID  FF-A UUID
------------  ----------  ------------------------------------
OP-TEE        0x8001      486178e0-e7f8-11e3-bc5e-0002a5d5c51b
TS ITS SP     0x8002      bdcd76d7-825e-4751-963b-86d4f84943ac
TS Crypto SP  0x8003      bdcd76d7-825e-4751-963b-86d4f84943ac
TS Attest SP  0x8004      bdcd76d7-825e-4751-963b-86d4f84943ac
Each of the Trusted Services FF-A devices will match the single UUID present in tstee_device_ids[], so tstee_probe() will be called multiple times, each time with a different FF-A device as argument. In the probe function a new TEE device is allocated, along with a new "struct tstee" which represents the connection between a particular ffa_device and tee_device.
I can see now how it works but allocation of a tee_device for every ffa_device seems an overkill here. I suppose standalone secure partitions are somewhat analogous to trusted applications. I don't think we really need separate TEE devices in order to communicate with different secure partitions.
Makes sense, I was thinking about this too. In an earlier version of this driver I implemented a similar approach to what you've suggested, code available at [1]. However, there were some places where the single TEE device solution had shortcomings, that's why I ended up with the current version. I think from a layering point of view an SP is more similar to a TEE. In Trusted Services an SP can host one or multiple services, each one is identified by a service UUID and defines service specific operations and parameters. IMO these services are analogous to TAs and the collection of services accessible by a common RPC protocol is analogous to a TEE.
Based on whatever I have understood as of now regarding this interface, it should work with single TEE device as follows:
- During tstee_probe(), create a list of FF-A devices (representing
SPs) and create a single TEE device representing TSTEE RPC protocol.
This is fine, basically this is what I've implemented in [1].
- During open session IOCTL, pass endpoint ID as an argument and check
if corresponding FF-A device is present in the list. If present then you can return session ID derived from the endpoint ID to the user-space client.
There is an issue here: unless static endpoint IDs are used, user space has to be able to discover the endpoint IDs assigned to this driver first (i.e. currently using tee_ioctl_version_data.impl_caps that we've been discussing in the other thread). If there is only a single TEE device for all SPs, the impl_caps solution cannot be used. In [1] the workaround I implemented for this was to have a special case in the invoke_func() op that doesn't actually do a call to SWd, but just gathers the list of available endpoint IDs and returns them to user space. I think this is probably not in line with the intended usage of this ioctl, but couldn't find a better way.
Don't you think we need to provide unique static endpoint IDs for services supporting the RPC protocol? Are these currently dynamically allocated? How does user-space know if the endpoint ID belongs to attestation service or crypto service?
Also, the current solution maps well to the TEE subsystem concepts of a context and a session. The context represents a connection to an SP, one or more services can be opened in the SP represented by sessions in that context. In the future we plan to introduce some form of login/ authentication mechanism for services in TS SPs where the related fields in open_session ioctl's argument would be useful. However, if a session would represent a whole SP, a new solution would be needed - specific to this driver - for representing the services inside the SP and passing the login/authentication arguments in the ioctl.
- For all further invoke commands IOCTL, you can retrieve endpoint ID
from session ID and then talk to corresponding FF-A device.
There is an issue with the memory sharing operations. The SHM alloc ioctl doesn't have a session ID field in its input argument struct, i.e. the memory share operation is "context specific", not "session specific". If a single TEE device is representing all TS SPs, when invoking an SHM alloc ioctl on that device there is no way to convey information about the destination endpoint ID to the driver.
Okay I see that as a major limiting factor to the approach I proposed. The generic TEE driver design is to share memory at once with underlying TEE implementation (TSTEE in this case). It can then be further reused to communicate with multiple TAs.
However, in trusted services the user-space has to separately share memory with each trusted service using endpoint ID. That's the major reason I see that we have to create a separate TEE device for each trusted service. I would like to see that reasoning in the commit message description as well as the documentation for TSTEE.
-Sumit
In [1] the workaround I implemented for this was that only a single session can be opened in a context, i.e.:
- Open the single TEE device, available endpoint IDs not known yet.
- Use the "special case" in invoke_func() to get the list of available endpoint IDs.
- Open a session in the existing context, pass the selected endpoint ID as an argument. This operation binds the context to this session, when doing a memory share in this context the destination endpoint ID will be the endpoint ID of the single session existing in this context.
- Opening a second session in the same context is not allowed. In order to communicate with another endpoint the TEE device has to be opened again, thus creating a new context.
This is working, but I think it's rather a workaround than a proper solution, regarding the single session per context limitation.
IMO, this way it would be more scalable and efficient usage of a TEE device.
Yes, on one hand I see your concern and agree with your suggestion. On the other hand because of the issues I've described above the single TEE device solution IMHO is less ergonomic, i.e. requires more workarounds and special cases than my current proposal. Also, based on my experience so far the expected maximum number of SPs in a system is not huge (usually less than 10, my guess is a few 10s at the very maximum in the future).
On a side note, I would suggest you document Trusted Services TEE for the next patch revision.
Sure, will do.
I think a similar scenario would be if we had e.g. Hafnium as S-EL2 SPMC and two OP-TEE instances as S-EL1 SPs running under Hafnium. In this case the FF-A devices representing the OP-TEE instances would have the same FF-A UUID but different endpoint IDs. The OP-TEE driver in Linux would register two TEE devices for the two OP-TEE SPs.
Do you have any particular use-case where two or more OP-TEE instances will be required by a single VM (Linux)? Those different TEE devices are intended to represent different TEE implementations (following unique communication protocol stack).
No, this was just a theoretical example. I think it is feasible from a technical point of view, but probably not a useful setup in real life.
Regards, Balint
[1] https://gitlab.arm.com/linux-arm/linux-trusted-services/-/blob/tee-v1.1.2/co...
On 20 Nov 2023, at 9:01, Sumit Garg wrote:
On Tue, 14 Nov 2023 at 20:02, Balint Dobszay balint.dobszay@arm.com wrote:
On 31 Oct 2023, at 12:39, Sumit Garg wrote:
On Tue, 31 Oct 2023 at 00:05, Balint Dobszay balint.dobszay@arm.com wrote:
On 26 Oct 2023, at 8:52, Sumit Garg wrote:
On Fri, 20 Oct 2023 at 19:22, Balint Dobszay balint.dobszay@arm.com wrote:
On 13 Oct 2023, at 14:47, Sumit Garg wrote: > On Wed, 27 Sept 2023 at 20:56, Balint Dobszay balint.dobszay@arm.com wrote:
[snip]
>> +static void tstee_get_version(struct tee_device *teedev, struct tee_ioctl_version_data *vers)
>> +{
>> +	struct tstee *tstee = tee_get_drvdata(teedev);
>> +	struct tee_ioctl_version_data v = {
>> +		.impl_id = TEE_IMPL_ID_TSTEE,
>> +		.impl_caps = tstee->ffa_dev->vm_id,
>
> So while exploring the user-space interface, I observed an anomaly here. The ".impl_caps" refers to "Implementation specific capabilities" meant to support backwards compatibility of a particular TEE implementation. But here I observe you are using it instead for endpoint_id. Can you provide the reasoning behind it? Also, do you plan to support multiple endpoints via this driver?
The mapping between Trusted Services SPs and TEE devices is 1:1, i.e. each /dev/tee<n> device represents exactly one FF-A endpoint. To answer your second question, each instance of the driver represents a single endpoint, multiple endpoints are supported by having multiple instances of the driver.
I don't follow you here. How are multiple instances of the "arm_tstee" driver going to work? Also, I can only see a single FF-A based device (TS_RPC_UUID) being probed here.
My terminology regarding "multiple instances of the driver" was incorrect, apologies. What I meant was that tstee_probe() gets called multiple times, thus allocating multiple instances of "struct tstee".
All of the Trusted Services SPs use the same FF-A UUID (TS_RPC_UUID). These will show up on the FF-A bus in Linux as multiple devices, having the same FF-A UUID, but different endpoint IDs, e.g.
Name          FF-A EP ID  FF-A UUID
------------  ----------  ------------------------------------
OP-TEE        0x8001      486178e0-e7f8-11e3-bc5e-0002a5d5c51b
TS ITS SP     0x8002      bdcd76d7-825e-4751-963b-86d4f84943ac
TS Crypto SP  0x8003      bdcd76d7-825e-4751-963b-86d4f84943ac
TS Attest SP  0x8004      bdcd76d7-825e-4751-963b-86d4f84943ac
Each of the Trusted Services FF-A devices will match the single UUID present in tstee_device_ids[], so tstee_probe() will be called multiple times, each time with a different FF-A device as argument. In the probe function a new TEE device is allocated, along with a new "struct tstee" which represents the connection between a particular ffa_device and tee_device.
I can see now how it works but allocation of a tee_device for every ffa_device seems an overkill here. I suppose standalone secure partitions are somewhat analogous to trusted applications. I don't think we really need separate TEE devices in order to communicate with different secure partitions.
Makes sense, I was thinking about this too. In an earlier version of this driver I implemented a similar approach to what you've suggested, code available at [1]. However, there were some places where the single TEE device solution had shortcomings, that's why I ended up with the current version. I think from a layering point of view an SP is more similar to a TEE. In Trusted Services an SP can host one or multiple services, each one is identified by a service UUID and defines service specific operations and parameters. IMO these services are analogous to TAs and the collection of services accessible by a common RPC protocol is analogous to a TEE.
Based on whatever I have understood as of now regarding this interface, it should work with single TEE device as follows:
- During tstee_probe(), create a list of FF-A devices (representing
SPs) and create a single TEE device representing TSTEE RPC protocol.
This is fine, basically this is what I've implemented in [1].
- During open session IOCTL, pass endpoint ID as an argument and check
if corresponding FF-A device is present in the list. If present then you can return session ID derived from the endpoint ID to the user-space client.
There is an issue here: unless static endpoint IDs are used, user space has to be able to discover the endpoint IDs assigned to this driver first (i.e. currently using tee_ioctl_version_data.impl_caps that we've been discussing in the other thread). If there is only a single TEE device for all SPs, the impl_caps solution cannot be used. In [1] the workaround I implemented for this was to have a special case in the invoke_func() op that doesn't actually do a call to SWd, but just gathers the list of available endpoint IDs and returns them to user space. I think this is probably not in line with the intended usage of this ioctl, but couldn't find a better way.
Don't you think we need to provide unique static endpoint IDs for services supporting the RPC protocol? Are these currently dynamically allocated?
A unique endpoint ID is assigned to each SP at boot time by the SPMC. If an SP's manifest specifies a static endpoint ID, that one will be used, otherwise the SPMC will dynamically allocate one.
How does user-space know if the endpoint ID belongs to attestation service or crypto service?
A single SP can provide multiple services, i.e. it's a 1:n mapping between endpoint IDs and services. The service types (e.g. PSA Crypto, ITS, etc.) are identified by a service UUID (IMO this is somewhat analogous to a TA's UUID). To discover whether a service is implemented by an endpoint, the TS_RPC_OP_SERVICE_INFO request is used in open_session(). This RPC will either return success along with the service's short ID (interface ID) if the service is present, or an error if the queried service UUID is not implemented by the endpoint.
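A hedged sketch of that discovery step, reusing the arg-list helpers and the struct tstee from the quoted patch. The actual register layout (opcode position, UUID packing, status and interface-ID positions) is defined by the TS RPC protocol in tstee_private.h; the positions used below are assumptions only.

static int tstee_service_info(struct tstee *tstee, const uuid_t *service_uuid,
			      u32 *iface_id)
{
	struct ffa_send_direct_data data = {};
	u32 args[5] = {};
	int rc;

	args[0] = TS_RPC_OP_SERVICE_INFO;		/* assumed opcode position */
	memcpy(&args[1], service_uuid, UUID_SIZE);	/* assumed UUID packing */

	arg_list_to_ffa_data(args, &data);
	rc = tstee->ffa_dev->ops->msg_ops->sync_send_receive(tstee->ffa_dev,
							     &data);
	if (rc)
		return rc;

	arg_list_from_ffa_data(&data, args);
	if (args[1])			/* assumed: non-zero status = not hosted */
		return -ENOENT;

	*iface_id = args[2];		/* assumed: interface ID position */
	return 0;
}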
If user space has no prior knowledge of which services are implemented by which endpoint, it finds a specific service UUID by calling the open session ioctl on each TEE device whose implementation ID is TEE_IMPL_ID_TSTEE [1].
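An illustrative user-space sketch of that probing step for one candidate device fd (error handling trimmed; the login value and the zero-parameter session are assumptions about how a TS client would use the generic TEE ABI, not code from the Trusted Services client library):

#include <string.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/tee.h>

/* Returns 0 and the session ID via *session if the service UUID is hosted
 * by the SP behind this TSTEE device, or -1 otherwise. */
static int try_open_service(int fd, const unsigned char uuid[16], __u32 *session)
{
	struct tee_ioctl_open_session_arg arg;
	struct tee_ioctl_buf_data buf;

	memset(&arg, 0, sizeof(arg));
	memcpy(arg.uuid, uuid, sizeof(arg.uuid));
	arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC;
	arg.num_params = 0;

	buf.buf_ptr = (__u64)(uintptr_t)&arg;
	buf.buf_len = sizeof(arg);

	if (ioctl(fd, TEE_IOC_OPEN_SESSION, &buf) || arg.ret)
		return -1;	/* this endpoint does not host the service */

	*session = arg.session;
	return 0;
}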
Also, the current solution maps well to the TEE subsystem concepts of a context and a session. The context represents a connection to an SP, one or more services can be opened in the SP represented by sessions in that context. In the future we plan to introduce some form of login/ authentication mechanism for services in TS SPs where the related fields in open_session ioctl's argument would be useful. However, if a session would represent a whole SP, a new solution would be needed - specific to this driver - for representing the services inside the SP and passing the login/authentication arguments in the ioctl.
- For all further invoke commands IOCTL, you can retrieve endpoint ID
from session ID and then talk to corresponding FF-A device.
There is an issue with the memory sharing operations. The SHM alloc ioctl doesn't have a session ID field in its input argument struct, i.e. the memory share operation is "context specific", not "session specific". If a single TEE device is representing all TS SPs, when invoking an SHM alloc ioctl on that device there is no way to convey information about the destination endpoint ID to the driver.
Okay I see that as a major limiting factor to the approach I proposed. The generic TEE driver design is to share memory at once with underlying TEE implementation (TSTEE in this case). It can then be further reused to communicate with multiple TAs.
However, in trusted services the user-space has to separately share memory with each trusted service using endpoint ID. That's the major reason I see that we have to create a separate TEE device for each trusted service. I would like to see that reasoning in the commit message description as well as the documentation for TSTEE.
Sure, I will document this topic.
Regards, Balint
[1] https://git.trustedfirmware.org/TS/trusted-services.git/tree/components/rpc/...