Hi,
This patch set allocates the restricted DMA-bufs from a DMA-heap instantiated from the TEE subsystem.
The TEE subsystem handles the DMA-buf allocations since it is the TEE (OP-TEE, AMD-TEE, TS-TEE, or perhaps a future QTEE) which sets up the restrictions for the memory used for the DMA-bufs.
The DMA-heap uses a restricted memory pool provided by the backend TEE driver, allowing it to choose how to allocate the restricted physical memory.
The allocated DMA-bufs must be imported with the new TEE_IOC_SHM_REGISTER_FD ioctl before they can be passed as arguments when requesting services from the secure world.
Three use-cases (Secure Video Playback, Trusted UI, and Secure Video Recording) have been identified so far to serve as examples of what can be expected. The use-cases have predefined DMA-heap names, "restricted,secure-video", "restricted,trusted-ui", and "restricted,secure-video-record". The backend driver registers restricted memory pools for the use-cases it supports.
Each use-case has its own restricted memory pool since different use-cases require isolation from different parts of the system. A restricted memory pool can be based on a static carveout instantiated while probing the TEE backend driver, or dynamically allocated from CMA and made restricted as needed by the TEE.
This can be tested on a RockPi 4B+ with the following steps:

repo init -u https://github.com/jenswi-linaro/manifest.git -m rockpi4.xml \
        -b prototype/sdp-v6
repo sync -j8
cd build
make toolchains -j$(nproc)
make all -j$(nproc)
# Copy ../out/rockpi4.img to an SD card and boot the RockPi from that
# Connect a monitor to the RockPi
# login and at the prompt:
gst-launch-1.0 videotestsrc ! \
aesenc key=1f9423681beb9a79215820f6bda73d0f \
iv=e9aa8e834d8d70b7e0d254ff670dd718 serialize-iv=true ! \
aesdec key=1f9423681beb9a79215820f6bda73d0f ! \
kmssink
The aesdec module has been hacked to use an OP-TEE TA to decrypt the stream into restricted DMA-bufs which are consumed by the kmssink.
The primitive QEMU tests from the previous patch set can be run on the RockPi in the same way with:

xtest --sdp-basic
The primitive tests are run on QEMU with the following steps:

repo init -u https://github.com/jenswi-linaro/manifest.git -m qemu_v8.xml \
        -b prototype/sdp-v6
repo sync -j8
cd build
make toolchains -j$(nproc)
make SPMC_AT_EL=1 all -j$(nproc)
make SPMC_AT_EL=1 run-only
# login and at the prompt:
xtest --sdp-basic
The SPMC_AT_EL=1 parameter configures the build with FF-A and an SPMC at S-EL1 inside OP-TEE. The parameter can be changed to SPMC_AT_EL=n to test without FF-A, using the original SMC ABI instead. Please remember to do

rm -rf ../trusted-firmware-a/build/qemu

for TF-A to be rebuilt properly using the new configuration.
https://optee.readthedocs.io/en/latest/building/prerequisites.html lists the dependencies needed to build the above.
The tests are pretty basic, mostly checking that a Trusted Application in the secure world can access and manipulate the memory. There are also some negative tests for out-of-bounds buffers, etc.
Thanks,
Jens
Changes since V5:
* Removing "tee: add restricted memory allocation" and "tee: add TEE_IOC_RSTMEM_FD_INFO"
* Adding "tee: implement restricted DMA-heap", "tee: new ioctl to a register tee_shm from a dmabuf file descriptor", "tee: add tee_shm_alloc_cma_phys_mem()", "optee: pass parent device to tee_device_alloc()", and "tee: tee_device_alloc(): copy dma_mask from parent device"
* The two TEE driver OPs "rstmem_alloc()" and "rstmem_free()" are replaced with a struct tee_rstmem_pool abstraction
* Replaced the TEE_IOC_RSTMEM_ALLOC user space API with the DMA-heap API
Changes since V4:
* Adding the patch "tee: add TEE_IOC_RSTMEM_FD_INFO" needed by the GStreamer demo
* Removing the dummy CPU access and mmap functions from the dma_buf_ops
* Fixing a compile error in "optee: FF-A: dynamic restricted memory allocation" reported by kernel test robot <lkp@intel.com>
Changes since V3:
* Make the use_case and flags fields in struct tee_shm u32's instead of u16's
* Add more description for TEE_IOC_RSTMEM_ALLOC in the header file
* Import namespace DMA_BUF in module tee, reported by <lkp@intel.com>
* Added a note in the commit message for "optee: account for direction while converting parameters" on why it's needed
* Factor out dynamic restricted memory allocation from "optee: support restricted memory allocation" into two new commits "optee: FF-A: dynamic restricted memory allocation" and "optee: smc abi: dynamic restricted memory allocation"
* Guard CMA usage with #ifdef CONFIG_CMA, effectively disabling dynamic restricted memory allocation if CMA isn't configured
Changes since the V2 RFC:
* Based on v6.12
* Replaced the flags for SVP and Trusted UI memory with a u32 field with a unique id for each use case
* Added dynamic allocation of restricted memory pools
* Added OP-TEE ABI both with and without FF-A for dynamic restricted memory
* Added support for FF-A with FFA_LEND
Changes since the V1 RFC:
* Based on v6.11
* Complete rewrite, replacing the restricted heap with TEE_IOC_RSTMEM_ALLOC
Changes since Olivier's post [2]:
* Based on Yong Wu's post [1] where much of the dma-buf handling is done in the generic restricted heap
* Simplifications and cleanup
* New commit message for "dma-buf: heaps: add Linaro restricted dmabuf heap support"
* Replaced the word "secure" with "restricted" where applicable
Etienne Carriere (1):
  tee: new ioctl to a register tee_shm from a dmabuf file descriptor
Jens Wiklander (9):
  tee: tee_device_alloc(): copy dma_mask from parent device
  optee: pass parent device to tee_device_alloc()
  optee: account for direction while converting parameters
  optee: sync secure world ABI headers
  tee: implement restricted DMA-heap
  tee: add tee_shm_alloc_cma_phys_mem()
  optee: support restricted memory allocation
  optee: FF-A: dynamic restricted memory allocation
  optee: smc abi: dynamic restricted memory allocation
 drivers/tee/Makefile              |   1 +
 drivers/tee/optee/Makefile        |   1 +
 drivers/tee/optee/call.c          |  10 +-
 drivers/tee/optee/core.c          |   1 +
 drivers/tee/optee/ffa_abi.c       | 194 +++++++++++-
 drivers/tee/optee/optee_ffa.h     |  27 +-
 drivers/tee/optee/optee_msg.h     |  65 ++++-
 drivers/tee/optee/optee_private.h |  55 +++-
 drivers/tee/optee/optee_smc.h     |  71 ++++-
 drivers/tee/optee/rpc.c           |  31 +-
 drivers/tee/optee/rstmem.c        | 329 +++++++++++++++++++++
 drivers/tee/optee/smc_abi.c       | 190 ++++++++++--
 drivers/tee/tee_core.c            | 147 +++++++---
 drivers/tee/tee_heap.c            | 470 ++++++++++++++++++++++++++++++
 drivers/tee/tee_private.h         |   7 +
 drivers/tee/tee_shm.c             | 199 ++++++++++++-
 include/linux/tee_core.h          |  67 +++++
 include/linux/tee_drv.h           |  10 +
 include/uapi/linux/tee.h          |  29 ++
 19 files changed, 1781 insertions(+), 123 deletions(-)
 create mode 100644 drivers/tee/optee/rstmem.c
 create mode 100644 drivers/tee/tee_heap.c
base-commit: 7eb172143d5508b4da468ed59ee857c6e5e01da6
If a parent device is supplied to tee_device_alloc(), copy the dma_mask field into the new device. This avoids future warnings when mapping a DMA-buf for the device.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 drivers/tee/tee_core.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
index d113679b1e2d..685afcaa3ea1 100644
--- a/drivers/tee/tee_core.c
+++ b/drivers/tee/tee_core.c
@@ -922,6 +922,8 @@ struct tee_device *tee_device_alloc(const struct tee_desc *teedesc,
 	teedev->dev.class = &tee_class;
 	teedev->dev.release = tee_release_device;
 	teedev->dev.parent = dev;
+	if (dev)
+		teedev->dev.dma_mask = dev->dma_mask;
teedev->dev.devt = MKDEV(MAJOR(tee_devt), teedev->id);
On Wed, Mar 05, 2025 at 02:04:07PM +0100, Jens Wiklander wrote:
If a parent device is supplied to tee_device_alloc(), copy the dma_mask field into the new device. This avoids future warnings when mapping a DMA-buf for the device.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

 drivers/tee/tee_core.c | 2 ++
 1 file changed, 2 insertions(+)

Reviewed-by: Sumit Garg <sumit.garg@kernel.org>
-Sumit
diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
index d113679b1e2d..685afcaa3ea1 100644
--- a/drivers/tee/tee_core.c
+++ b/drivers/tee/tee_core.c
@@ -922,6 +922,8 @@ struct tee_device *tee_device_alloc(const struct tee_desc *teedesc,
 	teedev->dev.class = &tee_class;
 	teedev->dev.release = tee_release_device;
 	teedev->dev.parent = dev;
+	if (dev)
+		teedev->dev.dma_mask = dev->dma_mask;

 	teedev->dev.devt = MKDEV(MAJOR(tee_devt), teedev->id);
--
2.43.0
During probing of the OP-TEE driver, pass the parent device to tee_device_alloc() so the dma_mask of the new devices can be updated accordingly.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 drivers/tee/optee/ffa_abi.c | 8 ++++----
 drivers/tee/optee/smc_abi.c | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c
index f3af5666bb11..4ca1d5161b82 100644
--- a/drivers/tee/optee/ffa_abi.c
+++ b/drivers/tee/optee/ffa_abi.c
@@ -914,16 +914,16 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
 	    (sec_caps & OPTEE_FFA_SEC_CAP_RPMB_PROBE))
 		optee->in_kernel_rpmb_routing = true;

-	teedev = tee_device_alloc(&optee_ffa_clnt_desc, NULL, optee->pool,
-				  optee);
+	teedev = tee_device_alloc(&optee_ffa_clnt_desc, &ffa_dev->dev,
+				  optee->pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_free_pool;
 	}
 	optee->teedev = teedev;

-	teedev = tee_device_alloc(&optee_ffa_supp_desc, NULL, optee->pool,
-				  optee);
+	teedev = tee_device_alloc(&optee_ffa_supp_desc, &ffa_dev->dev,
+				  optee->pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_unreg_teedev;
diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c
index f0c3ac1103bb..165fadd9abc9 100644
--- a/drivers/tee/optee/smc_abi.c
+++ b/drivers/tee/optee/smc_abi.c
@@ -1691,14 +1691,14 @@ static int optee_probe(struct platform_device *pdev)
 	    (sec_caps & OPTEE_SMC_SEC_CAP_RPMB_PROBE))
 		optee->in_kernel_rpmb_routing = true;

-	teedev = tee_device_alloc(&optee_clnt_desc, NULL, pool, optee);
+	teedev = tee_device_alloc(&optee_clnt_desc, &pdev->dev, pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_free_optee;
 	}
 	optee->teedev = teedev;

-	teedev = tee_device_alloc(&optee_supp_desc, NULL, pool, optee);
+	teedev = tee_device_alloc(&optee_supp_desc, &pdev->dev, pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_unreg_teedev;
On Wed, Mar 05, 2025 at 02:04:08PM +0100, Jens Wiklander wrote:
During probing of the OP-TEE driver, pass the parent device to tee_device_alloc() so the dma_mask of the new devices can be updated accordingly.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

 drivers/tee/optee/ffa_abi.c | 8 ++++----
 drivers/tee/optee/smc_abi.c | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

Reviewed-by: Sumit Garg <sumit.garg@kernel.org>
-Sumit
diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c
index f3af5666bb11..4ca1d5161b82 100644
--- a/drivers/tee/optee/ffa_abi.c
+++ b/drivers/tee/optee/ffa_abi.c
@@ -914,16 +914,16 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
 	    (sec_caps & OPTEE_FFA_SEC_CAP_RPMB_PROBE))
 		optee->in_kernel_rpmb_routing = true;

-	teedev = tee_device_alloc(&optee_ffa_clnt_desc, NULL, optee->pool,
-				  optee);
+	teedev = tee_device_alloc(&optee_ffa_clnt_desc, &ffa_dev->dev,
+				  optee->pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_free_pool;
 	}
 	optee->teedev = teedev;

-	teedev = tee_device_alloc(&optee_ffa_supp_desc, NULL, optee->pool,
-				  optee);
+	teedev = tee_device_alloc(&optee_ffa_supp_desc, &ffa_dev->dev,
+				  optee->pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_unreg_teedev;
diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c
index f0c3ac1103bb..165fadd9abc9 100644
--- a/drivers/tee/optee/smc_abi.c
+++ b/drivers/tee/optee/smc_abi.c
@@ -1691,14 +1691,14 @@ static int optee_probe(struct platform_device *pdev)
 	    (sec_caps & OPTEE_SMC_SEC_CAP_RPMB_PROBE))
 		optee->in_kernel_rpmb_routing = true;

-	teedev = tee_device_alloc(&optee_clnt_desc, NULL, pool, optee);
+	teedev = tee_device_alloc(&optee_clnt_desc, &pdev->dev, pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_free_optee;
 	}
 	optee->teedev = teedev;

-	teedev = tee_device_alloc(&optee_supp_desc, NULL, pool, optee);
+	teedev = tee_device_alloc(&optee_supp_desc, &pdev->dev, pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_unreg_teedev;
--
2.43.0
The OP-TEE backend driver has two internal function pointers to convert between the subsystem type struct tee_param and the OP-TEE type struct optee_msg_param.
The conversion is done from one of the types to the other, which is then involved in some operation, and finally converted back to the original type. When converting to prepare the parameters for the operation, all fields must be taken into account, but when converting back, it is enough to update only the out-values and out-sizes. So an update_out parameter is added to the conversion functions to tell whether all fields or only some must be copied.
This is needed by a later patch, where converting back in the from_msg_param() callback could otherwise get confusing: an allocated restricted SHM may use the sec_world_id of the restricted memory pool it came from, and that doesn't translate back well.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 drivers/tee/optee/call.c          | 10 ++--
 drivers/tee/optee/ffa_abi.c       | 43 +++++++++++++----
 drivers/tee/optee/optee_private.h | 42 +++++++++++------
 drivers/tee/optee/rpc.c           | 31 +++++++++----
 drivers/tee/optee/smc_abi.c       | 76 +++++++++++++++++++++++--------
 5 files changed, 144 insertions(+), 58 deletions(-)
diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c index 16eb953e14bb..f1533b894726 100644 --- a/drivers/tee/optee/call.c +++ b/drivers/tee/optee/call.c @@ -400,7 +400,8 @@ int optee_open_session(struct tee_context *ctx, export_uuid(msg_arg->params[1].u.octets, &client_uuid);
rc = optee->ops->to_msg_param(optee, msg_arg->params + 2, - arg->num_params, param); + arg->num_params, param, + false /*!update_out*/); if (rc) goto out;
@@ -427,7 +428,8 @@ int optee_open_session(struct tee_context *ctx, }
if (optee->ops->from_msg_param(optee, param, arg->num_params, - msg_arg->params + 2)) { + msg_arg->params + 2, + true /*update_out*/)) { arg->ret = TEEC_ERROR_COMMUNICATION; arg->ret_origin = TEEC_ORIGIN_COMMS; /* Close session again to avoid leakage */ @@ -541,7 +543,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, msg_arg->cancel_id = arg->cancel_id;
rc = optee->ops->to_msg_param(optee, msg_arg->params, arg->num_params, - param); + param, false /*!update_out*/); if (rc) goto out;
@@ -551,7 +553,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, }
if (optee->ops->from_msg_param(optee, param, arg->num_params, - msg_arg->params)) { + msg_arg->params, true /*update_out*/)) { msg_arg->ret = TEEC_ERROR_COMMUNICATION; msg_arg->ret_origin = TEEC_ORIGIN_COMMS; } diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index 4ca1d5161b82..e4b08cd195f3 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -122,15 +122,21 @@ static int optee_shm_rem_ffa_handle(struct optee *optee, u64 global_id) */
static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, - u32 attr, const struct optee_msg_param *mp) + u32 attr, const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm = NULL; u64 offs_high = 0; u64 offs_low = 0;
+ if (update_out) { + if (attr == OPTEE_MSG_ATTR_TYPE_FMEM_INPUT) + return; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_FMEM_INPUT; - p->u.memref.size = mp->u.fmem.size;
if (mp->u.fmem.global_id != OPTEE_MSG_FMEM_INVALID_GLOBAL_ID) shm = optee_shm_from_ffa_handle(optee, mp->u.fmem.global_id); @@ -141,6 +147,8 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, offs_high = mp->u.fmem.offs_high; } p->u.memref.shm_offs = offs_low | offs_high << 32; +out: + p->u.memref.size = mp->u.fmem.size; }
/** @@ -150,12 +158,14 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, * @params: subsystem internal parameter representation * @num_params: number of elements in the parameter arrays * @msg_params: OPTEE_MSG parameters + * @update_out: update parameter for output only * * Returns 0 on success or <0 on failure */ static int optee_ffa_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params) + const struct optee_msg_param *msg_params, + bool update_out) { size_t n;
@@ -166,18 +176,20 @@ static int optee_ffa_from_msg_param(struct optee *optee,
switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE: + if (update_out) + break; p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT: - optee_from_msg_param_value(p, attr, mp); + optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_FMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_INOUT: - from_msg_param_ffa_mem(optee, p, attr, mp); + from_msg_param_ffa_mem(optee, p, attr, mp, update_out); break; default: return -EINVAL; @@ -188,10 +200,16 @@ static int optee_ffa_from_msg_param(struct optee *optee, }
static int to_msg_param_ffa_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { struct tee_shm *shm = p->u.memref.shm;
+ if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_FMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
@@ -211,6 +229,7 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp, memset(&mp->u, 0, sizeof(mp->u)); mp->u.fmem.global_id = OPTEE_MSG_FMEM_INVALID_GLOBAL_ID; } +out: mp->u.fmem.size = p->u.memref.size;
return 0; @@ -222,13 +241,15 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp, * @optee: main service struct * @msg_params: OPTEE_MSG parameters * @num_params: number of elements in the parameter arrays - * @params: subsystem itnernal parameter representation + * @params: subsystem internal parameter representation + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_ffa_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, size_t num_params, - const struct tee_param *params) + const struct tee_param *params, + bool update_out) { size_t n;
@@ -238,18 +259,20 @@ static int optee_ffa_to_msg_param(struct optee *optee,
switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: + if (update_out) + break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: - optee_to_msg_param_value(mp, p); + optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: - if (to_msg_param_ffa_mem(mp, p)) + if (to_msg_param_ffa_mem(mp, p, update_out)) return -EINVAL; break; default: diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index dc0f355ef72a..20eda508dbac 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -185,10 +185,12 @@ struct optee_ops { bool system_thread); int (*to_msg_param)(struct optee *optee, struct optee_msg_param *msg_params, - size_t num_params, const struct tee_param *params); + size_t num_params, const struct tee_param *params, + bool update_out); int (*from_msg_param)(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params); + const struct optee_msg_param *msg_params, + bool update_out); };
/** @@ -316,23 +318,35 @@ void optee_release(struct tee_context *ctx); void optee_release_supp(struct tee_context *ctx);
static inline void optee_from_msg_param_value(struct tee_param *p, u32 attr, - const struct optee_msg_param *mp) + const struct optee_msg_param *mp, + bool update_out) { - p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT + - attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; - p->u.value.a = mp->u.value.a; - p->u.value.b = mp->u.value.b; - p->u.value.c = mp->u.value.c; + if (!update_out) + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT + + attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + + if (attr == OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT || + attr == OPTEE_MSG_ATTR_TYPE_VALUE_INOUT || !update_out) { + p->u.value.a = mp->u.value.a; + p->u.value.b = mp->u.value.b; + p->u.value.c = mp->u.value.c; + } }
static inline void optee_to_msg_param_value(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, + bool update_out) { - mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr - - TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; - mp->u.value.a = p->u.value.a; - mp->u.value.b = p->u.value.b; - mp->u.value.c = p->u.value.c; + if (!update_out) + mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr - + TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; + + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT || + p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT || !update_out) { + mp->u.value.a = p->u.value.a; + mp->u.value.b = p->u.value.b; + mp->u.value.c = p->u.value.c; + } }
void optee_cq_init(struct optee_call_queue *cq, int thread_count); diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c index ebbbd42b0e3e..580e6b9b0606 100644 --- a/drivers/tee/optee/rpc.c +++ b/drivers/tee/optee/rpc.c @@ -63,7 +63,7 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, }
if (optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params)) + arg->params, false /*!update_out*/)) goto bad;
for (i = 0; i < arg->num_params; i++) { @@ -107,7 +107,8 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, } else { params[3].u.value.a = msg.len; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) + arg->num_params, params, + true /*update_out*/)) arg->ret = TEEC_ERROR_BAD_PARAMETERS; else arg->ret = TEEC_SUCCESS; @@ -188,6 +189,7 @@ static void handle_rpc_func_cmd_wait(struct optee_msg_arg *arg) static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, struct optee_msg_arg *arg) { + bool update_out = false; struct tee_param *params;
arg->ret_origin = TEEC_ORIGIN_COMMS; @@ -200,15 +202,21 @@ static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, }
if (optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params)) { + arg->params, update_out)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; goto out; }
arg->ret = optee_supp_thrd_req(ctx, arg->cmd, arg->num_params, params);
+ /* + * Special treatment for OPTEE_RPC_CMD_SHM_ALLOC since input is a + * value type, but the output is a memref type. + */ + if (arg->cmd != OPTEE_RPC_CMD_SHM_ALLOC) + update_out = true; if (optee->ops->to_msg_param(optee, arg->params, arg->num_params, - params)) + params, update_out)) arg->ret = TEEC_ERROR_BAD_PARAMETERS; out: kfree(params); @@ -270,7 +278,7 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx,
if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; @@ -280,7 +288,8 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx, params[0].u.value.b = 0; params[0].u.value.c = 0; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; } @@ -324,7 +333,7 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx,
if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT || params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; @@ -358,7 +367,8 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx, params[0].u.value.b = rdev->descr.capacity; params[0].u.value.c = rdev->descr.reliable_wr_count; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; } @@ -384,7 +394,7 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx,
if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT || params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; @@ -401,7 +411,8 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx, goto out; } if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; goto out; } diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index 165fadd9abc9..cfdae266548b 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -81,20 +81,26 @@ static int optee_cpuhp_disable_pcpu_irq(unsigned int cpu) */
static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr, - const struct optee_msg_param *mp) + const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm; phys_addr_t pa; int rc;
+ if (update_out) { + if (attr == OPTEE_MSG_ATTR_TYPE_TMEM_INPUT) + return 0; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_TMEM_INPUT; - p->u.memref.size = mp->u.tmem.size; shm = (struct tee_shm *)(unsigned long)mp->u.tmem.shm_ref; if (!shm) { p->u.memref.shm_offs = 0; p->u.memref.shm = NULL; - return 0; + goto out; }
rc = tee_shm_get_pa(shm, 0, &pa); @@ -103,18 +109,25 @@ static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr,
p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa; p->u.memref.shm = shm; - +out: + p->u.memref.size = mp->u.tmem.size; return 0; }
static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, - const struct optee_msg_param *mp) + const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm;
+ if (update_out) { + if (attr == OPTEE_MSG_ATTR_TYPE_RMEM_INPUT) + return; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_RMEM_INPUT; - p->u.memref.size = mp->u.rmem.size; shm = (struct tee_shm *)(unsigned long)mp->u.rmem.shm_ref;
if (shm) { @@ -124,6 +137,8 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, p->u.memref.shm_offs = 0; p->u.memref.shm = NULL; } +out: + p->u.memref.size = mp->u.rmem.size; }
/** @@ -133,11 +148,13 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, * @params: subsystem internal parameter representation * @num_params: number of elements in the parameter arrays * @msg_params: OPTEE_MSG parameters + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params) + const struct optee_msg_param *msg_params, + bool update_out) { int rc; size_t n; @@ -149,25 +166,27 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params,
switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE: + if (update_out) + break; p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT: - optee_from_msg_param_value(p, attr, mp); + optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT: - rc = from_msg_param_tmp_mem(p, attr, mp); + rc = from_msg_param_tmp_mem(p, attr, mp, update_out); if (rc) return rc; break; case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT: - from_msg_param_reg_mem(p, attr, mp); + from_msg_param_reg_mem(p, attr, mp, update_out); break;
default: @@ -178,20 +197,25 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, }
static int to_msg_param_tmp_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { int rc; phys_addr_t pa;
+ if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
mp->u.tmem.shm_ref = (unsigned long)p->u.memref.shm; - mp->u.tmem.size = p->u.memref.size;
if (!p->u.memref.shm) { mp->u.tmem.buf_ptr = 0; - return 0; + goto out; }
rc = tee_shm_get_pa(p->u.memref.shm, p->u.memref.shm_offs, &pa); @@ -201,19 +225,27 @@ static int to_msg_param_tmp_mem(struct optee_msg_param *mp, mp->u.tmem.buf_ptr = pa; mp->attr |= OPTEE_MSG_ATTR_CACHE_PREDEFINED << OPTEE_MSG_ATTR_CACHE_SHIFT; - +out: + mp->u.tmem.size = p->u.memref.size; return 0; }
static int to_msg_param_reg_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { + if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
mp->u.rmem.shm_ref = (unsigned long)p->u.memref.shm; - mp->u.rmem.size = p->u.memref.size; mp->u.rmem.offs = p->u.memref.shm_offs; +out: + mp->u.rmem.size = p->u.memref.size; return 0; }
@@ -223,11 +255,13 @@ static int to_msg_param_reg_mem(struct optee_msg_param *mp, * @msg_params: OPTEE_MSG parameters * @num_params: number of elements in the parameter arrays * @params: subsystem itnernal parameter representation + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, - size_t num_params, const struct tee_param *params) + size_t num_params, const struct tee_param *params, + bool update_out) { int rc; size_t n; @@ -238,21 +272,23 @@ static int optee_to_msg_param(struct optee *optee,
switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: + if (update_out) + break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: - optee_to_msg_param_value(mp, p); + optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: if (tee_shm_is_dynamic(p->u.memref.shm)) - rc = to_msg_param_reg_mem(mp, p); + rc = to_msg_param_reg_mem(mp, p, update_out); else - rc = to_msg_param_tmp_mem(mp, p); + rc = to_msg_param_tmp_mem(mp, p, update_out); if (rc) return rc; break;
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:09PM +0100, Jens Wiklander wrote:
The OP-TEE backend driver has two internal function pointers to convert between the subsystem type struct tee_param and the OP-TEE type struct optee_msg_param.
The conversion is done from one of the types to the other, which is then involved in some operation, and finally converted back to the original type. When converting to prepare the parameters for the operation, all fields must be taken into account, but when converting back, it is enough to update only the out-values and out-sizes. So an update_out parameter is added to the conversion functions to tell whether all fields or only some must be copied.
This is needed in a later patch where it might get confusing when converting back in from_msg_param() callback since an allocated restricted SHM can be using the sec_world_id of the used restricted memory pool and that doesn't translate back well.
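For illustration, the in/out asymmetry described above can be sketched with simplified stand-in types. The struct and enum names below are illustrative only, not the kernel's (the real code also offsets attribute values between the two namespaces, which is elided here):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for struct tee_param / struct optee_msg_param */
struct param     { uint32_t attr; uint64_t a, b, c; };
struct msg_param { uint32_t attr; uint64_t a, b, c; };

enum { ATTR_VALUE_INPUT = 1, ATTR_VALUE_OUTPUT = 2, ATTR_VALUE_INOUT = 3 };

/*
 * Convert back from the message representation. With update_out set,
 * the attribute is left untouched and values are copied only for
 * out-capable parameters; a full conversion copies everything.
 */
static void from_msg_value(struct param *p, const struct msg_param *mp,
			   bool update_out)
{
	if (!update_out)
		p->attr = mp->attr;	/* full conversion sets the type too */
	if (!update_out || mp->attr == ATTR_VALUE_OUTPUT ||
	    mp->attr == ATTR_VALUE_INOUT) {
		p->a = mp->a;
		p->b = mp->b;
		p->c = mp->c;
	}
}
```

With update_out set, an input-only value is left alone on the way back; a full conversion overwrites it.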
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
drivers/tee/optee/call.c | 10 ++-- drivers/tee/optee/ffa_abi.c | 43 +++++++++++++---- drivers/tee/optee/optee_private.h | 42 +++++++++++------ drivers/tee/optee/rpc.c | 31 +++++++++---- drivers/tee/optee/smc_abi.c | 76 +++++++++++++++++++++++-------- 5 files changed, 144 insertions(+), 58 deletions(-)
diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c index 16eb953e14bb..f1533b894726 100644 --- a/drivers/tee/optee/call.c +++ b/drivers/tee/optee/call.c @@ -400,7 +400,8 @@ int optee_open_session(struct tee_context *ctx, export_uuid(msg_arg->params[1].u.octets, &client_uuid);
rc = optee->ops->to_msg_param(optee, msg_arg->params + 2,
arg->num_params, param);
arg->num_params, param,
false /*!update_out*/); if (rc) goto out;
@@ -427,7 +428,8 @@ int optee_open_session(struct tee_context *ctx, }
if (optee->ops->from_msg_param(optee, param, arg->num_params,
msg_arg->params + 2)) {
msg_arg->params + 2,
true /*update_out*/)) { arg->ret = TEEC_ERROR_COMMUNICATION; arg->ret_origin = TEEC_ORIGIN_COMMS; /* Close session again to avoid leakage */
@@ -541,7 +543,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, msg_arg->cancel_id = arg->cancel_id;
rc = optee->ops->to_msg_param(optee, msg_arg->params, arg->num_params,
param);
param, false /*!update_out*/); if (rc) goto out;
@@ -551,7 +553,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, }
if (optee->ops->from_msg_param(optee, param, arg->num_params,
msg_arg->params)) {
msg_arg->params, true /*update_out*/)) { msg_arg->ret = TEEC_ERROR_COMMUNICATION; msg_arg->ret_origin = TEEC_ORIGIN_COMMS; }
diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index 4ca1d5161b82..e4b08cd195f3 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -122,15 +122,21 @@ static int optee_shm_rem_ffa_handle(struct optee *optee, u64 global_id) */
static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p,
u32 attr, const struct optee_msg_param *mp)
u32 attr, const struct optee_msg_param *mp,
bool update_out)
{ struct tee_shm *shm = NULL; u64 offs_high = 0; u64 offs_low = 0;
if (update_out) {
if (attr == OPTEE_MSG_ATTR_TYPE_FMEM_INPUT)
return;
goto out;
}
p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_FMEM_INPUT;
p->u.memref.size = mp->u.fmem.size; if (mp->u.fmem.global_id != OPTEE_MSG_FMEM_INVALID_GLOBAL_ID) shm = optee_shm_from_ffa_handle(optee, mp->u.fmem.global_id);
@@ -141,6 +147,8 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, offs_high = mp->u.fmem.offs_high; } p->u.memref.shm_offs = offs_low | offs_high << 32; +out:
p->u.memref.size = mp->u.fmem.size;
}
/** @@ -150,12 +158,14 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p,
- @params: subsystem internal parameter representation
- @num_params: number of elements in the parameter arrays
- @msg_params: OPTEE_MSG parameters
- @update_out: update parameter for output only
- Returns 0 on success or <0 on failure
*/
static int optee_ffa_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params,
const struct optee_msg_param *msg_params)
const struct optee_msg_param *msg_params,
bool update_out)
{ size_t n;
@@ -166,18 +176,20 @@ static int optee_ffa_from_msg_param(struct optee *optee,
switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE:
if (update_out)
break; p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
optee_from_msg_param_value(p, attr, mp);
optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_FMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_INOUT:
from_msg_param_ffa_mem(optee, p, attr, mp);
from_msg_param_ffa_mem(optee, p, attr, mp, update_out); break; default: return -EINVAL;
@@ -188,10 +200,16 @@ static int optee_ffa_from_msg_param(struct optee *optee, }
static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
const struct tee_param *p)
const struct tee_param *p, bool update_out)
{ struct tee_shm *shm = p->u.memref.shm;
if (update_out) {
if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT)
return 0;
goto out;
}
mp->attr = OPTEE_MSG_ATTR_TYPE_FMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
@@ -211,6 +229,7 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp, memset(&mp->u, 0, sizeof(mp->u)); mp->u.fmem.global_id = OPTEE_MSG_FMEM_INVALID_GLOBAL_ID; } +out: mp->u.fmem.size = p->u.memref.size;
return 0;
@@ -222,13 +241,15 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
- @optee: main service struct
- @msg_params: OPTEE_MSG parameters
- @num_params: number of elements in the parameter arrays
- @params: subsystem itnernal parameter representation
- @params: subsystem internal parameter representation
- @update_out: update parameter for output only
- Returns 0 on success or <0 on failure
*/
static int optee_ffa_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, size_t num_params,
const struct tee_param *params)
const struct tee_param *params,
bool update_out)
{ size_t n;
@@ -238,18 +259,20 @@ static int optee_ffa_to_msg_param(struct optee *optee,
switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
if (update_out)
break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
optee_to_msg_param_value(mp, p);
optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
if (to_msg_param_ffa_mem(mp, p))
if (to_msg_param_ffa_mem(mp, p, update_out)) return -EINVAL; break; default:
Can we rather handle it as follows to improve code readability and maintenance long term? Ditto for all other places.
static int optee_ffa_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, size_t num_params, const struct tee_param *params, bool update_out) { size_t n;
for (n = 0; n < num_params; n++) { const struct tee_param *p = params + n; struct optee_msg_param *mp = msg_params + n;
if (update_out && (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_NONE || p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT || p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT)) continue;
switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: optee_to_msg_param_value(mp, p); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: if (to_msg_param_ffa_mem(mp, p)) return -EINVAL; break; default: return -EINVAL; } }
return 0; }
-Sumit
diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index dc0f355ef72a..20eda508dbac 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -185,10 +185,12 @@ struct optee_ops { bool system_thread); int (*to_msg_param)(struct optee *optee, struct optee_msg_param *msg_params,
size_t num_params, const struct tee_param *params);
size_t num_params, const struct tee_param *params,
bool update_out); int (*from_msg_param)(struct optee *optee, struct tee_param *params, size_t num_params,
const struct optee_msg_param *msg_params);
const struct optee_msg_param *msg_params,
bool update_out);
};
/** @@ -316,23 +318,35 @@ void optee_release(struct tee_context *ctx); void optee_release_supp(struct tee_context *ctx);
static inline void optee_from_msg_param_value(struct tee_param *p, u32 attr,
const struct optee_msg_param *mp)
const struct optee_msg_param *mp,
bool update_out)
{
p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT +
attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
p->u.value.a = mp->u.value.a;
p->u.value.b = mp->u.value.b;
p->u.value.c = mp->u.value.c;
if (!update_out)
p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT +
attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
if (attr == OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT ||
attr == OPTEE_MSG_ATTR_TYPE_VALUE_INOUT || !update_out) {
p->u.value.a = mp->u.value.a;
p->u.value.b = mp->u.value.b;
p->u.value.c = mp->u.value.c;
}
}
static inline void optee_to_msg_param_value(struct optee_msg_param *mp,
const struct tee_param *p)
const struct tee_param *p,
bool update_out)
{
mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr -
TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
mp->u.value.a = p->u.value.a;
mp->u.value.b = p->u.value.b;
mp->u.value.c = p->u.value.c;
if (!update_out)
mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr -
TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT ||
p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT || !update_out) {
mp->u.value.a = p->u.value.a;
mp->u.value.b = p->u.value.b;
mp->u.value.c = p->u.value.c;
}
}
void optee_cq_init(struct optee_call_queue *cq, int thread_count); diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c index ebbbd42b0e3e..580e6b9b0606 100644 --- a/drivers/tee/optee/rpc.c +++ b/drivers/tee/optee/rpc.c @@ -63,7 +63,7 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, }
if (optee->ops->from_msg_param(optee, params, arg->num_params,
arg->params))
arg->params, false /*!update_out*/)) goto bad; for (i = 0; i < arg->num_params; i++) {
arg->num_params, params))
arg->num_params, params,
else arg->ret = TEEC_SUCCESS;true /*update_out*/)) arg->ret = TEEC_ERROR_BAD_PARAMETERS;
@@ -188,6 +189,7 @@ static void handle_rpc_func_cmd_wait(struct optee_msg_arg *arg) static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, struct optee_msg_arg *arg) {
- bool update_out = false; struct tee_param *params;
arg->ret_origin = TEEC_ORIGIN_COMMS; @@ -200,15 +202,21 @@ static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, } if (optee->ops->from_msg_param(optee, params, arg->num_params,
arg->params)) {
arg->ret = TEEC_ERROR_BAD_PARAMETERS; goto out; }arg->params, update_out)) {
arg->ret = optee_supp_thrd_req(ctx, arg->cmd, arg->num_params, params);
- /*
* Special treatment for OPTEE_RPC_CMD_SHM_ALLOC since input is a
* value type, but the output is a memref type.
*/
- if (arg->cmd != OPTEE_RPC_CMD_SHM_ALLOC)
if (optee->ops->to_msg_param(optee, arg->params, arg->num_params,update_out = true;
params))
arg->ret = TEEC_ERROR_BAD_PARAMETERS;params, update_out))
out: kfree(params); @@ -270,7 +278,7 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params,
arg->params) ||
params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return;arg->params, false /*!update_out*/) ||
@@ -280,7 +288,8 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx, params[0].u.value.b = 0; params[0].u.value.c = 0; if (optee->ops->to_msg_param(optee, arg->params,
arg->num_params, params)) {
arg->num_params, params,
arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; }true /*update_out*/)) {
@@ -324,7 +333,7 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params,
arg->params) ||
params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT || params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS;arg->params, false /*!update_out*/) ||
@@ -358,7 +367,8 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx, params[0].u.value.b = rdev->descr.capacity; params[0].u.value.c = rdev->descr.reliable_wr_count; if (optee->ops->to_msg_param(optee, arg->params,
arg->num_params, params)) {
arg->num_params, params,
arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; }true /*update_out*/)) {
@@ -384,7 +394,7 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params,
arg->params) ||
params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT || params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS;arg->params, false /*!update_out*/) ||
@@ -401,7 +411,8 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx, goto out; } if (optee->ops->to_msg_param(optee, arg->params,
arg->num_params, params)) {
arg->num_params, params,
arg->ret = TEEC_ERROR_BAD_PARAMETERS; goto out; }true /*update_out*/)) {
diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index 165fadd9abc9..cfdae266548b 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -81,20 +81,26 @@ static int optee_cpuhp_disable_pcpu_irq(unsigned int cpu) */
static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr,
const struct optee_msg_param *mp)
const struct optee_msg_param *mp,
bool update_out)
{ struct tee_shm *shm; phys_addr_t pa; int rc;
if (update_out) {
if (attr == OPTEE_MSG_ATTR_TYPE_TMEM_INPUT)
return 0;
goto out;
}
p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_TMEM_INPUT;
p->u.memref.size = mp->u.tmem.size; shm = (struct tee_shm *)(unsigned long)mp->u.tmem.shm_ref; if (!shm) { p->u.memref.shm_offs = 0; p->u.memref.shm = NULL;
return 0;
goto out;
}
rc = tee_shm_get_pa(shm, 0, &pa); @@ -103,18 +109,25 @@ static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr, p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa; p->u.memref.shm = shm;
+out:
p->u.memref.size = mp->u.tmem.size; return 0;
}
static void from_msg_param_reg_mem(struct tee_param *p, u32 attr,
const struct optee_msg_param *mp)
const struct optee_msg_param *mp,
bool update_out)
{ struct tee_shm *shm;
if (update_out) {
if (attr == OPTEE_MSG_ATTR_TYPE_RMEM_INPUT)
return;
goto out;
}
p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_RMEM_INPUT;
p->u.memref.size = mp->u.rmem.size; shm = (struct tee_shm *)(unsigned long)mp->u.rmem.shm_ref;
if (shm) { @@ -124,6 +137,8 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, p->u.memref.shm_offs = 0; p->u.memref.shm = NULL; } +out:
p->u.memref.size = mp->u.rmem.size;
}
/** @@ -133,11 +148,13 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr,
- @params: subsystem internal parameter representation
- @num_params: number of elements in the parameter arrays
- @msg_params: OPTEE_MSG parameters
- @update_out: update parameter for output only
- Returns 0 on success or <0 on failure
*/
static int optee_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params,
const struct optee_msg_param *msg_params)
const struct optee_msg_param *msg_params,
bool update_out)
{ int rc; size_t n; @@ -149,25 +166,27 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE:
if (update_out)
break; p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
optee_from_msg_param_value(p, attr, mp);
optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT:
rc = from_msg_param_tmp_mem(p, attr, mp);
rc = from_msg_param_tmp_mem(p, attr, mp, update_out); if (rc) return rc; break; case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT:
from_msg_param_reg_mem(p, attr, mp);
from_msg_param_reg_mem(p, attr, mp, update_out); break;
default: @@ -178,20 +197,25 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, }
static int to_msg_param_tmp_mem(struct optee_msg_param *mp,
const struct tee_param *p)
const struct tee_param *p, bool update_out)
{ int rc; phys_addr_t pa;
if (update_out) {
if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT)
return 0;
goto out;
}
mp->attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
mp->u.tmem.shm_ref = (unsigned long)p->u.memref.shm;
mp->u.tmem.size = p->u.memref.size;
if (!p->u.memref.shm) { mp->u.tmem.buf_ptr = 0;
return 0;
goto out;
}
rc = tee_shm_get_pa(p->u.memref.shm, p->u.memref.shm_offs, &pa); @@ -201,19 +225,27 @@ static int to_msg_param_tmp_mem(struct optee_msg_param *mp, mp->u.tmem.buf_ptr = pa; mp->attr |= OPTEE_MSG_ATTR_CACHE_PREDEFINED << OPTEE_MSG_ATTR_CACHE_SHIFT;
+out:
mp->u.tmem.size = p->u.memref.size; return 0;
}
static int to_msg_param_reg_mem(struct optee_msg_param *mp,
const struct tee_param *p)
const struct tee_param *p, bool update_out)
{
if (update_out) {
if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT)
return 0;
goto out;
}
mp->attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
mp->u.rmem.shm_ref = (unsigned long)p->u.memref.shm;
mp->u.rmem.size = p->u.memref.size; mp->u.rmem.offs = p->u.memref.shm_offs;
+out:
mp->u.rmem.size = p->u.memref.size; return 0;
}
@@ -223,11 +255,13 @@ static int to_msg_param_reg_mem(struct optee_msg_param *mp,
- @msg_params: OPTEE_MSG parameters
- @num_params: number of elements in the parameter arrays
- @params: subsystem itnernal parameter representation
- @update_out: update parameter for output only
- Returns 0 on success or <0 on failure
*/
static int optee_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params,
size_t num_params, const struct tee_param *params)
size_t num_params, const struct tee_param *params,
bool update_out)
{ int rc; size_t n; @@ -238,21 +272,23 @@ static int optee_to_msg_param(struct optee *optee, switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
if (update_out)
break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
optee_to_msg_param_value(mp, p);
optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: if (tee_shm_is_dynamic(p->u.memref.shm))
rc = to_msg_param_reg_mem(mp, p);
rc = to_msg_param_reg_mem(mp, p, update_out); else
rc = to_msg_param_tmp_mem(mp, p);
rc = to_msg_param_tmp_mem(mp, p, update_out); if (rc) return rc; break;
-- 2.43.0
Hi Sumit,
On Thu, Mar 13, 2025 at 11:41 AM Sumit Garg sumit.garg@kernel.org wrote:
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:09PM +0100, Jens Wiklander wrote:
The OP-TEE backend driver has two internal function pointers to convert between the subsystem type struct tee_param and the OP-TEE type struct optee_msg_param.
The conversion is done from one of the types to the other, which is then involved in some operation and finally converted back to the original type. When converting to prepare the parameters for the operation, all fields must be taken into account, but when converting back, it's enough to update only out-values and out-sizes. So an update_out parameter is added to the conversion functions to tell whether all fields or only some must be copied.
This is needed in a later patch where it might get confusing when converting back in from_msg_param() callback since an allocated restricted SHM can be using the sec_world_id of the used restricted memory pool and that doesn't translate back well.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
drivers/tee/optee/call.c | 10 ++-- drivers/tee/optee/ffa_abi.c | 43 +++++++++++++---- drivers/tee/optee/optee_private.h | 42 +++++++++++------ drivers/tee/optee/rpc.c | 31 +++++++++---- drivers/tee/optee/smc_abi.c | 76 +++++++++++++++++++++++-------- 5 files changed, 144 insertions(+), 58 deletions(-)
diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c index 16eb953e14bb..f1533b894726 100644 --- a/drivers/tee/optee/call.c +++ b/drivers/tee/optee/call.c @@ -400,7 +400,8 @@ int optee_open_session(struct tee_context *ctx, export_uuid(msg_arg->params[1].u.octets, &client_uuid);
rc = optee->ops->to_msg_param(optee, msg_arg->params + 2,
arg->num_params, param);
arg->num_params, param,
false /*!update_out*/); if (rc) goto out;
@@ -427,7 +428,8 @@ int optee_open_session(struct tee_context *ctx, }
if (optee->ops->from_msg_param(optee, param, arg->num_params,
msg_arg->params + 2)) {
msg_arg->params + 2,
true /*update_out*/)) { arg->ret = TEEC_ERROR_COMMUNICATION; arg->ret_origin = TEEC_ORIGIN_COMMS; /* Close session again to avoid leakage */
@@ -541,7 +543,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, msg_arg->cancel_id = arg->cancel_id;
rc = optee->ops->to_msg_param(optee, msg_arg->params, arg->num_params,
param);
param, false /*!update_out*/); if (rc) goto out;
@@ -551,7 +553,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, }
if (optee->ops->from_msg_param(optee, param, arg->num_params,
msg_arg->params)) {
msg_arg->params, true /*update_out*/)) { msg_arg->ret = TEEC_ERROR_COMMUNICATION; msg_arg->ret_origin = TEEC_ORIGIN_COMMS; }
diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index 4ca1d5161b82..e4b08cd195f3 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -122,15 +122,21 @@ static int optee_shm_rem_ffa_handle(struct optee *optee, u64 global_id) */
static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p,
u32 attr, const struct optee_msg_param *mp)
u32 attr, const struct optee_msg_param *mp,
bool update_out)
{ struct tee_shm *shm = NULL; u64 offs_high = 0; u64 offs_low = 0;
if (update_out) {
if (attr == OPTEE_MSG_ATTR_TYPE_FMEM_INPUT)
return;
goto out;
}
p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_FMEM_INPUT;
p->u.memref.size = mp->u.fmem.size; if (mp->u.fmem.global_id != OPTEE_MSG_FMEM_INVALID_GLOBAL_ID) shm = optee_shm_from_ffa_handle(optee, mp->u.fmem.global_id);
@@ -141,6 +147,8 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, offs_high = mp->u.fmem.offs_high; } p->u.memref.shm_offs = offs_low | offs_high << 32; +out:
p->u.memref.size = mp->u.fmem.size;
}
/** @@ -150,12 +158,14 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p,
- @params: subsystem internal parameter representation
- @num_params: number of elements in the parameter arrays
- @msg_params: OPTEE_MSG parameters
- @update_out: update parameter for output only
- Returns 0 on success or <0 on failure
*/
static int optee_ffa_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params,
const struct optee_msg_param *msg_params)
const struct optee_msg_param *msg_params,
bool update_out)
{ size_t n;
@@ -166,18 +176,20 @@ static int optee_ffa_from_msg_param(struct optee *optee,
switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE:
if (update_out)
break; p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
optee_from_msg_param_value(p, attr, mp);
optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_FMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_INOUT:
from_msg_param_ffa_mem(optee, p, attr, mp);
from_msg_param_ffa_mem(optee, p, attr, mp, update_out); break; default: return -EINVAL;
@@ -188,10 +200,16 @@ static int optee_ffa_from_msg_param(struct optee *optee, }
static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
const struct tee_param *p)
const struct tee_param *p, bool update_out)
{ struct tee_shm *shm = p->u.memref.shm;
if (update_out) {
if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT)
return 0;
goto out;
}
mp->attr = OPTEE_MSG_ATTR_TYPE_FMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
@@ -211,6 +229,7 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp, memset(&mp->u, 0, sizeof(mp->u)); mp->u.fmem.global_id = OPTEE_MSG_FMEM_INVALID_GLOBAL_ID; } +out: mp->u.fmem.size = p->u.memref.size;
return 0;
@@ -222,13 +241,15 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
- @optee: main service struct
- @msg_params: OPTEE_MSG parameters
- @num_params: number of elements in the parameter arrays
- @params: subsystem itnernal parameter representation
- @params: subsystem internal parameter representation
- @update_out: update parameter for output only
- Returns 0 on success or <0 on failure
*/
static int optee_ffa_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, size_t num_params,
const struct tee_param *params)
const struct tee_param *params,
bool update_out)
{ size_t n;
@@ -238,18 +259,20 @@ static int optee_ffa_to_msg_param(struct optee *optee,
switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
if (update_out)
break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
optee_to_msg_param_value(mp, p);
optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
if (to_msg_param_ffa_mem(mp, p))
if (to_msg_param_ffa_mem(mp, p, update_out)) return -EINVAL; break; default:
Can we rather handle it as follows to improve code readability and maintenance long term? Ditto for all other places.
static int optee_ffa_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, size_t num_params, const struct tee_param *params, bool update_out) { size_t n;
for (n = 0; n < num_params; n++) { const struct tee_param *p = params + n; struct optee_msg_param *mp = msg_params + n; if (update_out && (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_NONE || p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT || p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT)) continue;
You're missing updating the length field for memrefs.
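To illustrate the point: even when update_out is set and the attribute/shm setup is skipped, an out-capable memref must still get its size copied, since secure world may shorten the buffer length on return. A minimal sketch with simplified stand-in types (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the memref halves of the two parameter types */
struct memref     { uint32_t attr; uint64_t size; void *shm; };
struct msg_memref { uint32_t attr; uint64_t size; uint64_t shm_ref; };

enum { MEMREF_INPUT = 5, MEMREF_OUTPUT = 6, MEMREF_INOUT = 7 };

static int to_msg_memref(struct msg_memref *mp, const struct memref *p,
			 bool update_out)
{
	if (update_out) {
		if (p->attr == MEMREF_INPUT)
			return 0;	/* nothing to propagate back */
		goto out;		/* skip attr/shm setup, not the size */
	}
	mp->attr = p->attr;
	mp->shm_ref = (uintptr_t)p->shm;
out:
	mp->size = p->size;		/* the length field in question */
	return 0;
}
```

A hoisted `continue` that skips the whole switch for update_out would have to preserve this size update for output and inout memrefs.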
Cheers, Jens
switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: optee_to_msg_param_value(mp, p); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: if (to_msg_param_ffa_mem(mp, p)) return -EINVAL; break; default: return -EINVAL; } } return 0;
}
-Sumit
diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index dc0f355ef72a..20eda508dbac 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -185,10 +185,12 @@ struct optee_ops { bool system_thread); int (*to_msg_param)(struct optee *optee, struct optee_msg_param *msg_params,
size_t num_params, const struct tee_param *params);
size_t num_params, const struct tee_param *params,
bool update_out); int (*from_msg_param)(struct optee *optee, struct tee_param *params, size_t num_params,
const struct optee_msg_param *msg_params);
const struct optee_msg_param *msg_params,
bool update_out);
};
/** @@ -316,23 +318,35 @@ void optee_release(struct tee_context *ctx); void optee_release_supp(struct tee_context *ctx);
static inline void optee_from_msg_param_value(struct tee_param *p, u32 attr,
const struct optee_msg_param *mp)
const struct optee_msg_param *mp,
bool update_out)
{
p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT +
attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
p->u.value.a = mp->u.value.a;
p->u.value.b = mp->u.value.b;
p->u.value.c = mp->u.value.c;
if (!update_out)
p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT +
attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
if (attr == OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT ||
attr == OPTEE_MSG_ATTR_TYPE_VALUE_INOUT || !update_out) {
p->u.value.a = mp->u.value.a;
p->u.value.b = mp->u.value.b;
p->u.value.c = mp->u.value.c;
}
}
static inline void optee_to_msg_param_value(struct optee_msg_param *mp,
const struct tee_param *p)
const struct tee_param *p,
bool update_out)
{
mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr -
TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
mp->u.value.a = p->u.value.a;
mp->u.value.b = p->u.value.b;
mp->u.value.c = p->u.value.c;
if (!update_out)
mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr -
TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT ||
p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT || !update_out) {
mp->u.value.a = p->u.value.a;
mp->u.value.b = p->u.value.b;
mp->u.value.c = p->u.value.c;
}
}
void optee_cq_init(struct optee_call_queue *cq, int thread_count); diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c index ebbbd42b0e3e..580e6b9b0606 100644 --- a/drivers/tee/optee/rpc.c +++ b/drivers/tee/optee/rpc.c @@ -63,7 +63,7 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, }
if (optee->ops->from_msg_param(optee, params, arg->num_params,
arg->params))
arg->params, false /*!update_out*/)) goto bad; for (i = 0; i < arg->num_params; i++) {
@@ -107,7 +107,8 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, } else { params[3].u.value.a = msg.len; if (optee->ops->to_msg_param(optee, arg->params,
arg->num_params, params))
arg->num_params, params,
true /*update_out*/)) arg->ret = TEEC_ERROR_BAD_PARAMETERS; else arg->ret = TEEC_SUCCESS;
@@ -188,6 +189,7 @@ static void handle_rpc_func_cmd_wait(struct optee_msg_arg *arg) static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, struct optee_msg_arg *arg) {
bool update_out = false; struct tee_param *params; arg->ret_origin = TEEC_ORIGIN_COMMS;
@@ -200,15 +202,21 @@ static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, }
if (optee->ops->from_msg_param(optee, params, arg->num_params,
arg->params)) {
arg->params, update_out)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; goto out; } arg->ret = optee_supp_thrd_req(ctx, arg->cmd, arg->num_params, params);
/*
* Special treatment for OPTEE_RPC_CMD_SHM_ALLOC since input is a
* value type, but the output is a memref type.
*/
if (arg->cmd != OPTEE_RPC_CMD_SHM_ALLOC)
update_out = true; if (optee->ops->to_msg_param(optee, arg->params, arg->num_params,
params))
params, update_out)) arg->ret = TEEC_ERROR_BAD_PARAMETERS;
out: kfree(params); @@ -270,7 +278,7 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx,
if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params,
arg->params) ||
arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return;
@@ -280,7 +288,8 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx,
 	params[0].u.value.b = 0;
 	params[0].u.value.c = 0;
 	if (optee->ops->to_msg_param(optee, arg->params,
-				     arg->num_params, params)) {
+				     arg->num_params, params,
+				     true /*update_out*/)) {
 		arg->ret = TEEC_ERROR_BAD_PARAMETERS;
 		return;
 	}
@@ -324,7 +333,7 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx,

 	if (arg->num_params != ARRAY_SIZE(params) ||
 	    optee->ops->from_msg_param(optee, params, arg->num_params,
-				       arg->params) ||
+				       arg->params, false /*!update_out*/) ||
 	    params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT ||
 	    params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) {
 		arg->ret = TEEC_ERROR_BAD_PARAMETERS;
@@ -358,7 +367,8 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx,
 	params[0].u.value.b = rdev->descr.capacity;
 	params[0].u.value.c = rdev->descr.reliable_wr_count;
 	if (optee->ops->to_msg_param(optee, arg->params,
-				     arg->num_params, params)) {
+				     arg->num_params, params,
+				     true /*update_out*/)) {
 		arg->ret = TEEC_ERROR_BAD_PARAMETERS;
 		return;
 	}
@@ -384,7 +394,7 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx,

 	if (arg->num_params != ARRAY_SIZE(params) ||
 	    optee->ops->from_msg_param(optee, params, arg->num_params,
-				       arg->params) ||
+				       arg->params, false /*!update_out*/) ||
 	    params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT ||
 	    params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) {
 		arg->ret = TEEC_ERROR_BAD_PARAMETERS;
@@ -401,7 +411,8 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx,
 		goto out;
 	}
 	if (optee->ops->to_msg_param(optee, arg->params,
-				     arg->num_params, params)) {
+				     arg->num_params, params,
+				     true /*update_out*/)) {
 		arg->ret = TEEC_ERROR_BAD_PARAMETERS;
 		goto out;
 	}
diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c
index 165fadd9abc9..cfdae266548b 100644
--- a/drivers/tee/optee/smc_abi.c
+++ b/drivers/tee/optee/smc_abi.c
@@ -81,20 +81,26 @@ static int optee_cpuhp_disable_pcpu_irq(unsigned int cpu)
  */
 static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr,
-				  const struct optee_msg_param *mp)
+				  const struct optee_msg_param *mp,
+				  bool update_out)
 {
 	struct tee_shm *shm;
 	phys_addr_t pa;
 	int rc;

+	if (update_out) {
+		if (attr == OPTEE_MSG_ATTR_TYPE_TMEM_INPUT)
+			return 0;
+		goto out;
+	}
 	p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr -
 		  OPTEE_MSG_ATTR_TYPE_TMEM_INPUT;
-	p->u.memref.size = mp->u.tmem.size;
 	shm = (struct tee_shm *)(unsigned long)mp->u.tmem.shm_ref;
 	if (!shm) {
 		p->u.memref.shm_offs = 0;
 		p->u.memref.shm = NULL;
-		return 0;
+		goto out;
 	}

 	rc = tee_shm_get_pa(shm, 0, &pa);
@@ -103,18 +109,25 @@ static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr,
 	p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa;
 	p->u.memref.shm = shm;

+out:
+	p->u.memref.size = mp->u.tmem.size;
 	return 0;
 }
 static void from_msg_param_reg_mem(struct tee_param *p, u32 attr,
-				   const struct optee_msg_param *mp)
+				   const struct optee_msg_param *mp,
+				   bool update_out)
 {
 	struct tee_shm *shm;

+	if (update_out) {
+		if (attr == OPTEE_MSG_ATTR_TYPE_RMEM_INPUT)
+			return;
+		goto out;
+	}
 	p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr -
 		  OPTEE_MSG_ATTR_TYPE_RMEM_INPUT;
-	p->u.memref.size = mp->u.rmem.size;
 	shm = (struct tee_shm *)(unsigned long)mp->u.rmem.shm_ref;

 	if (shm) {
@@ -124,6 +137,8 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr,
 		p->u.memref.shm_offs = 0;
 		p->u.memref.shm = NULL;
 	}
+out:
+	p->u.memref.size = mp->u.rmem.size;
 }
 /**
@@ -133,11 +148,13 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr,
  * @params:	subsystem internal parameter representation
  * @num_params:	number of elements in the parameter arrays
  * @msg_params:	OPTEE_MSG parameters
+ * @update_out:	update parameter for output only
+ *
  * Returns 0 on success or <0 on failure
  */
 static int optee_from_msg_param(struct optee *optee, struct tee_param *params,
 				size_t num_params,
-				const struct optee_msg_param *msg_params)
+				const struct optee_msg_param *msg_params,
+				bool update_out)
 {
 	int rc;
 	size_t n;
@@ -149,25 +166,27 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params,
 		switch (attr) {
 		case OPTEE_MSG_ATTR_TYPE_NONE:
+			if (update_out)
+				break;
 			p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE;
 			memset(&p->u, 0, sizeof(p->u));
 			break;
 		case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT:
 		case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT:
 		case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
-			optee_from_msg_param_value(p, attr, mp);
+			optee_from_msg_param_value(p, attr, mp, update_out);
 			break;
 		case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT:
 		case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT:
 		case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT:
-			rc = from_msg_param_tmp_mem(p, attr, mp);
+			rc = from_msg_param_tmp_mem(p, attr, mp, update_out);
 			if (rc)
 				return rc;
 			break;
 		case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
 		case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
 		case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT:
-			from_msg_param_reg_mem(p, attr, mp);
+			from_msg_param_reg_mem(p, attr, mp, update_out);
 			break;
 		default:
@@ -178,20 +197,25 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params,
 }

 static int to_msg_param_tmp_mem(struct optee_msg_param *mp,
-				const struct tee_param *p)
+				const struct tee_param *p, bool update_out)
 {
 	int rc;
 	phys_addr_t pa;

+	if (update_out) {
+		if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT)
+			return 0;
+		goto out;
+	}
 	mp->attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + p->attr -
 		   TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
 	mp->u.tmem.shm_ref = (unsigned long)p->u.memref.shm;
-	mp->u.tmem.size = p->u.memref.size;
 	if (!p->u.memref.shm) {
 		mp->u.tmem.buf_ptr = 0;
-		return 0;
+		goto out;
 	}

 	rc = tee_shm_get_pa(p->u.memref.shm, p->u.memref.shm_offs, &pa);
@@ -201,19 +225,27 @@ static int to_msg_param_tmp_mem(struct optee_msg_param *mp,
 	mp->u.tmem.buf_ptr = pa;
 	mp->attr |= OPTEE_MSG_ATTR_CACHE_PREDEFINED <<
 		    OPTEE_MSG_ATTR_CACHE_SHIFT;

+out:
+	mp->u.tmem.size = p->u.memref.size;
 	return 0;
 }
 static int to_msg_param_reg_mem(struct optee_msg_param *mp,
-				const struct tee_param *p)
+				const struct tee_param *p, bool update_out)
 {
+	if (update_out) {
+		if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT)
+			return 0;
+		goto out;
+	}
 	mp->attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT + p->attr -
 		   TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
 	mp->u.rmem.shm_ref = (unsigned long)p->u.memref.shm;
-	mp->u.rmem.size = p->u.memref.size;
 	mp->u.rmem.offs = p->u.memref.shm_offs;
+out:
+	mp->u.rmem.size = p->u.memref.size;
 	return 0;
 }
@@ -223,11 +255,13 @@ static int to_msg_param_reg_mem(struct optee_msg_param *mp,
  * @msg_params:	OPTEE_MSG parameters
  * @num_params:	number of elements in the parameter arrays
  * @params:	subsystem itnernal parameter representation
+ * @update_out:	update parameter for output only
+ *
  * Returns 0 on success or <0 on failure
  */
 static int optee_to_msg_param(struct optee *optee,
 			      struct optee_msg_param *msg_params,
-			      size_t num_params, const struct tee_param *params)
+			      size_t num_params, const struct tee_param *params,
+			      bool update_out)
 {
 	int rc;
 	size_t n;
@@ -238,21 +272,23 @@ static int optee_to_msg_param(struct optee *optee,
 		switch (p->attr) {
 		case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
+			if (update_out)
+				break;
 			mp->attr = OPTEE_MSG_ATTR_TYPE_NONE;
 			memset(&mp->u, 0, sizeof(mp->u));
 			break;
 		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT:
 		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT:
 		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
-			optee_to_msg_param_value(mp, p);
+			optee_to_msg_param_value(mp, p, update_out);
 			break;
 		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
 		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
 		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
 			if (tee_shm_is_dynamic(p->u.memref.shm))
-				rc = to_msg_param_reg_mem(mp, p);
+				rc = to_msg_param_reg_mem(mp, p, update_out);
 			else
-				rc = to_msg_param_tmp_mem(mp, p);
+				rc = to_msg_param_tmp_mem(mp, p, update_out);
 			if (rc)
 				return rc;
 			break;
-- 2.43.0
Hi Jens,
On Mon, Mar 17, 2025 at 08:42:01AM +0100, Jens Wiklander wrote:
Hi Sumit,
On Thu, Mar 13, 2025 at 11:41 AM Sumit Garg sumit.garg@kernel.org wrote:
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:09PM +0100, Jens Wiklander wrote:
The OP-TEE backend driver has two internal function pointers to convert between the subsystem type struct tee_param and the OP-TEE type struct optee_msg_param.

The conversion is done from one of the types to the other, which is then involved in some operation and finally converted back to the original type. When converting to prepare the parameters for the operation, all fields must be taken into account, but when converting back, it's enough to update only out-values and out-sizes. So an update_out parameter is added to the conversion functions to tell if all or only some fields must be copied.

This is needed in a later patch where it might get confusing when converting back in the from_msg_param() callback, since an allocated restricted SHM can be using the sec_world_id of the used restricted memory pool and that doesn't translate back well.

Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
---
 drivers/tee/optee/call.c          | 10 ++--
 drivers/tee/optee/ffa_abi.c       | 43 +++++++++++++----
 drivers/tee/optee/optee_private.h | 42 +++++++++++------
 drivers/tee/optee/rpc.c           | 31 +++++++++----
 drivers/tee/optee/smc_abi.c       | 76 +++++++++++++++++++++++--------
 5 files changed, 144 insertions(+), 58 deletions(-)
diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c
index 16eb953e14bb..f1533b894726 100644
--- a/drivers/tee/optee/call.c
+++ b/drivers/tee/optee/call.c
@@ -400,7 +400,8 @@ int optee_open_session(struct tee_context *ctx,
 	export_uuid(msg_arg->params[1].u.octets, &client_uuid);

 	rc = optee->ops->to_msg_param(optee, msg_arg->params + 2,
-				      arg->num_params, param);
+				      arg->num_params, param,
+				      false /*!update_out*/);
 	if (rc)
 		goto out;
@@ -427,7 +428,8 @@ int optee_open_session(struct tee_context *ctx,
 	}

 	if (optee->ops->from_msg_param(optee, param, arg->num_params,
-				       msg_arg->params + 2)) {
+				       msg_arg->params + 2,
+				       true /*update_out*/)) {
 		arg->ret = TEEC_ERROR_COMMUNICATION;
 		arg->ret_origin = TEEC_ORIGIN_COMMS;
 		/* Close session again to avoid leakage */
@@ -541,7 +543,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg,
 	msg_arg->cancel_id = arg->cancel_id;

 	rc = optee->ops->to_msg_param(optee, msg_arg->params, arg->num_params,
-				      param);
+				      param, false /*!update_out*/);
 	if (rc)
 		goto out;
@@ -551,7 +553,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg,
 	}

 	if (optee->ops->from_msg_param(optee, param, arg->num_params,
-				       msg_arg->params)) {
+				       msg_arg->params, true /*update_out*/)) {
 		msg_arg->ret = TEEC_ERROR_COMMUNICATION;
 		msg_arg->ret_origin = TEEC_ORIGIN_COMMS;
 	}
diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c
index 4ca1d5161b82..e4b08cd195f3 100644
--- a/drivers/tee/optee/ffa_abi.c
+++ b/drivers/tee/optee/ffa_abi.c
@@ -122,15 +122,21 @@ static int optee_shm_rem_ffa_handle(struct optee *optee, u64 global_id)
  */
 static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p,
-				   u32 attr, const struct optee_msg_param *mp)
+				   u32 attr, const struct optee_msg_param *mp,
+				   bool update_out)
 {
 	struct tee_shm *shm = NULL;
 	u64 offs_high = 0;
 	u64 offs_low = 0;

+	if (update_out) {
+		if (attr == OPTEE_MSG_ATTR_TYPE_FMEM_INPUT)
+			return;
+		goto out;
+	}
 	p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr -
 		  OPTEE_MSG_ATTR_TYPE_FMEM_INPUT;
-	p->u.memref.size = mp->u.fmem.size;
 	if (mp->u.fmem.global_id != OPTEE_MSG_FMEM_INVALID_GLOBAL_ID)
 		shm = optee_shm_from_ffa_handle(optee, mp->u.fmem.global_id);
@@ -141,6 +147,8 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p,
 		offs_high = mp->u.fmem.offs_high;
 	}
 	p->u.memref.shm_offs = offs_low | offs_high << 32;
+out:
+	p->u.memref.size = mp->u.fmem.size;
 }
 /**
@@ -150,12 +158,14 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p,
  * @params:	subsystem internal parameter representation
  * @num_params:	number of elements in the parameter arrays
  * @msg_params:	OPTEE_MSG parameters
+ * @update_out:	update parameter for output only
+ *
  * Returns 0 on success or <0 on failure
  */
 static int optee_ffa_from_msg_param(struct optee *optee,
 				    struct tee_param *params, size_t num_params,
-				    const struct optee_msg_param *msg_params)
+				    const struct optee_msg_param *msg_params,
+				    bool update_out)
 {
 	size_t n;
@@ -166,18 +176,20 @@ static int optee_ffa_from_msg_param(struct optee *optee,
 		switch (attr) {
 		case OPTEE_MSG_ATTR_TYPE_NONE:
+			if (update_out)
+				break;
 			p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE;
 			memset(&p->u, 0, sizeof(p->u));
 			break;
 		case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT:
 		case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT:
 		case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
-			optee_from_msg_param_value(p, attr, mp);
+			optee_from_msg_param_value(p, attr, mp, update_out);
 			break;
 		case OPTEE_MSG_ATTR_TYPE_FMEM_INPUT:
 		case OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT:
 		case OPTEE_MSG_ATTR_TYPE_FMEM_INOUT:
-			from_msg_param_ffa_mem(optee, p, attr, mp);
+			from_msg_param_ffa_mem(optee, p, attr, mp, update_out);
 			break;
 		default:
 			return -EINVAL;
@@ -188,10 +200,16 @@ static int optee_ffa_from_msg_param(struct optee *optee,
 }

 static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
-				const struct tee_param *p)
+				const struct tee_param *p, bool update_out)
 {
 	struct tee_shm *shm = p->u.memref.shm;

+	if (update_out) {
+		if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT)
+			return 0;
+		goto out;
+	}
 	mp->attr = OPTEE_MSG_ATTR_TYPE_FMEM_INPUT + p->attr -
 		   TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;

@@ -211,6 +229,7 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
 		memset(&mp->u, 0, sizeof(mp->u));
 		mp->u.fmem.global_id = OPTEE_MSG_FMEM_INVALID_GLOBAL_ID;
 	}
+out:
 	mp->u.fmem.size = p->u.memref.size;

 	return 0;
@@ -222,13 +241,15 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
  * @optee:	main service struct
  * @msg_params:	OPTEE_MSG parameters
  * @num_params:	number of elements in the parameter arrays
- * @params:	subsystem itnernal parameter representation
+ * @params:	subsystem internal parameter representation
+ * @update_out:	update parameter for output only
+ *
  * Returns 0 on success or <0 on failure
  */
 static int optee_ffa_to_msg_param(struct optee *optee,
 				  struct optee_msg_param *msg_params,
 				  size_t num_params,
-				  const struct tee_param *params)
+				  const struct tee_param *params,
+				  bool update_out)
 {
 	size_t n;
@@ -238,18 +259,20 @@ static int optee_ffa_to_msg_param(struct optee *optee,
 		switch (p->attr) {
 		case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
+			if (update_out)
+				break;
 			mp->attr = OPTEE_MSG_ATTR_TYPE_NONE;
 			memset(&mp->u, 0, sizeof(mp->u));
 			break;
 		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT:
 		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT:
 		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
-			optee_to_msg_param_value(mp, p);
+			optee_to_msg_param_value(mp, p, update_out);
 			break;
 		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
 		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
 		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
-			if (to_msg_param_ffa_mem(mp, p))
+			if (to_msg_param_ffa_mem(mp, p, update_out))
 				return -EINVAL;
 			break;
 		default:
Can we rather handle it as follows to improve code readability and maintenance long term? Ditto for all other places.

static int optee_ffa_to_msg_param(struct optee *optee,
				  struct optee_msg_param *msg_params,
				  size_t num_params,
				  const struct tee_param *params,
				  bool update_out)
{
	size_t n;

	for (n = 0; n < num_params; n++) {
		const struct tee_param *p = params + n;
		struct optee_msg_param *mp = msg_params + n;

		if (update_out &&
		    (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_NONE ||
		     p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT ||
		     p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT))
			continue;
You're missing updating the length field for memrefs.
Do we need to update length field for input memrefs when update_out is set? I don't see that happening in your existing patch too.
-Sumit
Cheers, Jens
		switch (p->attr) {
		case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
			mp->attr = OPTEE_MSG_ATTR_TYPE_NONE;
			memset(&mp->u, 0, sizeof(mp->u));
			break;
		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
			optee_to_msg_param_value(mp, p);
			break;
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
			if (to_msg_param_ffa_mem(mp, p))
				return -EINVAL;
			break;
		default:
			return -EINVAL;
		}
	}

	return 0;
}
-Sumit
Hi Sumit,
On Thu, Mar 20, 2025 at 10:25 AM Sumit Garg sumit.garg@kernel.org wrote:
[snip quoted patch and review, unchanged from above]
You're missing updating the length field for memrefs.
Do we need to update length field for input memrefs when update_out is set? I don't see that happening in your existing patch too.
I'm sorry, I was unclear. The update_out parameter means only the output fields should be updated, not the attribute, offsets, ids, etc. That is, the length field for memrefs, and the value fields a, b, c for value params. Some of the memrefs aren't translated one-to-one with SDP, but the length field can and must be updated.
Cheers, Jens
On Thu, Mar 20, 2025 at 02:00:57PM +0100, Jens Wiklander wrote:
Hi Sumit,
[snip quoted patch and review, unchanged from above]
I'm sorry, I was unclear. The update_out parameter means only the output fields should be updated, not the attribute, offsets, ids, etc. That is, the length field for memrefs, and the value fields a, b, c for value params. Some of the memrefs aren't translated one-to-one with SDP, but the length field can and must be updated.
Isn't it rather better to add another attribute type to handled SDP special handling?
-Sumit
Cheers, Jens
-Sumit
Cheers, Jens
switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: optee_to_msg_param_value(mp, p); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: if (to_msg_param_ffa_mem(mp, p)) return -EINVAL; break; default: return -EINVAL; } } return 0;
}
-Sumit
On Tue, Mar 25, 2025 at 6:56 AM Sumit Garg sumit.garg@kernel.org wrote:
On Thu, Mar 20, 2025 at 02:00:57PM +0100, Jens Wiklander wrote:
Hi Sumit,
On Thu, Mar 20, 2025 at 10:25 AM Sumit Garg sumit.garg@kernel.org wrote:
Hi Jens,
On Mon, Mar 17, 2025 at 08:42:01AM +0100, Jens Wiklander wrote:
Hi Sumit,
On Thu, Mar 13, 2025 at 11:41 AM Sumit Garg sumit.garg@kernel.org wrote:
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:09PM +0100, Jens Wiklander wrote:
The OP-TEE backend driver has two internal function pointers to convert between the subsystem type struct tee_param and the OP-TEE type struct optee_msg_param.
The conversion is done from one of the types to the other, which is then involved in some operation and finally converted back to the original type. When converting to prepare the parameters for the operation, all fields must be taken into account, but then converting back, it's enough to update only out-values and out-sizes. So, an update_out parameter is added to the conversion functions to tell if all or only some fields must be copied.
This is needed in a later patch where it might get confusing when converting back in from_msg_param() callback since an allocated restricted SHM can be using the sec_world_id of the used restricted memory pool and that doesn't translate back well.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/optee/call.c | 10 ++-- drivers/tee/optee/ffa_abi.c | 43 +++++++++++++---- drivers/tee/optee/optee_private.h | 42 +++++++++++------ drivers/tee/optee/rpc.c | 31 +++++++++---- drivers/tee/optee/smc_abi.c | 76 +++++++++++++++++++++++-------- 5 files changed, 144 insertions(+), 58 deletions(-)
diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c index 16eb953e14bb..f1533b894726 100644 --- a/drivers/tee/optee/call.c +++ b/drivers/tee/optee/call.c @@ -400,7 +400,8 @@ int optee_open_session(struct tee_context *ctx, export_uuid(msg_arg->params[1].u.octets, &client_uuid);
rc = optee->ops->to_msg_param(optee, msg_arg->params + 2,
arg->num_params, param);
arg->num_params, param,
false /*!update_out*/); if (rc) goto out;
@@ -427,7 +428,8 @@ int optee_open_session(struct tee_context *ctx, }
if (optee->ops->from_msg_param(optee, param, arg->num_params,
msg_arg->params + 2)) {
msg_arg->params + 2,
true /*update_out*/)) { arg->ret = TEEC_ERROR_COMMUNICATION; arg->ret_origin = TEEC_ORIGIN_COMMS; /* Close session again to avoid leakage */
@@ -541,7 +543,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, msg_arg->cancel_id = arg->cancel_id;
rc = optee->ops->to_msg_param(optee, msg_arg->params, arg->num_params,
param);
param, false /*!update_out*/); if (rc) goto out;
@@ -551,7 +553,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg, }
if (optee->ops->from_msg_param(optee, param, arg->num_params,
msg_arg->params)) {
msg_arg->params, true /*update_out*/)) { msg_arg->ret = TEEC_ERROR_COMMUNICATION; msg_arg->ret_origin = TEEC_ORIGIN_COMMS; }
diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index 4ca1d5161b82..e4b08cd195f3 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -122,15 +122,21 @@ static int optee_shm_rem_ffa_handle(struct optee *optee, u64 global_id) */
static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p,
u32 attr, const struct optee_msg_param *mp)
u32 attr, const struct optee_msg_param *mp,
bool update_out)
{ struct tee_shm *shm = NULL; u64 offs_high = 0; u64 offs_low = 0;
if (update_out) {
if (attr == OPTEE_MSG_ATTR_TYPE_FMEM_INPUT)
return;
goto out;
}
p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_FMEM_INPUT;
p->u.memref.size = mp->u.fmem.size; if (mp->u.fmem.global_id != OPTEE_MSG_FMEM_INVALID_GLOBAL_ID) shm = optee_shm_from_ffa_handle(optee, mp->u.fmem.global_id);
@@ -141,6 +147,8 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, offs_high = mp->u.fmem.offs_high; } p->u.memref.shm_offs = offs_low | offs_high << 32; +out:
p->u.memref.size = mp->u.fmem.size;
}
/** @@ -150,12 +158,14 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p,
- @params: subsystem internal parameter representation
- @num_params: number of elements in the parameter arrays
- @msg_params: OPTEE_MSG parameters
- @update_out: update parameter for output only
- Returns 0 on success or <0 on failure
*/
static int optee_ffa_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params,
const struct optee_msg_param *msg_params)
const struct optee_msg_param *msg_params,
bool update_out)
{ size_t n;
@@ -166,18 +176,20 @@ static int optee_ffa_from_msg_param(struct optee *optee,
switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE:
if (update_out)
break; p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
optee_from_msg_param_value(p, attr, mp);
optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_FMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_INOUT:
from_msg_param_ffa_mem(optee, p, attr, mp);
from_msg_param_ffa_mem(optee, p, attr, mp, update_out); break; default: return -EINVAL;
@@ -188,10 +200,16 @@ static int optee_ffa_from_msg_param(struct optee *optee, }
static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
const struct tee_param *p)
const struct tee_param *p, bool update_out)
{ struct tee_shm *shm = p->u.memref.shm;
if (update_out) {
if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT)
return 0;
goto out;
}
mp->attr = OPTEE_MSG_ATTR_TYPE_FMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
@@ -211,6 +229,7 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp, memset(&mp->u, 0, sizeof(mp->u)); mp->u.fmem.global_id = OPTEE_MSG_FMEM_INVALID_GLOBAL_ID; } +out: mp->u.fmem.size = p->u.memref.size;
return 0;
@@ -222,13 +241,15 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp,
- @optee: main service struct
- @msg_params: OPTEE_MSG parameters
- @num_params: number of elements in the parameter arrays
- @params: subsystem itnernal parameter representation
- @params: subsystem internal parameter representation
- @update_out: update parameter for output only
- Returns 0 on success or <0 on failure
*/
static int optee_ffa_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, size_t num_params,
const struct tee_param *params)
const struct tee_param *params,
bool update_out)
{ size_t n;
@@ -238,18 +259,20 @@ static int optee_ffa_to_msg_param(struct optee *optee,
switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
if (update_out)
break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
optee_to_msg_param_value(mp, p);
optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
if (to_msg_param_ffa_mem(mp, p))
if (to_msg_param_ffa_mem(mp, p, update_out)) return -EINVAL; break; default:
Can we rather handle it as follows to improve code readability and maintenance in the long term? Ditto for all the other places.
static int optee_ffa_to_msg_param(struct optee *optee,
				  struct optee_msg_param *msg_params,
				  size_t num_params,
				  const struct tee_param *params,
				  bool update_out)
{
	size_t n;

	for (n = 0; n < num_params; n++) {
		const struct tee_param *p = params + n;
		struct optee_msg_param *mp = msg_params + n;

		if (update_out && (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_NONE ||
				   p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT ||
				   p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT))
			continue;
You're missing updating the length field for memrefs.
Do we need to update the length field for input memrefs when update_out is set? I don't see that happening in your existing patch either.
I'm sorry, I was unclear. The update_out parameter means only the output fields should be updated, not the attribute, offsets, ids, etc. That is, the length field for memrefs, and the value fields a, b, c for value params. Some of the memrefs aren't translated one-to-one with SDP, but the length field can and must be updated.
Isn't it better to add another attribute type to handle SDP's special handling?
This isn't special handling, all parameters get the same treatment. When updating a parameter after it has been used, this is all that needs to be done, regardless of whether it's an SDP buffer. The updates we did before this patch were redundant.
This patch was introduced in the v3 of this patch set, but I don't think it's strictly needed any longer since SDP buffers are allocated differently now. I think it's nice to only update what's needed when translating back a parameter (just as in params_to_user() in drivers/tee/tee_core.c), but if you don't like it, we can drop this patch.
Cheers, Jens
-Sumit
Cheers, Jens
-Sumit
Cheers, Jens
		switch (p->attr) {
		case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
			mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE;
			memset(&mp->u, 0, sizeof(mp->u));
			break;
		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
			optee_to_msg_param_value(mp, p);
			break;
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
			if (to_msg_param_ffa_mem(mp, p))
				return -EINVAL;
			break;
		default:
			return -EINVAL;
		}
	}

	return 0;
}
-Sumit
On Tue, Mar 25, 2025 at 09:50:35AM +0100, Jens Wiklander wrote:
On Tue, Mar 25, 2025 at 6:56 AM Sumit Garg sumit.garg@kernel.org wrote:
On Thu, Mar 20, 2025 at 02:00:57PM +0100, Jens Wiklander wrote:
Hi Sumit,
On Thu, Mar 20, 2025 at 10:25 AM Sumit Garg sumit.garg@kernel.org wrote:
Hi Jens,
On Mon, Mar 17, 2025 at 08:42:01AM +0100, Jens Wiklander wrote:
Hi Sumit,
On Thu, Mar 13, 2025 at 11:41 AM Sumit Garg sumit.garg@kernel.org wrote:
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:09PM +0100, Jens Wiklander wrote:
Can we rather handle it as follows to improve code readability and maintenance in the long term? Ditto for all the other places.
static int optee_ffa_to_msg_param(struct optee *optee,
				  struct optee_msg_param *msg_params,
				  size_t num_params,
				  const struct tee_param *params,
				  bool update_out)
{
	size_t n;

	for (n = 0; n < num_params; n++) {
		const struct tee_param *p = params + n;
		struct optee_msg_param *mp = msg_params + n;

		if (update_out && (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_NONE ||
				   p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT ||
				   p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT))
			continue;
You're missing updating the length field for memrefs.
Do we need to update the length field for input memrefs when update_out is set? I don't see that happening in your existing patch either.
I'm sorry, I was unclear. The update_out parameter means only the output fields should be updated, not the attribute, offsets, ids, etc. That is, the length field for memrefs, and the value fields a, b, c for value params. Some of the memrefs aren't translated one-to-one with SDP, but the length field can and must be updated.
Isn't it better to add another attribute type to handle SDP's special handling?
This isn't special handling, all parameters get the same treatment. When updating a parameter after it has been used, this is all that needs to be done, regardless of whether it's an SDP buffer. The updates we did before this patch were redundant.
This patch was introduced in the v3 of this patch set, but I don't think it's strictly needed any longer since SDP buffers are allocated differently now. I think it's nice to only update what's needed when translating back a parameter (just as in params_to_user() in drivers/tee/tee_core.c), but if you don't like it, we can drop this patch.
params_to_user() doesn't take any additional parameter like "update_out", which complicates the program flow here. Can't we rather follow a similar practice for the {to/from}_msg_param() APIs?
-Sumit
On Tue, Apr 1, 2025 at 9:45 AM Sumit Garg sumit.garg@kernel.org wrote:
On Tue, Mar 25, 2025 at 09:50:35AM +0100, Jens Wiklander wrote:
On Tue, Mar 25, 2025 at 6:56 AM Sumit Garg sumit.garg@kernel.org wrote:
On Thu, Mar 20, 2025 at 02:00:57PM +0100, Jens Wiklander wrote:
Hi Sumit,
On Thu, Mar 20, 2025 at 10:25 AM Sumit Garg sumit.garg@kernel.org wrote:
Hi Jens,
On Mon, Mar 17, 2025 at 08:42:01AM +0100, Jens Wiklander wrote:
Hi Sumit,
On Thu, Mar 13, 2025 at 11:41 AM Sumit Garg sumit.garg@kernel.org wrote:
Can we rather handle it as follows to improve code readability and maintenance in the long term? Ditto for all the other places.

if (update_out && (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_NONE ||
		   p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT ||
		   p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT))
	continue;
You're missing updating the length field for memrefs.
Do we need to update the length field for input memrefs when update_out is set? I don't see that happening in your existing patch either.
I'm sorry, I was unclear. The update_out parameter means only the output fields should be updated, not the attribute, offsets, ids, etc. That is, the length field for memrefs, and the value fields a, b, c for value params. Some of the memrefs aren't translated one-to-one with SDP, but the length field can and must be updated.
Isn't it better to add another attribute type to handle SDP's special handling?
This isn't special handling, all parameters get the same treatment. When updating a parameter after it has been used, this is all that needs to be done, regardless of whether it's an SDP buffer. The updates we did before this patch were redundant.
This patch was introduced in the v3 of this patch set, but I don't think it's strictly needed any longer since SDP buffers are allocated differently now. I think it's nice to only update what's needed when translating back a parameter (just as in params_to_user() in drivers/tee/tee_core.c), but if you don't like it, we can drop this patch.
params_to_user() doesn't take any additional parameter like "update_out", which complicates the program flow here. Can't we rather follow a similar practice for the {to/from}_msg_param() APIs?
I'm afraid something special is needed: handle_rpc_supp_cmd() needs all the fields to be updated, from_msg_param() prepares input, and the attribute type may have changed by the time to_msg_param() is called.
Cheers, Jens
Update the header files describing the secure world ABI, both with and without FF-A. The ABI is extended to deal with restricted memory but, as usual, it remains backward compatible.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org --- drivers/tee/optee/optee_ffa.h | 27 ++++++++++--- drivers/tee/optee/optee_msg.h | 65 ++++++++++++++++++++++++++++++-- drivers/tee/optee/optee_smc.h | 71 ++++++++++++++++++++++++++++++++++- 3 files changed, 154 insertions(+), 9 deletions(-)
diff --git a/drivers/tee/optee/optee_ffa.h b/drivers/tee/optee/optee_ffa.h index 257735ae5b56..7bd037200343 100644 --- a/drivers/tee/optee/optee_ffa.h +++ b/drivers/tee/optee/optee_ffa.h @@ -81,7 +81,7 @@ * as the second MSG arg struct for * OPTEE_FFA_YIELDING_CALL_WITH_ARG. * Bit[31:8]: Reserved (MBZ) - * w5: Bitfield of secure world capabilities OPTEE_FFA_SEC_CAP_* below, + * w5: Bitfield of OP-TEE capabilities OPTEE_FFA_SEC_CAP_* * w6: The maximum secure world notification number * w7: Not used (MBZ) */ @@ -94,6 +94,8 @@ #define OPTEE_FFA_SEC_CAP_ASYNC_NOTIF BIT(1) /* OP-TEE supports probing for RPMB device if needed */ #define OPTEE_FFA_SEC_CAP_RPMB_PROBE BIT(2) +/* OP-TEE supports Restricted Memory for secure data path */ +#define OPTEE_FFA_SEC_CAP_RSTMEM BIT(3)
#define OPTEE_FFA_EXCHANGE_CAPABILITIES OPTEE_FFA_BLOCKING_CALL(2)
@@ -108,7 +110,7 @@ * * Return register usage: * w3: Error code, 0 on success - * w4-w7: Note used (MBZ) + * w4-w7: Not used (MBZ) */ #define OPTEE_FFA_UNREGISTER_SHM OPTEE_FFA_BLOCKING_CALL(3)
@@ -119,16 +121,31 @@ * Call register usage: * w3: Service ID, OPTEE_FFA_ENABLE_ASYNC_NOTIF * w4: Notification value to request bottom half processing, should be - * less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE. + * less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE * w5-w7: Not used (MBZ) * * Return register usage: * w3: Error code, 0 on success - * w4-w7: Note used (MBZ) + * w4-w7: Not used (MBZ) */ #define OPTEE_FFA_ENABLE_ASYNC_NOTIF OPTEE_FFA_BLOCKING_CALL(5)
-#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64 +#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64 + +/* + * Release Restricted memory + * + * Call register usage: + * w3: Service ID, OPTEE_FFA_RELEASE_RSTMEM + * w4: Shared memory handle, lower bits + * w5: Shared memory handle, higher bits + * w6-w7: Not used (MBZ) + * + * Return register usage: + * w3: Error code, 0 on success + * w4-w7: Not used (MBZ) + */ +#define OPTEE_FFA_RELEASE_RSTMEM OPTEE_FFA_BLOCKING_CALL(8)
/* * Call with struct optee_msg_arg as argument in the supplied shared memory diff --git a/drivers/tee/optee/optee_msg.h b/drivers/tee/optee/optee_msg.h index e8840a82b983..1b558526e7d9 100644 --- a/drivers/tee/optee/optee_msg.h +++ b/drivers/tee/optee/optee_msg.h @@ -133,13 +133,13 @@ struct optee_msg_param_rmem { };
/** - * struct optee_msg_param_fmem - ffa memory reference parameter + * struct optee_msg_param_fmem - FF-A memory reference parameter * @offs_lower: Lower bits of offset into shared memory reference * @offs_upper: Upper bits of offset into shared memory reference * @internal_offs: Internal offset into the first page of shared memory * reference * @size: Size of the buffer - * @global_id: Global identifier of Shared memory + * @global_id: Global identifier of the shared memory */ struct optee_msg_param_fmem { u32 offs_low; @@ -165,7 +165,7 @@ struct optee_msg_param_value { * @attr: attributes * @tmem: parameter by temporary memory reference * @rmem: parameter by registered memory reference - * @fmem: parameter by ffa registered memory reference + * @fmem: parameter by FF-A registered memory reference * @value: parameter by opaque value * @octets: parameter by octet string * @@ -296,6 +296,18 @@ struct optee_msg_arg { */ #define OPTEE_MSG_FUNCID_GET_OS_REVISION 0x0001
+/* + * Values used in OPTEE_MSG_CMD_LEND_RSTMEM below + * OPTEE_MSG_RSTMEM_RESERVED Reserved + * OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY Secure Video Playback + * OPTEE_MSG_RSTMEM_TRUSTED_UI Trusted UI + * OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD Secure Video Recording + */ +#define OPTEE_MSG_RSTMEM_RESERVED 0 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY 1 +#define OPTEE_MSG_RSTMEM_TRUSTED_UI 2 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD 3 + /* * Do a secure call with struct optee_msg_arg as argument * The OPTEE_MSG_CMD_* below defines what goes in struct optee_msg_arg::cmd @@ -337,6 +349,49 @@ struct optee_msg_arg { * OPTEE_MSG_CMD_STOP_ASYNC_NOTIF informs secure world that from now is * normal world unable to process asynchronous notifications. Typically * used when the driver is shut down. + * + * OPTEE_MSG_CMD_LEND_RSTMEM lends restricted memory. The passed normal + * physical memory is restricted from normal world access. The memory + * should be unmapped prior to this call since it becomes inaccessible + * during the request. + * Parameters are passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a OPTEE_MSG_RSTMEM_* defined above + * [in] param[1].attr OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + * [in] param[1].u.tmem.buf_ptr physical address + * [in] param[1].u.tmem.size size + * [in] param[1].u.tmem.shm_ref holds restricted memory reference + * + * OPTEE_MSG_CMD_RECLAIM_RSTMEM reclaims a previously lent restricted + * memory reference. The physical memory is accessible by the normal world + * after this function has returned and can be mapped again. The information + * is passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a holds restricted memory cookie + * + * OPTEE_MSG_CMD_GET_RSTMEM_CONFIG gets the configuration for a specific + * restricted memory use case. 
Parameters are passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INOUT + * [in] param[0].value.a OPTEE_MSG_RSTMEM_* + * [in] param[1].attr OPTEE_MSG_ATTR_TYPE_{R,F}MEM_OUTPUT + * [in] param[1].u.{r,f}mem Buffer or NULL + * [in] param[1].u.{r,f}mem.size Provided size of buffer or 0 for query + * output for the restricted use case: + * [out] param[0].value.a Minimal size of SDP memory + * [out] param[0].value.b Required alignment of size and start of + * restricted memory + * [out] param[1].{r,f}mem.size Size of output data + * [out] param[1].{r,f}mem If non-NULL, contains an array of + * uint16_t holding endpoints that + * must be included when lending + * memory for this use case + * + * OPTEE_MSG_CMD_ASSIGN_RSTMEM assigns use-case to restricted memory + * previously lent using the FFA_LEND framework ABI. Parameters are passed + * as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a holds restricted memory cookie + * [in] param[0].u.value.b OPTEE_MSG_RSTMEM_* defined above */ #define OPTEE_MSG_CMD_OPEN_SESSION 0 #define OPTEE_MSG_CMD_INVOKE_COMMAND 1 @@ -346,6 +401,10 @@ struct optee_msg_arg { #define OPTEE_MSG_CMD_UNREGISTER_SHM 5 #define OPTEE_MSG_CMD_DO_BOTTOM_HALF 6 #define OPTEE_MSG_CMD_STOP_ASYNC_NOTIF 7 +#define OPTEE_MSG_CMD_LEND_RSTMEM 8 +#define OPTEE_MSG_CMD_RECLAIM_RSTMEM 9 +#define OPTEE_MSG_CMD_GET_RSTMEM_CONFIG 10 +#define OPTEE_MSG_CMD_ASSIGN_RSTMEM 11 #define OPTEE_MSG_FUNCID_CALL_WITH_ARG 0x0004
#endif /* _OPTEE_MSG_H */ diff --git a/drivers/tee/optee/optee_smc.h b/drivers/tee/optee/optee_smc.h index 879426300821..abc379ce190c 100644 --- a/drivers/tee/optee/optee_smc.h +++ b/drivers/tee/optee/optee_smc.h @@ -264,7 +264,6 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM BIT(0) /* Secure world can communicate via previously unregistered shared memory */ #define OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM BIT(1) - /* * Secure world supports commands "register/unregister shared memory", * secure world accepts command buffers located in any parts of non-secure RAM @@ -280,6 +279,10 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_RPC_ARG BIT(6) /* Secure world supports probing for RPMB device if needed */ #define OPTEE_SMC_SEC_CAP_RPMB_PROBE BIT(7) +/* Secure world supports Secure Data Path */ +#define OPTEE_SMC_SEC_CAP_SDP BIT(8) +/* Secure world supports dynamic restricted memory */ +#define OPTEE_SMC_SEC_CAP_DYNAMIC_RSTMEM BIT(9)
#define OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES 9 #define OPTEE_SMC_EXCHANGE_CAPABILITIES \ @@ -451,6 +454,72 @@ struct optee_smc_disable_shm_cache_result {
/* See OPTEE_SMC_CALL_WITH_REGD_ARG above */ #define OPTEE_SMC_FUNCID_CALL_WITH_REGD_ARG 19 +/* + * Get Secure Data Path memory config + * + * Returns the Secure Data Path memory config. + * + * Call register usage: + * a0 SMC Function ID, OPTEE_SMC_GET_SDP_CONFIG + * a2-6 Not used, must be zero + * a7 Hypervisor Client ID register + * + * Have config return register usage: + * a0 OPTEE_SMC_RETURN_OK + * a1 Physical address of start of SDP memory + * a2 Size of SDP memory + * a3 Not used + * a4-7 Preserved + * + * Not available register usage: + * a0 OPTEE_SMC_RETURN_ENOTAVAIL + * a1-3 Not used + * a4-7 Preserved + */ +#define OPTEE_SMC_FUNCID_GET_SDP_CONFIG 20 +#define OPTEE_SMC_GET_SDP_CONFIG \ + OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_SDP_CONFIG) + +struct optee_smc_get_sdp_config_result { + unsigned long status; + unsigned long start; + unsigned long size; + unsigned long flags; +}; + +/* + * Get Secure Data Path dynamic memory config + * + * Returns the Secure Data Path dynamic memory config. + * + * Call register usage: + * a0 SMC Function ID, OPTEE_SMC_GET_DYN_SDP_CONFIG + * a2-6 Not used, must be zero + * a7 Hypervisor Client ID register + * + * Have config return register usage: + * a0 OPTEE_SMC_RETURN_OK + * a1 Minimal size of SDP memory + * a2 Required alignment of size and start of registered SDP memory + * a3 Not used + * a4-7 Preserved + * + * Not available register usage: + * a0 OPTEE_SMC_RETURN_ENOTAVAIL + * a1-3 Not used + * a4-7 Preserved + */ + +#define OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG 21 +#define OPTEE_SMC_GET_DYN_SDP_CONFIG \ + OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG) + +struct optee_smc_get_dyn_sdp_config_result { + unsigned long status; + unsigned long size; + unsigned long align; + unsigned long flags; +};
/* * Resume from RPC (for example after processing a foreign interrupt)
Hi Jens,
It has taken a bit of time for me to review this patch-set as I am settling in my new role.
On Wed, Mar 05, 2025 at 02:04:10PM +0100, Jens Wiklander wrote:
Update the header files describing the secure world ABI, both with and without FF-A. The ABI is extended to deal with restricted memory, but remains, as usual, backward compatible.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/optee/optee_ffa.h | 27 ++++++++++--- drivers/tee/optee/optee_msg.h | 65 ++++++++++++++++++++++++++++++-- drivers/tee/optee/optee_smc.h | 71 ++++++++++++++++++++++++++++++++++- 3 files changed, 154 insertions(+), 9 deletions(-)
diff --git a/drivers/tee/optee/optee_ffa.h b/drivers/tee/optee/optee_ffa.h index 257735ae5b56..7bd037200343 100644 --- a/drivers/tee/optee/optee_ffa.h +++ b/drivers/tee/optee/optee_ffa.h @@ -81,7 +81,7 @@
as the second MSG arg struct for
OPTEE_FFA_YIELDING_CALL_WITH_ARG.
Bit[31:8]: Reserved (MBZ)
- w5: Bitfield of secure world capabilities OPTEE_FFA_SEC_CAP_* below,
*/
- w5: Bitfield of OP-TEE capabilities OPTEE_FFA_SEC_CAP_*
- w6: The maximum secure world notification number
- w7: Not used (MBZ)
@@ -94,6 +94,8 @@ #define OPTEE_FFA_SEC_CAP_ASYNC_NOTIF BIT(1) /* OP-TEE supports probing for RPMB device if needed */ #define OPTEE_FFA_SEC_CAP_RPMB_PROBE BIT(2) +/* OP-TEE supports Restricted Memory for secure data path */ +#define OPTEE_FFA_SEC_CAP_RSTMEM BIT(3) #define OPTEE_FFA_EXCHANGE_CAPABILITIES OPTEE_FFA_BLOCKING_CALL(2) @@ -108,7 +110,7 @@
- Return register usage:
- w3: Error code, 0 on success
- w4-w7: Note used (MBZ)
*/
- w4-w7: Not used (MBZ)
#define OPTEE_FFA_UNREGISTER_SHM OPTEE_FFA_BLOCKING_CALL(3) @@ -119,16 +121,31 @@
- Call register usage:
- w3: Service ID, OPTEE_FFA_ENABLE_ASYNC_NOTIF
- w4: Notification value to request bottom half processing, should be
less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE.
less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE
- w5-w7: Not used (MBZ)
- Return register usage:
- w3: Error code, 0 on success
- w4-w7: Note used (MBZ)
*/
- w4-w7: Not used (MBZ)
#define OPTEE_FFA_ENABLE_ASYNC_NOTIF OPTEE_FFA_BLOCKING_CALL(5) -#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64 +#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64
+/*
- Release Restricted memory
- Call register usage:
- w3: Service ID, OPTEE_FFA_RELEASE_RSTMEM
- w4: Shared memory handle, lower bits
- w5: Shared memory handle, higher bits
- w6-w7: Not used (MBZ)
- Return register usage:
- w3: Error code, 0 on success
- w4-w7: Not used (MBZ)
- */
+#define OPTEE_FFA_RELEASE_RSTMEM OPTEE_FFA_BLOCKING_CALL(8) /*
- Call with struct optee_msg_arg as argument in the supplied shared memory
diff --git a/drivers/tee/optee/optee_msg.h b/drivers/tee/optee/optee_msg.h index e8840a82b983..1b558526e7d9 100644 --- a/drivers/tee/optee/optee_msg.h +++ b/drivers/tee/optee/optee_msg.h @@ -133,13 +133,13 @@ struct optee_msg_param_rmem { }; /**
- struct optee_msg_param_fmem - ffa memory reference parameter
- struct optee_msg_param_fmem - FF-A memory reference parameter
- @offs_lower: Lower bits of offset into shared memory reference
- @offs_upper: Upper bits of offset into shared memory reference
- @internal_offs: Internal offset into the first page of shared memory
reference
- @size: Size of the buffer
- @global_id: Global identifier of Shared memory
*/
- @global_id: Global identifier of the shared memory
struct optee_msg_param_fmem { u32 offs_low; @@ -165,7 +165,7 @@ struct optee_msg_param_value {
- @attr: attributes
- @tmem: parameter by temporary memory reference
- @rmem: parameter by registered memory reference
- @fmem: parameter by ffa registered memory reference
- @fmem: parameter by FF-A registered memory reference
- @value: parameter by opaque value
- @octets: parameter by octet string
@@ -296,6 +296,18 @@ struct optee_msg_arg { */ #define OPTEE_MSG_FUNCID_GET_OS_REVISION 0x0001 +/*
- Values used in OPTEE_MSG_CMD_LEND_RSTMEM below
- OPTEE_MSG_RSTMEM_RESERVED Reserved
- OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY Secure Video Playback
- OPTEE_MSG_RSTMEM_TRUSTED_UI Trusted UI
- OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD Secure Video Recording
- */
+#define OPTEE_MSG_RSTMEM_RESERVED 0 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY 1 +#define OPTEE_MSG_RSTMEM_TRUSTED_UI 2 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD 3
/*
- Do a secure call with struct optee_msg_arg as argument
- The OPTEE_MSG_CMD_* below defines what goes in struct optee_msg_arg::cmd
@@ -337,6 +349,49 @@ struct optee_msg_arg {
- OPTEE_MSG_CMD_STOP_ASYNC_NOTIF informs secure world that from now is
- normal world unable to process asynchronous notifications. Typically
- used when the driver is shut down.
- OPTEE_MSG_CMD_LEND_RSTMEM lends restricted memory. The passed normal
- physical memory is restricted from normal world access. The memory
- should be unmapped prior to this call since it becomes inaccessible
- during the request.
- Parameters are passed as:
- [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT
- [in] param[0].u.value.a OPTEE_MSG_RSTMEM_* defined above
- [in] param[1].attr OPTEE_MSG_ATTR_TYPE_TMEM_INPUT
- [in] param[1].u.tmem.buf_ptr physical address
- [in] param[1].u.tmem.size size
- [in] param[1].u.tmem.shm_ref holds restricted memory reference
- OPTEE_MSG_CMD_RECLAIM_RSTMEM reclaims a previously lent restricted
- memory reference. The physical memory is accessible by the normal world
- after this function has returned and can be mapped again. The information
- is passed as:
- [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT
- [in] param[0].u.value.a holds restricted memory cookie
- OPTEE_MSG_CMD_GET_RSTMEM_CONFIG gets the configuration for a specific
- restricted memory use case. Parameters are passed as:
- [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INOUT
- [in] param[0].value.a OPTEE_MSG_RSTMEM_*
- [in] param[1].attr OPTEE_MSG_ATTR_TYPE_{R,F}MEM_OUTPUT
- [in] param[1].u.{r,f}mem Buffer or NULL
- [in] param[1].u.{r,f}mem.size Provided size of buffer or 0 for query
- output for the restricted use case:
- [out] param[0].value.a Minimal size of SDP memory
- [out] param[0].value.b Required alignment of size and start of
restricted memory
- [out] param[1].{r,f}mem.size Size of output data
- [out] param[1].{r,f}mem If non-NULL, contains an array of
uint16_t holding endpoints that
must be included when lending
memory for this use case
- OPTEE_MSG_CMD_ASSIGN_RSTMEM assigns use-case to restricted memory
- previously lent using the FFA_LEND framework ABI. Parameters are passed
- as:
- [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT
- [in] param[0].u.value.a holds restricted memory cookie
*/
- [in] param[0].u.value.b OPTEE_MSG_RSTMEM_* defined above
#define OPTEE_MSG_CMD_OPEN_SESSION 0 #define OPTEE_MSG_CMD_INVOKE_COMMAND 1 @@ -346,6 +401,10 @@ struct optee_msg_arg { #define OPTEE_MSG_CMD_UNREGISTER_SHM 5 #define OPTEE_MSG_CMD_DO_BOTTOM_HALF 6 #define OPTEE_MSG_CMD_STOP_ASYNC_NOTIF 7 +#define OPTEE_MSG_CMD_LEND_RSTMEM 8 +#define OPTEE_MSG_CMD_RECLAIM_RSTMEM 9 +#define OPTEE_MSG_CMD_GET_RSTMEM_CONFIG 10 +#define OPTEE_MSG_CMD_ASSIGN_RSTMEM 11 #define OPTEE_MSG_FUNCID_CALL_WITH_ARG 0x0004 #endif /* _OPTEE_MSG_H */ diff --git a/drivers/tee/optee/optee_smc.h b/drivers/tee/optee/optee_smc.h index 879426300821..abc379ce190c 100644 --- a/drivers/tee/optee/optee_smc.h +++ b/drivers/tee/optee/optee_smc.h @@ -264,7 +264,6 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM BIT(0) /* Secure world can communicate via previously unregistered shared memory */ #define OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM BIT(1)
/*
- Secure world supports commands "register/unregister shared memory",
- secure world accepts command buffers located in any parts of non-secure RAM
@@ -280,6 +279,10 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_RPC_ARG BIT(6) /* Secure world supports probing for RPMB device if needed */ #define OPTEE_SMC_SEC_CAP_RPMB_PROBE BIT(7) +/* Secure world supports Secure Data Path */ +#define OPTEE_SMC_SEC_CAP_SDP BIT(8) +/* Secure world supports dynamic restricted memory */ +#define OPTEE_SMC_SEC_CAP_DYNAMIC_RSTMEM BIT(9) #define OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES 9 #define OPTEE_SMC_EXCHANGE_CAPABILITIES \ @@ -451,6 +454,72 @@ struct optee_smc_disable_shm_cache_result { /* See OPTEE_SMC_CALL_WITH_REGD_ARG above */ #define OPTEE_SMC_FUNCID_CALL_WITH_REGD_ARG 19 +/*
- Get Secure Data Path memory config
- Returns the Secure Data Path memory config.
- Call register usage:
- a0 SMC Function ID, OPTEE_SMC_GET_SDP_CONFIG
- a2-6 Not used, must be zero
- a7 Hypervisor Client ID register
- Have config return register usage:
- a0 OPTEE_SMC_RETURN_OK
- a1 Physical address of start of SDP memory
- a2 Size of SDP memory
- a3 Not used
- a4-7 Preserved
- Not available register usage:
- a0 OPTEE_SMC_RETURN_ENOTAVAIL
- a1-3 Not used
- a4-7 Preserved
- */
+#define OPTEE_SMC_FUNCID_GET_SDP_CONFIG 20
Let's have a consistent ABI naming here. I think RSTMEM is more generic than SDP, so let's use the same naming convention as we use for the FF-A ABI.
-Sumit
+#define OPTEE_SMC_GET_SDP_CONFIG \
- OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_SDP_CONFIG)
+struct optee_smc_get_sdp_config_result {
- unsigned long status;
- unsigned long start;
- unsigned long size;
- unsigned long flags;
+};
+/*
- Get Secure Data Path dynamic memory config
- Returns the Secure Data Path dynamic memory config.
- Call register usage:
- a0 SMC Function ID, OPTEE_SMC_GET_DYN_SDP_CONFIG
- a2-6 Not used, must be zero
- a7 Hypervisor Client ID register
- Have config return register usage:
- a0 OPTEE_SMC_RETURN_OK
- a1 Minimal size of SDP memory
- a2 Required alignment of size and start of registered SDP memory
- a3 Not used
- a4-7 Preserved
- Not available register usage:
- a0 OPTEE_SMC_RETURN_ENOTAVAIL
- a1-3 Not used
- a4-7 Preserved
- */
+#define OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG 21 +#define OPTEE_SMC_GET_DYN_SDP_CONFIG \
- OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG)
+struct optee_smc_get_dyn_sdp_config_result {
- unsigned long status;
- unsigned long size;
- unsigned long align;
- unsigned long flags;
+}; /*
- Resume from RPC (for example after processing a foreign interrupt)
-- 2.43.0
Hi Sumit,
On Tue, Mar 25, 2025 at 7:20 AM Sumit Garg sumit.garg@kernel.org wrote:
Hi Jens,
It has taken a bit of time for me to review this patch-set as I am settling in my new role.
On Wed, Mar 05, 2025 at 02:04:10PM +0100, Jens Wiklander wrote:
Update the header files describing the secure world ABI, both with and without FF-A. The ABI is extended to deal with restricted memory, but remains, as usual, backward compatible.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/optee/optee_ffa.h | 27 ++++++++++--- drivers/tee/optee/optee_msg.h | 65 ++++++++++++++++++++++++++++++-- drivers/tee/optee/optee_smc.h | 71 ++++++++++++++++++++++++++++++++++- 3 files changed, 154 insertions(+), 9 deletions(-)
diff --git a/drivers/tee/optee/optee_ffa.h b/drivers/tee/optee/optee_ffa.h index 257735ae5b56..7bd037200343 100644 --- a/drivers/tee/optee/optee_ffa.h +++ b/drivers/tee/optee/optee_ffa.h @@ -81,7 +81,7 @@
as the second MSG arg struct for
OPTEE_FFA_YIELDING_CALL_WITH_ARG.
Bit[31:8]: Reserved (MBZ)
- w5: Bitfield of secure world capabilities OPTEE_FFA_SEC_CAP_* below,
*/
- w5: Bitfield of OP-TEE capabilities OPTEE_FFA_SEC_CAP_*
- w6: The maximum secure world notification number
- w7: Not used (MBZ)
@@ -94,6 +94,8 @@ #define OPTEE_FFA_SEC_CAP_ASYNC_NOTIF BIT(1) /* OP-TEE supports probing for RPMB device if needed */ #define OPTEE_FFA_SEC_CAP_RPMB_PROBE BIT(2) +/* OP-TEE supports Restricted Memory for secure data path */ +#define OPTEE_FFA_SEC_CAP_RSTMEM BIT(3)
#define OPTEE_FFA_EXCHANGE_CAPABILITIES OPTEE_FFA_BLOCKING_CALL(2)
@@ -108,7 +110,7 @@
- Return register usage:
- w3: Error code, 0 on success
- w4-w7: Note used (MBZ)
*/
- w4-w7: Not used (MBZ)
#define OPTEE_FFA_UNREGISTER_SHM OPTEE_FFA_BLOCKING_CALL(3)
@@ -119,16 +121,31 @@
- Call register usage:
- w3: Service ID, OPTEE_FFA_ENABLE_ASYNC_NOTIF
- w4: Notification value to request bottom half processing, should be
less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE.
less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE
- w5-w7: Not used (MBZ)
- Return register usage:
- w3: Error code, 0 on success
- w4-w7: Note used (MBZ)
*/
- w4-w7: Not used (MBZ)
#define OPTEE_FFA_ENABLE_ASYNC_NOTIF OPTEE_FFA_BLOCKING_CALL(5)
-#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64 +#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64
+/*
- Release Restricted memory
- Call register usage:
- w3: Service ID, OPTEE_FFA_RELEASE_RSTMEM
- w4: Shared memory handle, lower bits
- w5: Shared memory handle, higher bits
- w6-w7: Not used (MBZ)
- Return register usage:
- w3: Error code, 0 on success
- w4-w7: Not used (MBZ)
- */
+#define OPTEE_FFA_RELEASE_RSTMEM OPTEE_FFA_BLOCKING_CALL(8)
/*
- Call with struct optee_msg_arg as argument in the supplied shared memory
diff --git a/drivers/tee/optee/optee_msg.h b/drivers/tee/optee/optee_msg.h index e8840a82b983..1b558526e7d9 100644 --- a/drivers/tee/optee/optee_msg.h +++ b/drivers/tee/optee/optee_msg.h @@ -133,13 +133,13 @@ struct optee_msg_param_rmem { };
/**
- struct optee_msg_param_fmem - ffa memory reference parameter
- struct optee_msg_param_fmem - FF-A memory reference parameter
- @offs_lower: Lower bits of offset into shared memory reference
- @offs_upper: Upper bits of offset into shared memory reference
- @internal_offs: Internal offset into the first page of shared memory
reference
- @size: Size of the buffer
- @global_id: Global identifier of Shared memory
*/
- @global_id: Global identifier of the shared memory
struct optee_msg_param_fmem { u32 offs_low; @@ -165,7 +165,7 @@ struct optee_msg_param_value {
- @attr: attributes
- @tmem: parameter by temporary memory reference
- @rmem: parameter by registered memory reference
- @fmem: parameter by ffa registered memory reference
- @fmem: parameter by FF-A registered memory reference
- @value: parameter by opaque value
- @octets: parameter by octet string
@@ -296,6 +296,18 @@ struct optee_msg_arg { */ #define OPTEE_MSG_FUNCID_GET_OS_REVISION 0x0001
+/*
- Values used in OPTEE_MSG_CMD_LEND_RSTMEM below
- OPTEE_MSG_RSTMEM_RESERVED Reserved
- OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY Secure Video Playback
- OPTEE_MSG_RSTMEM_TRUSTED_UI Trusted UI
- OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD Secure Video Recording
- */
+#define OPTEE_MSG_RSTMEM_RESERVED 0 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY 1 +#define OPTEE_MSG_RSTMEM_TRUSTED_UI 2 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD 3
/*
- Do a secure call with struct optee_msg_arg as argument
- The OPTEE_MSG_CMD_* below defines what goes in struct optee_msg_arg::cmd
@@ -337,6 +349,49 @@ struct optee_msg_arg {
- OPTEE_MSG_CMD_STOP_ASYNC_NOTIF informs secure world that from now is
- normal world unable to process asynchronous notifications. Typically
- used when the driver is shut down.
- OPTEE_MSG_CMD_LEND_RSTMEM lends restricted memory. The passed normal
- physical memory is restricted from normal world access. The memory
- should be unmapped prior to this call since it becomes inaccessible
- during the request.
- Parameters are passed as:
- [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT
- [in] param[0].u.value.a OPTEE_MSG_RSTMEM_* defined above
- [in] param[1].attr OPTEE_MSG_ATTR_TYPE_TMEM_INPUT
- [in] param[1].u.tmem.buf_ptr physical address
- [in] param[1].u.tmem.size size
- [in] param[1].u.tmem.shm_ref holds restricted memory reference
- OPTEE_MSG_CMD_RECLAIM_RSTMEM reclaims a previously lent restricted
- memory reference. The physical memory is accessible by the normal world
- after this function has returned and can be mapped again. The information
- is passed as:
- [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT
- [in] param[0].u.value.a holds restricted memory cookie
- OPTEE_MSG_CMD_GET_RSTMEM_CONFIG gets the configuration for a specific
- restricted memory use case. Parameters are passed as:
- [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INOUT
- [in] param[0].value.a OPTEE_MSG_RSTMEM_*
- [in] param[1].attr OPTEE_MSG_ATTR_TYPE_{R,F}MEM_OUTPUT
- [in] param[1].u.{r,f}mem Buffer or NULL
- [in] param[1].u.{r,f}mem.size Provided size of buffer or 0 for query
- output for the restricted use case:
- [out] param[0].value.a Minimal size of SDP memory
- [out] param[0].value.b Required alignment of size and start of
restricted memory
- [out] param[1].{r,f}mem.size Size of output data
- [out] param[1].{r,f}mem If non-NULL, contains an array of
uint16_t holding endpoints that
must be included when lending
memory for this use case
- OPTEE_MSG_CMD_ASSIGN_RSTMEM assigns use-case to restricted memory
- previously lent using the FFA_LEND framework ABI. Parameters are passed
- as:
- [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT
- [in] param[0].u.value.a holds restricted memory cookie
*/
- [in] param[0].u.value.b OPTEE_MSG_RSTMEM_* defined above
#define OPTEE_MSG_CMD_OPEN_SESSION 0 #define OPTEE_MSG_CMD_INVOKE_COMMAND 1 @@ -346,6 +401,10 @@ struct optee_msg_arg { #define OPTEE_MSG_CMD_UNREGISTER_SHM 5 #define OPTEE_MSG_CMD_DO_BOTTOM_HALF 6 #define OPTEE_MSG_CMD_STOP_ASYNC_NOTIF 7 +#define OPTEE_MSG_CMD_LEND_RSTMEM 8 +#define OPTEE_MSG_CMD_RECLAIM_RSTMEM 9 +#define OPTEE_MSG_CMD_GET_RSTMEM_CONFIG 10 +#define OPTEE_MSG_CMD_ASSIGN_RSTMEM 11 #define OPTEE_MSG_FUNCID_CALL_WITH_ARG 0x0004
#endif /* _OPTEE_MSG_H */ diff --git a/drivers/tee/optee/optee_smc.h b/drivers/tee/optee/optee_smc.h index 879426300821..abc379ce190c 100644 --- a/drivers/tee/optee/optee_smc.h +++ b/drivers/tee/optee/optee_smc.h @@ -264,7 +264,6 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM BIT(0) /* Secure world can communicate via previously unregistered shared memory */ #define OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM BIT(1)
/*
- Secure world supports commands "register/unregister shared memory",
- secure world accepts command buffers located in any parts of non-secure RAM
@@ -280,6 +279,10 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_RPC_ARG BIT(6) /* Secure world supports probing for RPMB device if needed */ #define OPTEE_SMC_SEC_CAP_RPMB_PROBE BIT(7) +/* Secure world supports Secure Data Path */ +#define OPTEE_SMC_SEC_CAP_SDP BIT(8) +/* Secure world supports dynamic restricted memory */ +#define OPTEE_SMC_SEC_CAP_DYNAMIC_RSTMEM BIT(9)
#define OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES 9 #define OPTEE_SMC_EXCHANGE_CAPABILITIES \ @@ -451,6 +454,72 @@ struct optee_smc_disable_shm_cache_result {
/* See OPTEE_SMC_CALL_WITH_REGD_ARG above */ #define OPTEE_SMC_FUNCID_CALL_WITH_REGD_ARG 19 +/*
- Get Secure Data Path memory config
- Returns the Secure Data Path memory config.
- Call register usage:
- a0 SMC Function ID, OPTEE_SMC_GET_SDP_CONFIG
- a2-6 Not used, must be zero
- a7 Hypervisor Client ID register
- Have config return register usage:
- a0 OPTEE_SMC_RETURN_OK
- a1 Physical address of start of SDP memory
- a2 Size of SDP memory
- a3 Not used
- a4-7 Preserved
- Not available register usage:
- a0 OPTEE_SMC_RETURN_ENOTAVAIL
- a1-3 Not used
- a4-7 Preserved
- */
+#define OPTEE_SMC_FUNCID_GET_SDP_CONFIG 20
Let's have a consistent ABI naming here. I think RSTMEM is more generic than SDP, so let's use the same naming convention as we use for the FF-A ABI.
Yes, I'll fix it.
Cheers, Jens
-Sumit
+#define OPTEE_SMC_GET_SDP_CONFIG \
OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_SDP_CONFIG)
+struct optee_smc_get_sdp_config_result {
unsigned long status;
unsigned long start;
unsigned long size;
unsigned long flags;
+};
+/*
- Get Secure Data Path dynamic memory config
- Returns the Secure Data Path dynamic memory config.
- Call register usage:
- a0 SMC Function ID, OPTEE_SMC_GET_DYN_SDP_CONFIG
- a2-6 Not used, must be zero
- a7 Hypervisor Client ID register
- Have config return register usage:
- a0 OPTEE_SMC_RETURN_OK
- a1 Minimal size of SDP memory
- a2 Required alignment of size and start of registered SDP memory
- a3 Not used
- a4-7 Preserved
- Not available register usage:
- a0 OPTEE_SMC_RETURN_ENOTAVAIL
- a1-3 Not used
- a4-7 Preserved
- */
+#define OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG 21 +#define OPTEE_SMC_GET_DYN_SDP_CONFIG \
OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG)
+struct optee_smc_get_dyn_sdp_config_result {
unsigned long status;
unsigned long size;
unsigned long align;
unsigned long flags;
+};
/*
- Resume from RPC (for example after processing a foreign interrupt)
-- 2.43.0
Implement DMA heap for restricted DMA-buf allocation in the TEE subsystem.
Restricted memory refers to memory buffers behind a hardware-enforced firewall. It is not accessible to the kernel under normal circumstances, but rather only to certain hardware IPs or CPUs executing in a higher or differently privileged mode than the kernel itself. This interface allows such restricted memory buffers to be allocated and managed via interaction with a TEE implementation.
The restricted memory is allocated for a specific use-case, like Secure Video Playback, Trusted UI, or Secure Video Recording, where certain hardware devices can access the memory.
The DMA-heaps are enabled explicitly by the TEE backend driver. The TEE backend driver needs to implement a restricted memory pool to manage the restricted memory.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org --- drivers/tee/Makefile | 1 + drivers/tee/tee_heap.c | 470 ++++++++++++++++++++++++++++++++++++++ drivers/tee/tee_private.h | 6 + include/linux/tee_core.h | 62 +++++ 4 files changed, 539 insertions(+) create mode 100644 drivers/tee/tee_heap.c
diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile index 5488cba30bd2..949a6a79fb06 100644 --- a/drivers/tee/Makefile +++ b/drivers/tee/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_TEE) += tee.o tee-objs += tee_core.o +tee-objs += tee_heap.o tee-objs += tee_shm.o tee-objs += tee_shm_pool.o obj-$(CONFIG_OPTEE) += optee/ diff --git a/drivers/tee/tee_heap.c b/drivers/tee/tee_heap.c new file mode 100644 index 000000000000..476ab2e27260 --- /dev/null +++ b/drivers/tee/tee_heap.c @@ -0,0 +1,470 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025, Linaro Limited + */ + +#include <linux/scatterlist.h> +#include <linux/dma-buf.h> +#include <linux/dma-heap.h> +#include <linux/genalloc.h> +#include <linux/module.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/tee_core.h> +#include <linux/xarray.h> + +#include "tee_private.h" + +struct tee_dma_heap { + struct dma_heap *heap; + enum tee_dma_heap_id id; + struct tee_rstmem_pool *pool; + struct tee_device *teedev; + /* Protects pool and teedev above */ + struct mutex mu; +}; + +struct tee_heap_buffer { + struct tee_rstmem_pool *pool; + struct tee_device *teedev; + size_t size; + size_t offs; + struct sg_table table; +}; + +struct tee_heap_attachment { + struct sg_table table; + struct device *dev; +}; + +struct tee_rstmem_static_pool { + struct tee_rstmem_pool pool; + struct gen_pool *gen_pool; + phys_addr_t pa_base; +}; + +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS) +static DEFINE_XARRAY_ALLOC(tee_dma_heap); + +static int copy_sg_table(struct sg_table *dst, struct sg_table *src) +{ + struct scatterlist *dst_sg; + struct scatterlist *src_sg; + int ret; + int i; + + ret = sg_alloc_table(dst, src->orig_nents, GFP_KERNEL); + if (ret) + return ret; + + dst_sg = dst->sgl; + for_each_sgtable_sg(src, src_sg, i) { + sg_set_page(dst_sg, sg_page(src_sg), src_sg->length, + src_sg->offset); + dst_sg = sg_next(dst_sg); + } + + return 0; +} + 
+static int tee_heap_attach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct tee_heap_buffer *buf = dmabuf->priv; + struct tee_heap_attachment *a; + int ret; + + a = kzalloc(sizeof(*a), GFP_KERNEL); + if (!a) + return -ENOMEM; + + ret = copy_sg_table(&a->table, &buf->table); + if (ret) { + kfree(a); + return ret; + } + + a->dev = attachment->dev; + attachment->priv = a; + + return 0; +} + +static void tee_heap_detach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct tee_heap_attachment *a = attachment->priv; + + sg_free_table(&a->table); + kfree(a); +} + +static struct sg_table * +tee_heap_map_dma_buf(struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + struct tee_heap_attachment *a = attachment->priv; + int ret; + + ret = dma_map_sgtable(attachment->dev, &a->table, direction, + DMA_ATTR_SKIP_CPU_SYNC); + if (ret) + return ERR_PTR(ret); + + return &a->table; +} + +static void tee_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, + struct sg_table *table, + enum dma_data_direction direction) +{ + struct tee_heap_attachment *a = attachment->priv; + + WARN_ON(&a->table != table); + + dma_unmap_sgtable(attachment->dev, table, direction, + DMA_ATTR_SKIP_CPU_SYNC); +} + +static void tee_heap_buf_free(struct dma_buf *dmabuf) +{ + struct tee_heap_buffer *buf = dmabuf->priv; + struct tee_device *teedev = buf->teedev; + + buf->pool->ops->free(buf->pool, &buf->table); + tee_device_put(teedev); +} + +static const struct dma_buf_ops tee_heap_buf_ops = { + .attach = tee_heap_attach, + .detach = tee_heap_detach, + .map_dma_buf = tee_heap_map_dma_buf, + .unmap_dma_buf = tee_heap_unmap_dma_buf, + .release = tee_heap_buf_free, +}; + +static struct dma_buf *tee_dma_heap_alloc(struct dma_heap *heap, + unsigned long len, u32 fd_flags, + u64 heap_flags) +{ + struct tee_dma_heap *h = dma_heap_get_drvdata(heap); + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + struct tee_device *teedev = NULL; + struct 
tee_heap_buffer *buf; + struct tee_rstmem_pool *pool; + struct dma_buf *dmabuf; + int rc; + + mutex_lock(&h->mu); + if (tee_device_get(h->teedev)) { + teedev = h->teedev; + pool = h->pool; + } + mutex_unlock(&h->mu); + + if (!teedev) + return ERR_PTR(-EINVAL); + + buf = kzalloc(sizeof(*buf), GFP_KERNEL); + if (!buf) { + dmabuf = ERR_PTR(-ENOMEM); + goto err; + } + buf->size = len; + buf->pool = pool; + buf->teedev = teedev; + + rc = pool->ops->alloc(pool, &buf->table, len, &buf->offs); + if (rc) { + dmabuf = ERR_PTR(rc); + goto err_kfree; + } + + exp_info.ops = &tee_heap_buf_ops; + exp_info.size = len; + exp_info.priv = buf; + exp_info.flags = fd_flags; + dmabuf = dma_buf_export(&exp_info); + if (IS_ERR(dmabuf)) + goto err_rstmem_free; + + return dmabuf; + +err_rstmem_free: + pool->ops->free(pool, &buf->table); +err_kfree: + kfree(buf); +err: + tee_device_put(h->teedev); + return dmabuf; +} + +static const struct dma_heap_ops tee_dma_heap_ops = { + .allocate = tee_dma_heap_alloc, +}; + +static const char *heap_id_2_name(enum tee_dma_heap_id id) +{ + switch (id) { + case TEE_DMA_HEAP_SECURE_VIDEO_PLAY: + return "restricted,secure-video"; + case TEE_DMA_HEAP_TRUSTED_UI: + return "restricted,trusted-ui"; + case TEE_DMA_HEAP_SECURE_VIDEO_RECORD: + return "restricted,secure-video-record"; + default: + return NULL; + } +} + +static int alloc_dma_heap(struct tee_device *teedev, enum tee_dma_heap_id id, + struct tee_rstmem_pool *pool) +{ + struct dma_heap_export_info exp_info = { + .ops = &tee_dma_heap_ops, + .name = heap_id_2_name(id), + }; + struct tee_dma_heap *h; + int rc; + + if (!exp_info.name) + return -EINVAL; + + if (xa_reserve(&tee_dma_heap, id, GFP_KERNEL)) { + if (!xa_load(&tee_dma_heap, id)) + return -EEXIST; + return -ENOMEM; + } + + h = kzalloc(sizeof(*h), GFP_KERNEL); + if (!h) + return -ENOMEM; + h->id = id; + h->teedev = teedev; + h->pool = pool; + mutex_init(&h->mu); + + exp_info.priv = h; + h->heap = dma_heap_add(&exp_info); + if (IS_ERR(h->heap)) { + 
rc = PTR_ERR(h->heap); + kfree(h); + + return rc; + } + + /* "can't fail" due to the call to xa_reserve() above */ + return WARN(xa_store(&tee_dma_heap, id, h, GFP_KERNEL), + "xa_store() failed"); +} + +int tee_device_register_dma_heap(struct tee_device *teedev, + enum tee_dma_heap_id id, + struct tee_rstmem_pool *pool) +{ + struct tee_dma_heap *h; + int rc; + + h = xa_load(&tee_dma_heap, id); + if (h) { + mutex_lock(&h->mu); + if (h->teedev) { + rc = -EBUSY; + } else { + h->teedev = teedev; + h->pool = pool; + rc = 0; + } + mutex_unlock(&h->mu); + } else { + rc = alloc_dma_heap(teedev, id, pool); + } + + if (rc) + dev_err(&teedev->dev, "can't register DMA heap id %d (%s)\n", + id, heap_id_2_name(id)); + + return rc; +} + +void tee_device_unregister_all_dma_heaps(struct tee_device *teedev) +{ + struct tee_rstmem_pool *pool; + struct tee_dma_heap *h; + u_long i; + + xa_for_each(&tee_dma_heap, i, h) { + if (h) { + pool = NULL; + mutex_lock(&h->mu); + if (h->teedev == teedev) { + pool = h->pool; + h->teedev = NULL; + h->pool = NULL; + } + mutex_unlock(&h->mu); + if (pool) + pool->ops->destroy_pool(pool); + } + } +} +EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps); + +int tee_heap_update_from_dma_buf(struct tee_device *teedev, + struct dma_buf *dmabuf, size_t *offset, + struct tee_shm *shm, + struct tee_shm **parent_shm) +{ + struct tee_heap_buffer *buf; + int rc; + + /* The DMA-buf must be from our heap */ + if (dmabuf->ops != &tee_heap_buf_ops) + return -EINVAL; + + buf = dmabuf->priv; + /* The buffer must be from the same teedev */ + if (buf->teedev != teedev) + return -EINVAL; + + shm->size = buf->size; + + rc = buf->pool->ops->update_shm(buf->pool, &buf->table, buf->offs, shm, + parent_shm); + if (!rc && *parent_shm) + *offset = buf->offs; + + return rc; +} +#else +int tee_device_register_dma_heap(struct tee_device *teedev __always_unused, + enum tee_dma_heap_id id __always_unused, + struct tee_rstmem_pool *pool __always_unused) +{ + return -EINVAL; +} 
+EXPORT_SYMBOL_GPL(tee_device_register_dma_heap); + +void +tee_device_unregister_all_dma_heaps(struct tee_device *teedev __always_unused) +{ +} +EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps); + +int tee_heap_update_from_dma_buf(struct tee_device *teedev __always_unused, + struct dma_buf *dmabuf __always_unused, + size_t *offset __always_unused, + struct tee_shm *shm __always_unused, + struct tee_shm **parent_shm __always_unused) +{ + return -EINVAL; +} +#endif + +static struct tee_rstmem_static_pool * +to_rstmem_static_pool(struct tee_rstmem_pool *pool) +{ + return container_of(pool, struct tee_rstmem_static_pool, pool); +} + +static int rstmem_pool_op_static_alloc(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t size, + size_t *offs) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + phys_addr_t pa; + int ret; + + pa = gen_pool_alloc(stp->gen_pool, size); + if (!pa) + return -ENOMEM; + + ret = sg_alloc_table(sgt, 1, GFP_KERNEL); + if (ret) { + gen_pool_free(stp->gen_pool, pa, size); + return ret; + } + + sg_set_page(sgt->sgl, phys_to_page(pa), size, 0); + *offs = pa - stp->pa_base; + + return 0; +} + +static void rstmem_pool_op_static_free(struct tee_rstmem_pool *pool, + struct sg_table *sgt) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + struct scatterlist *sg; + int i; + + for_each_sgtable_sg(sgt, sg, i) + gen_pool_free(stp->gen_pool, sg_phys(sg), sg->length); + sg_free_table(sgt); +} + +static int rstmem_pool_op_static_update_shm(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t offs, + struct tee_shm *shm, + struct tee_shm **parent_shm) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + + shm->paddr = stp->pa_base + offs; + *parent_shm = NULL; + + return 0; +} + +static void rstmem_pool_op_static_destroy_pool(struct tee_rstmem_pool *pool) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + + gen_pool_destroy(stp->gen_pool); + kfree(stp); 
+} + +static struct tee_rstmem_pool_ops rstmem_pool_ops_static = { + .alloc = rstmem_pool_op_static_alloc, + .free = rstmem_pool_op_static_free, + .update_shm = rstmem_pool_op_static_update_shm, + .destroy_pool = rstmem_pool_op_static_destroy_pool, +}; + +struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr, + size_t size) +{ + const size_t page_mask = PAGE_SIZE - 1; + struct tee_rstmem_static_pool *stp; + int rc; + + /* Check it's page aligned */ + if ((paddr | size) & page_mask) + return ERR_PTR(-EINVAL); + + stp = kzalloc(sizeof(*stp), GFP_KERNEL); + if (!stp) + return ERR_PTR(-ENOMEM); + + stp->gen_pool = gen_pool_create(PAGE_SHIFT, -1); + if (!stp->gen_pool) { + rc = -ENOMEM; + goto err_free; + } + + rc = gen_pool_add(stp->gen_pool, paddr, size, -1); + if (rc) + goto err_free_pool; + + stp->pool.ops = &rstmem_pool_ops_static; + stp->pa_base = paddr; + return &stp->pool; + +err_free_pool: + gen_pool_destroy(stp->gen_pool); +err_free: + kfree(stp); + + return ERR_PTR(rc); +} +EXPORT_SYMBOL_GPL(tee_rstmem_static_pool_alloc); diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h index 9bc50605227c..6c6ff5d5eed2 100644 --- a/drivers/tee/tee_private.h +++ b/drivers/tee/tee_private.h @@ -8,6 +8,7 @@ #include <linux/cdev.h> #include <linux/completion.h> #include <linux/device.h> +#include <linux/dma-buf.h> #include <linux/kref.h> #include <linux/mutex.h> #include <linux/types.h> @@ -24,4 +25,9 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, unsigned long addr, size_t length);
+int tee_heap_update_from_dma_buf(struct tee_device *teedev, + struct dma_buf *dmabuf, size_t *offset, + struct tee_shm *shm, + struct tee_shm **parent_shm); + #endif /*TEE_PRIVATE_H*/ diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index a38494d6b5f4..16ef078247ae 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -8,9 +8,11 @@
#include <linux/cdev.h> #include <linux/device.h> +#include <linux/dma-buf.h> #include <linux/idr.h> #include <linux/kref.h> #include <linux/list.h> +#include <linux/scatterlist.h> #include <linux/tee.h> #include <linux/tee_drv.h> #include <linux/types.h> @@ -30,6 +32,12 @@ #define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32
+enum tee_dma_heap_id {
+	TEE_DMA_HEAP_SECURE_VIDEO_PLAY = 1,
+	TEE_DMA_HEAP_TRUSTED_UI,
+	TEE_DMA_HEAP_SECURE_VIDEO_RECORD,
+};
+
 /**
  * struct tee_device - TEE Device representation
  * @name:	name of device
@@ -116,6 +124,33 @@ struct tee_desc {
 	u32 flags;
 };
+/**
+ * struct tee_rstmem_pool - restricted memory pool
+ * @ops:	operations
+ *
+ * This is an abstract interface where this struct is expected to be
+ * embedded in another struct specific to the implementation.
+ */
+struct tee_rstmem_pool {
+	const struct tee_rstmem_pool_ops *ops;
+};
+
+/**
+ * struct tee_rstmem_pool_ops - restricted memory pool operations
+ * @alloc:	called when allocating restricted memory
+ * @free:	called when freeing restricted memory
+ * @destroy_pool: called when destroying the pool
+ */
+struct tee_rstmem_pool_ops {
+	int (*alloc)(struct tee_rstmem_pool *pool, struct sg_table *sgt,
+		     size_t size, size_t *offs);
+	void (*free)(struct tee_rstmem_pool *pool, struct sg_table *sgt);
+	int (*update_shm)(struct tee_rstmem_pool *pool, struct sg_table *sgt,
+			  size_t offs, struct tee_shm *shm,
+			  struct tee_shm **parent_shm);
+	void (*destroy_pool)(struct tee_rstmem_pool *pool);
+};
+
 /**
  * tee_device_alloc() - Allocate a new struct tee_device instance
  * @teedesc:	Descriptor for this driver
@@ -154,6 +189,11 @@ int tee_device_register(struct tee_device *teedev);
  */
 void tee_device_unregister(struct tee_device *teedev);
+int tee_device_register_dma_heap(struct tee_device *teedev,
+				 enum tee_dma_heap_id id,
+				 struct tee_rstmem_pool *pool);
+void tee_device_unregister_all_dma_heaps(struct tee_device *teedev);
+
 /**
  * tee_device_set_dev_groups() - Set device attribute groups
  * @teedev:	Device to register
@@ -229,6 +269,28 @@ static inline void tee_shm_pool_free(struct tee_shm_pool *pool)
 	pool->ops->destroy_pool(pool);
 }
+/**
+ * tee_rstmem_static_pool_alloc() - Create a restricted memory manager
+ * @paddr:	Physical address of start of pool
+ * @size:	Size in bytes of the pool
+ *
+ * @returns pointer to a 'struct tee_rstmem_pool' or an ERR_PTR on failure.
+ */
+struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr,
+						     size_t size);
+
+/**
+ * tee_rstmem_pool_free() - Free a restricted memory pool
+ * @pool:	The restricted memory pool to free
+ *
+ * There must be no remaining restricted memory allocated from this pool
+ * when this function is called.
+ */
+static inline void tee_rstmem_pool_free(struct tee_rstmem_pool *pool)
+{
+	pool->ops->destroy_pool(pool);
+}
+
 /**
  * tee_get_drvdata() - Return driver_data pointer
  * @returns the driver_data pointer supplied to tee_register().
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:11PM +0100, Jens Wiklander wrote:
Implement DMA heap for restricted DMA-buf allocation in the TEE subsystem.
Restricted memory refers to memory buffers behind a hardware-enforced firewall. It is not accessible to the kernel under normal circumstances, but only to certain hardware IPs or to CPUs executing in a higher or differently privileged mode than the kernel itself. This interface allows such restricted memory buffers to be allocated and managed via interaction with a TEE implementation.
The restricted memory is allocated for a specific use-case, like Secure Video Playback, Trusted UI, or Secure Video Recording where certain hardware devices can access the memory.
The DMA-heaps are enabled explicitly by the TEE backend driver. The TEE backend driver needs to implement a restricted memory pool to manage the restricted memory.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
drivers/tee/Makefile | 1 + drivers/tee/tee_heap.c | 470 ++++++++++++++++++++++++++++++++++++++ drivers/tee/tee_private.h | 6 + include/linux/tee_core.h | 62 +++++ 4 files changed, 539 insertions(+) create mode 100644 drivers/tee/tee_heap.c
diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile index 5488cba30bd2..949a6a79fb06 100644 --- a/drivers/tee/Makefile +++ b/drivers/tee/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_TEE) += tee.o tee-objs += tee_core.o +tee-objs += tee_heap.o tee-objs += tee_shm.o tee-objs += tee_shm_pool.o obj-$(CONFIG_OPTEE) += optee/ diff --git a/drivers/tee/tee_heap.c b/drivers/tee/tee_heap.c new file mode 100644 index 000000000000..476ab2e27260 --- /dev/null +++ b/drivers/tee/tee_heap.c @@ -0,0 +1,470 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
- Copyright (c) 2025, Linaro Limited
- */
+#include <linux/scatterlist.h> +#include <linux/dma-buf.h> +#include <linux/dma-heap.h> +#include <linux/genalloc.h> +#include <linux/module.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/tee_core.h> +#include <linux/xarray.h>
Let's try to follow alphabetical order here.
+#include "tee_private.h"
+struct tee_dma_heap {
+	struct dma_heap *heap;
+	enum tee_dma_heap_id id;
+	struct tee_rstmem_pool *pool;
+	struct tee_device *teedev;
+	/* Protects pool and teedev above */
+	struct mutex mu;
+};
+
+struct tee_heap_buffer {
+	struct tee_rstmem_pool *pool;
+	struct tee_device *teedev;
+	size_t size;
+	size_t offs;
+	struct sg_table table;
+};
+
+struct tee_heap_attachment {
+	struct sg_table table;
+	struct device *dev;
+};
+
+struct tee_rstmem_static_pool {
+	struct tee_rstmem_pool pool;
+	struct gen_pool *gen_pool;
+	phys_addr_t pa_base;
+};
+#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
Can this dependency rather be better managed via Kconfig?
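For reference, one hypothetical Kconfig shape that would encode the same constraint (TEE built-in, DMABUF_HEAPS enabled) as an internal symbol instead of the inline preprocessor test. This is a sketch, not taken from the patch set, and the symbol name is invented:

```kconfig
# Hypothetical: a hidden symbol the heap code could be gated on
config TEE_DMABUF_HEAPS
	bool
	default y
	depends on TEE=y && DMABUF_HEAPS
```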
+static DEFINE_XARRAY_ALLOC(tee_dma_heap);
+static int copy_sg_table(struct sg_table *dst, struct sg_table *src)
+{
+	struct scatterlist *dst_sg;
+	struct scatterlist *src_sg;
+	int ret;
+	int i;
+
+	ret = sg_alloc_table(dst, src->orig_nents, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	dst_sg = dst->sgl;
+	for_each_sgtable_sg(src, src_sg, i) {
+		sg_set_page(dst_sg, sg_page(src_sg), src_sg->length,
+			    src_sg->offset);
+		dst_sg = sg_next(dst_sg);
+	}
+
+	return 0;
+}
+static int tee_heap_attach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attachment)
+{
- struct tee_heap_buffer *buf = dmabuf->priv;
- struct tee_heap_attachment *a;
- int ret;
- a = kzalloc(sizeof(*a), GFP_KERNEL);
- if (!a)
return -ENOMEM;
- ret = copy_sg_table(&a->table, &buf->table);
- if (ret) {
kfree(a);
return ret;
- }
- a->dev = attachment->dev;
- attachment->priv = a;
- return 0;
+}
+static void tee_heap_detach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attachment)
+{
- struct tee_heap_attachment *a = attachment->priv;
- sg_free_table(&a->table);
- kfree(a);
+}
+static struct sg_table * +tee_heap_map_dma_buf(struct dma_buf_attachment *attachment,
enum dma_data_direction direction)
+{
- struct tee_heap_attachment *a = attachment->priv;
- int ret;
- ret = dma_map_sgtable(attachment->dev, &a->table, direction,
DMA_ATTR_SKIP_CPU_SYNC);
- if (ret)
return ERR_PTR(ret);
- return &a->table;
+}
+static void tee_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
struct sg_table *table,
enum dma_data_direction direction)
+{
- struct tee_heap_attachment *a = attachment->priv;
- WARN_ON(&a->table != table);
- dma_unmap_sgtable(attachment->dev, table, direction,
DMA_ATTR_SKIP_CPU_SYNC);
+}
+static void tee_heap_buf_free(struct dma_buf *dmabuf) +{
- struct tee_heap_buffer *buf = dmabuf->priv;
- struct tee_device *teedev = buf->teedev;
- buf->pool->ops->free(buf->pool, &buf->table);
- tee_device_put(teedev);
+}
+static const struct dma_buf_ops tee_heap_buf_ops = {
- .attach = tee_heap_attach,
- .detach = tee_heap_detach,
- .map_dma_buf = tee_heap_map_dma_buf,
- .unmap_dma_buf = tee_heap_unmap_dma_buf,
- .release = tee_heap_buf_free,
+};
+static struct dma_buf *tee_dma_heap_alloc(struct dma_heap *heap,
unsigned long len, u32 fd_flags,
u64 heap_flags)
+{
- struct tee_dma_heap *h = dma_heap_get_drvdata(heap);
- DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
- struct tee_device *teedev = NULL;
- struct tee_heap_buffer *buf;
- struct tee_rstmem_pool *pool;
- struct dma_buf *dmabuf;
- int rc;
- mutex_lock(&h->mu);
- if (tee_device_get(h->teedev)) {
teedev = h->teedev;
pool = h->pool;
- }
- mutex_unlock(&h->mu);
- if (!teedev)
return ERR_PTR(-EINVAL);
- buf = kzalloc(sizeof(*buf), GFP_KERNEL);
- if (!buf) {
dmabuf = ERR_PTR(-ENOMEM);
goto err;
- }
- buf->size = len;
- buf->pool = pool;
- buf->teedev = teedev;
- rc = pool->ops->alloc(pool, &buf->table, len, &buf->offs);
- if (rc) {
dmabuf = ERR_PTR(rc);
goto err_kfree;
- }
- exp_info.ops = &tee_heap_buf_ops;
- exp_info.size = len;
- exp_info.priv = buf;
- exp_info.flags = fd_flags;
- dmabuf = dma_buf_export(&exp_info);
- if (IS_ERR(dmabuf))
goto err_rstmem_free;
- return dmabuf;
+err_rstmem_free:
- pool->ops->free(pool, &buf->table);
+err_kfree:
- kfree(buf);
+err:
- tee_device_put(h->teedev);
- return dmabuf;
+}
+static const struct dma_heap_ops tee_dma_heap_ops = {
- .allocate = tee_dma_heap_alloc,
+};
+static const char *heap_id_2_name(enum tee_dma_heap_id id)
+{
+	switch (id) {
+	case TEE_DMA_HEAP_SECURE_VIDEO_PLAY:
+		return "restricted,secure-video";
+	case TEE_DMA_HEAP_TRUSTED_UI:
+		return "restricted,trusted-ui";
+	case TEE_DMA_HEAP_SECURE_VIDEO_RECORD:
+		return "restricted,secure-video-record";
+	default:
+		return NULL;
+	}
+}
+static int alloc_dma_heap(struct tee_device *teedev, enum tee_dma_heap_id id,
struct tee_rstmem_pool *pool)
+{
- struct dma_heap_export_info exp_info = {
.ops = &tee_dma_heap_ops,
.name = heap_id_2_name(id),
- };
- struct tee_dma_heap *h;
- int rc;
- if (!exp_info.name)
return -EINVAL;
- if (xa_reserve(&tee_dma_heap, id, GFP_KERNEL)) {
if (!xa_load(&tee_dma_heap, id))
return -EEXIST;
return -ENOMEM;
- }
- h = kzalloc(sizeof(*h), GFP_KERNEL);
- if (!h)
return -ENOMEM;
- h->id = id;
- h->teedev = teedev;
- h->pool = pool;
- mutex_init(&h->mu);
- exp_info.priv = h;
- h->heap = dma_heap_add(&exp_info);
- if (IS_ERR(h->heap)) {
rc = PTR_ERR(h->heap);
kfree(h);
return rc;
- }
- /* "can't fail" due to the call to xa_reserve() above */
- return WARN(xa_store(&tee_dma_heap, id, h, GFP_KERNEL),
"xa_store() failed");
+}
+int tee_device_register_dma_heap(struct tee_device *teedev,
+				 enum tee_dma_heap_id id,
+				 struct tee_rstmem_pool *pool)
+{
+	struct tee_dma_heap *h;
+	int rc;
+
+	h = xa_load(&tee_dma_heap, id);
+	if (h) {
+		mutex_lock(&h->mu);
+		if (h->teedev) {
+			rc = -EBUSY;
+		} else {
+			h->teedev = teedev;
+			h->pool = pool;
+			rc = 0;
+		}
+		mutex_unlock(&h->mu);
+	} else {
+		rc = alloc_dma_heap(teedev, id, pool);
+	}
+
+	if (rc)
+		dev_err(&teedev->dev, "can't register DMA heap id %d (%s)\n",
+			id, heap_id_2_name(id));
+
+	return rc;
+}
+void tee_device_unregister_all_dma_heaps(struct tee_device *teedev)
+{
+	struct tee_rstmem_pool *pool;
+	struct tee_dma_heap *h;
+	u_long i;
+
+	xa_for_each(&tee_dma_heap, i, h) {
+		if (h) {
+			pool = NULL;
+			mutex_lock(&h->mu);
+			if (h->teedev == teedev) {
+				pool = h->pool;
+				h->teedev = NULL;
+				h->pool = NULL;
+			}
+			mutex_unlock(&h->mu);
+			if (pool)
+				pool->ops->destroy_pool(pool);
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps);
+int tee_heap_update_from_dma_buf(struct tee_device *teedev,
struct dma_buf *dmabuf, size_t *offset,
struct tee_shm *shm,
struct tee_shm **parent_shm)
+{
- struct tee_heap_buffer *buf;
- int rc;
- /* The DMA-buf must be from our heap */
- if (dmabuf->ops != &tee_heap_buf_ops)
return -EINVAL;
- buf = dmabuf->priv;
- /* The buffer must be from the same teedev */
- if (buf->teedev != teedev)
return -EINVAL;
- shm->size = buf->size;
- rc = buf->pool->ops->update_shm(buf->pool, &buf->table, buf->offs, shm,
parent_shm);
- if (!rc && *parent_shm)
*offset = buf->offs;
- return rc;
+} +#else +int tee_device_register_dma_heap(struct tee_device *teedev __always_unused,
enum tee_dma_heap_id id __always_unused,
struct tee_rstmem_pool *pool __always_unused)
+{
- return -EINVAL;
+} +EXPORT_SYMBOL_GPL(tee_device_register_dma_heap);
+void +tee_device_unregister_all_dma_heaps(struct tee_device *teedev __always_unused) +{ +} +EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps);
+int tee_heap_update_from_dma_buf(struct tee_device *teedev __always_unused,
struct dma_buf *dmabuf __always_unused,
size_t *offset __always_unused,
struct tee_shm *shm __always_unused,
struct tee_shm **parent_shm __always_unused)
+{
- return -EINVAL;
+} +#endif
+static struct tee_rstmem_static_pool * +to_rstmem_static_pool(struct tee_rstmem_pool *pool) +{
- return container_of(pool, struct tee_rstmem_static_pool, pool);
+}
+static int rstmem_pool_op_static_alloc(struct tee_rstmem_pool *pool,
+				       struct sg_table *sgt, size_t size,
+				       size_t *offs)
+{
+	struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
+	phys_addr_t pa;
+	int ret;
+
+	pa = gen_pool_alloc(stp->gen_pool, size);
+	if (!pa)
+		return -ENOMEM;
+
+	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+	if (ret) {
+		gen_pool_free(stp->gen_pool, pa, size);
+		return ret;
+	}
+
+	sg_set_page(sgt->sgl, phys_to_page(pa), size, 0);
+	*offs = pa - stp->pa_base;
+
+	return 0;
+}
+static void rstmem_pool_op_static_free(struct tee_rstmem_pool *pool,
struct sg_table *sgt)
+{
- struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
- struct scatterlist *sg;
- int i;
- for_each_sgtable_sg(sgt, sg, i)
gen_pool_free(stp->gen_pool, sg_phys(sg), sg->length);
- sg_free_table(sgt);
+}
+static int rstmem_pool_op_static_update_shm(struct tee_rstmem_pool *pool,
struct sg_table *sgt, size_t offs,
struct tee_shm *shm,
struct tee_shm **parent_shm)
+{
- struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
- shm->paddr = stp->pa_base + offs;
- *parent_shm = NULL;
- return 0;
+}
+static void rstmem_pool_op_static_destroy_pool(struct tee_rstmem_pool *pool) +{
- struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
- gen_pool_destroy(stp->gen_pool);
- kfree(stp);
+}
+static struct tee_rstmem_pool_ops rstmem_pool_ops_static = {
- .alloc = rstmem_pool_op_static_alloc,
- .free = rstmem_pool_op_static_free,
- .update_shm = rstmem_pool_op_static_update_shm,
- .destroy_pool = rstmem_pool_op_static_destroy_pool,
+};
+struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr,
size_t size)
+{
- const size_t page_mask = PAGE_SIZE - 1;
- struct tee_rstmem_static_pool *stp;
- int rc;
- /* Check it's page aligned */
- if ((paddr | size) & page_mask)
return ERR_PTR(-EINVAL);
- stp = kzalloc(sizeof(*stp), GFP_KERNEL);
- if (!stp)
return ERR_PTR(-ENOMEM);
- stp->gen_pool = gen_pool_create(PAGE_SHIFT, -1);
- if (!stp->gen_pool) {
rc = -ENOMEM;
goto err_free;
- }
- rc = gen_pool_add(stp->gen_pool, paddr, size, -1);
- if (rc)
goto err_free_pool;
- stp->pool.ops = &rstmem_pool_ops_static;
- stp->pa_base = paddr;
- return &stp->pool;
+err_free_pool:
- gen_pool_destroy(stp->gen_pool);
+err_free:
- kfree(stp);
- return ERR_PTR(rc);
+} +EXPORT_SYMBOL_GPL(tee_rstmem_static_pool_alloc); diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h index 9bc50605227c..6c6ff5d5eed2 100644 --- a/drivers/tee/tee_private.h +++ b/drivers/tee/tee_private.h @@ -8,6 +8,7 @@ #include <linux/cdev.h> #include <linux/completion.h> #include <linux/device.h> +#include <linux/dma-buf.h> #include <linux/kref.h> #include <linux/mutex.h> #include <linux/types.h> @@ -24,4 +25,9 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, unsigned long addr, size_t length); +int tee_heap_update_from_dma_buf(struct tee_device *teedev,
struct dma_buf *dmabuf, size_t *offset,
struct tee_shm *shm,
struct tee_shm **parent_shm);
#endif /*TEE_PRIVATE_H*/ diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index a38494d6b5f4..16ef078247ae 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -8,9 +8,11 @@ #include <linux/cdev.h> #include <linux/device.h> +#include <linux/dma-buf.h> #include <linux/idr.h> #include <linux/kref.h> #include <linux/list.h> +#include <linux/scatterlist.h> #include <linux/tee.h> #include <linux/tee_drv.h> #include <linux/types.h> @@ -30,6 +32,12 @@ #define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 +enum tee_dma_heap_id {
- TEE_DMA_HEAP_SECURE_VIDEO_PLAY = 1,
- TEE_DMA_HEAP_TRUSTED_UI,
- TEE_DMA_HEAP_SECURE_VIDEO_RECORD,
+};
/**
- struct tee_device - TEE Device representation
- @name: name of device
@@ -116,6 +124,33 @@ struct tee_desc { u32 flags; }; +/**
- struct tee_rstmem_pool - restricted memory pool
- @ops: operations
- This is an abstract interface where this struct is expected to be
- embedded in another struct specific to the implementation.
- */
+struct tee_rstmem_pool {
- const struct tee_rstmem_pool_ops *ops;
+};
+/**
- struct tee_rstmem_pool_ops - restricted memory pool operations
- @alloc: called when allocating restricted memory
- @free: called when freeing restricted memory
- @destroy_pool: called when destroying the pool
- */
+struct tee_rstmem_pool_ops {
- int (*alloc)(struct tee_rstmem_pool *pool, struct sg_table *sgt,
size_t size, size_t *offs);
- void (*free)(struct tee_rstmem_pool *pool, struct sg_table *sgt);
- int (*update_shm)(struct tee_rstmem_pool *pool, struct sg_table *sgt,
size_t offs, struct tee_shm *shm,
struct tee_shm **parent_shm);
This API isn't described in the kdoc comment above. Can you describe the role of this API and when it's needed?
-Sumit
- void (*destroy_pool)(struct tee_rstmem_pool *pool);
+};
/**
- tee_device_alloc() - Allocate a new struct tee_device instance
- @teedesc: Descriptor for this driver
@@ -154,6 +189,11 @@ int tee_device_register(struct tee_device *teedev); */ void tee_device_unregister(struct tee_device *teedev); +int tee_device_register_dma_heap(struct tee_device *teedev,
enum tee_dma_heap_id id,
struct tee_rstmem_pool *pool);
+void tee_device_unregister_all_dma_heaps(struct tee_device *teedev);
/**
- tee_device_set_dev_groups() - Set device attribute groups
- @teedev: Device to register
@@ -229,6 +269,28 @@ static inline void tee_shm_pool_free(struct tee_shm_pool *pool) pool->ops->destroy_pool(pool); } +/**
- tee_rstmem_static_pool_alloc() - Create a restricted memory manager
- @paddr: Physical address of start of pool
- @size: Size in bytes of the pool
- @returns pointer to a 'struct tee_rstmem_pool' or an ERR_PTR on failure.
- */
+struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr,
size_t size);
+/**
- tee_rstmem_pool_free() - Free a restricted memory pool
- @pool: The restricted memory pool to free
- There must be no remaining restricted memory allocated from this pool
- when this function is called.
- */
+static inline void tee_rstmem_pool_free(struct tee_rstmem_pool *pool) +{
- pool->ops->destroy_pool(pool);
+}
/**
- tee_get_drvdata() - Return driver_data pointer
- @returns the driver_data pointer supplied to tee_register().
-- 2.43.0
Hi Sumit,
On Tue, Mar 25, 2025 at 7:33 AM Sumit Garg <sumit.garg@kernel.org> wrote:
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:11PM +0100, Jens Wiklander wrote:
Implement DMA heap for restricted DMA-buf allocation in the TEE subsystem.
Restricted memory refers to memory buffers behind a hardware-enforced firewall. It is not accessible to the kernel under normal circumstances, but only to certain hardware IPs or to CPUs executing in a higher or differently privileged mode than the kernel itself. This interface allows such restricted memory buffers to be allocated and managed via interaction with a TEE implementation.
The restricted memory is allocated for a specific use-case, like Secure Video Playback, Trusted UI, or Secure Video Recording where certain hardware devices can access the memory.
The DMA-heaps are enabled explicitly by the TEE backend driver. The TEE backend driver needs to implement a restricted memory pool to manage the restricted memory.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
drivers/tee/Makefile | 1 + drivers/tee/tee_heap.c | 470 ++++++++++++++++++++++++++++++++++++++ drivers/tee/tee_private.h | 6 + include/linux/tee_core.h | 62 +++++ 4 files changed, 539 insertions(+) create mode 100644 drivers/tee/tee_heap.c
diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile index 5488cba30bd2..949a6a79fb06 100644 --- a/drivers/tee/Makefile +++ b/drivers/tee/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_TEE) += tee.o tee-objs += tee_core.o +tee-objs += tee_heap.o tee-objs += tee_shm.o tee-objs += tee_shm_pool.o obj-$(CONFIG_OPTEE) += optee/ diff --git a/drivers/tee/tee_heap.c b/drivers/tee/tee_heap.c new file mode 100644 index 000000000000..476ab2e27260 --- /dev/null +++ b/drivers/tee/tee_heap.c @@ -0,0 +1,470 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
- Copyright (c) 2025, Linaro Limited
- */
+#include <linux/scatterlist.h> +#include <linux/dma-buf.h> +#include <linux/dma-heap.h> +#include <linux/genalloc.h> +#include <linux/module.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> +#include <linux/tee_core.h> +#include <linux/xarray.h>
Let's try to follow alphabetical order here.
Sure
+#include "tee_private.h"
+struct tee_dma_heap {
struct dma_heap *heap;
enum tee_dma_heap_id id;
struct tee_rstmem_pool *pool;
struct tee_device *teedev;
/* Protects pool and teedev above */
struct mutex mu;
+};
+struct tee_heap_buffer {
struct tee_rstmem_pool *pool;
struct tee_device *teedev;
size_t size;
size_t offs;
struct sg_table table;
+};
+struct tee_heap_attachment {
struct sg_table table;
struct device *dev;
+};
+struct tee_rstmem_static_pool {
struct tee_rstmem_pool pool;
struct gen_pool *gen_pool;
phys_addr_t pa_base;
+};
+#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
Can this dependency rather be better managed via Kconfig?
This was the easiest yet somewhat flexible solution I could find. If you have something better, let's use that instead.
+static DEFINE_XARRAY_ALLOC(tee_dma_heap);
+static int copy_sg_table(struct sg_table *dst, struct sg_table *src) +{
struct scatterlist *dst_sg;
struct scatterlist *src_sg;
int ret;
int i;
ret = sg_alloc_table(dst, src->orig_nents, GFP_KERNEL);
if (ret)
return ret;
dst_sg = dst->sgl;
for_each_sgtable_sg(src, src_sg, i) {
sg_set_page(dst_sg, sg_page(src_sg), src_sg->length,
src_sg->offset);
dst_sg = sg_next(dst_sg);
}
return 0;
+}
+static int tee_heap_attach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attachment)
+{
struct tee_heap_buffer *buf = dmabuf->priv;
struct tee_heap_attachment *a;
int ret;
a = kzalloc(sizeof(*a), GFP_KERNEL);
if (!a)
return -ENOMEM;
ret = copy_sg_table(&a->table, &buf->table);
if (ret) {
kfree(a);
return ret;
}
a->dev = attachment->dev;
attachment->priv = a;
return 0;
+}
+static void tee_heap_detach(struct dma_buf *dmabuf,
struct dma_buf_attachment *attachment)
+{
struct tee_heap_attachment *a = attachment->priv;
sg_free_table(&a->table);
kfree(a);
+}
+static struct sg_table *
+tee_heap_map_dma_buf(struct dma_buf_attachment *attachment,
enum dma_data_direction direction)
+{
struct tee_heap_attachment *a = attachment->priv;
int ret;
ret = dma_map_sgtable(attachment->dev, &a->table, direction,
DMA_ATTR_SKIP_CPU_SYNC);
if (ret)
return ERR_PTR(ret);
return &a->table;
+}
+static void tee_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
struct sg_table *table,
enum dma_data_direction direction)
+{
struct tee_heap_attachment *a = attachment->priv;
WARN_ON(&a->table != table);
dma_unmap_sgtable(attachment->dev, table, direction,
DMA_ATTR_SKIP_CPU_SYNC);
+}
+static void tee_heap_buf_free(struct dma_buf *dmabuf)
+{
struct tee_heap_buffer *buf = dmabuf->priv;
struct tee_device *teedev = buf->teedev;
buf->pool->ops->free(buf->pool, &buf->table);
tee_device_put(teedev);
+}
+static const struct dma_buf_ops tee_heap_buf_ops = {
.attach = tee_heap_attach,
.detach = tee_heap_detach,
.map_dma_buf = tee_heap_map_dma_buf,
.unmap_dma_buf = tee_heap_unmap_dma_buf,
.release = tee_heap_buf_free,
+};
+static struct dma_buf *tee_dma_heap_alloc(struct dma_heap *heap,
unsigned long len, u32 fd_flags,
u64 heap_flags)
+{
struct tee_dma_heap *h = dma_heap_get_drvdata(heap);
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
struct tee_device *teedev = NULL;
struct tee_heap_buffer *buf;
struct tee_rstmem_pool *pool;
struct dma_buf *dmabuf;
int rc;
mutex_lock(&h->mu);
if (tee_device_get(h->teedev)) {
teedev = h->teedev;
pool = h->pool;
}
mutex_unlock(&h->mu);
if (!teedev)
return ERR_PTR(-EINVAL);
buf = kzalloc(sizeof(*buf), GFP_KERNEL);
if (!buf) {
dmabuf = ERR_PTR(-ENOMEM);
goto err;
}
buf->size = len;
buf->pool = pool;
buf->teedev = teedev;
rc = pool->ops->alloc(pool, &buf->table, len, &buf->offs);
if (rc) {
dmabuf = ERR_PTR(rc);
goto err_kfree;
}
exp_info.ops = &tee_heap_buf_ops;
exp_info.size = len;
exp_info.priv = buf;
exp_info.flags = fd_flags;
dmabuf = dma_buf_export(&exp_info);
if (IS_ERR(dmabuf))
goto err_rstmem_free;
return dmabuf;
+err_rstmem_free:
pool->ops->free(pool, &buf->table);
+err_kfree:
kfree(buf);
+err:
tee_device_put(h->teedev);
return dmabuf;
+}
+static const struct dma_heap_ops tee_dma_heap_ops = {
.allocate = tee_dma_heap_alloc,
+};
+static const char *heap_id_2_name(enum tee_dma_heap_id id)
+{
switch (id) {
case TEE_DMA_HEAP_SECURE_VIDEO_PLAY:
return "restricted,secure-video";
case TEE_DMA_HEAP_TRUSTED_UI:
return "restricted,trusted-ui";
case TEE_DMA_HEAP_SECURE_VIDEO_RECORD:
return "restricted,secure-video-record";
default:
return NULL;
}
+}
+static int alloc_dma_heap(struct tee_device *teedev, enum tee_dma_heap_id id,
struct tee_rstmem_pool *pool)
+{
struct dma_heap_export_info exp_info = {
.ops = &tee_dma_heap_ops,
.name = heap_id_2_name(id),
};
struct tee_dma_heap *h;
int rc;
if (!exp_info.name)
return -EINVAL;
if (xa_reserve(&tee_dma_heap, id, GFP_KERNEL)) {
if (!xa_load(&tee_dma_heap, id))
return -EEXIST;
return -ENOMEM;
}
h = kzalloc(sizeof(*h), GFP_KERNEL);
if (!h)
return -ENOMEM;
h->id = id;
h->teedev = teedev;
h->pool = pool;
mutex_init(&h->mu);
exp_info.priv = h;
h->heap = dma_heap_add(&exp_info);
if (IS_ERR(h->heap)) {
rc = PTR_ERR(h->heap);
kfree(h);
return rc;
}
/* "can't fail" due to the call to xa_reserve() above */
return WARN(xa_store(&tee_dma_heap, id, h, GFP_KERNEL),
"xa_store() failed");
+}
+int tee_device_register_dma_heap(struct tee_device *teedev,
enum tee_dma_heap_id id,
struct tee_rstmem_pool *pool)
+{
struct tee_dma_heap *h;
int rc;
h = xa_load(&tee_dma_heap, id);
if (h) {
mutex_lock(&h->mu);
if (h->teedev) {
rc = -EBUSY;
} else {
h->teedev = teedev;
h->pool = pool;
rc = 0;
}
mutex_unlock(&h->mu);
} else {
rc = alloc_dma_heap(teedev, id, pool);
}
if (rc)
dev_err(&teedev->dev, "can't register DMA heap id %d (%s)\n",
id, heap_id_2_name(id));
return rc;
+}
+void tee_device_unregister_all_dma_heaps(struct tee_device *teedev)
+{
struct tee_rstmem_pool *pool;
struct tee_dma_heap *h;
u_long i;
xa_for_each(&tee_dma_heap, i, h) {
if (h) {
pool = NULL;
mutex_lock(&h->mu);
if (h->teedev == teedev) {
pool = h->pool;
h->teedev = NULL;
h->pool = NULL;
}
mutex_unlock(&h->mu);
if (pool)
pool->ops->destroy_pool(pool);
}
}
+}
+EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps);
+int tee_heap_update_from_dma_buf(struct tee_device *teedev,
struct dma_buf *dmabuf, size_t *offset,
struct tee_shm *shm,
struct tee_shm **parent_shm)
+{
struct tee_heap_buffer *buf;
int rc;
/* The DMA-buf must be from our heap */
if (dmabuf->ops != &tee_heap_buf_ops)
return -EINVAL;
buf = dmabuf->priv;
/* The buffer must be from the same teedev */
if (buf->teedev != teedev)
return -EINVAL;
shm->size = buf->size;
rc = buf->pool->ops->update_shm(buf->pool, &buf->table, buf->offs, shm,
parent_shm);
if (!rc && *parent_shm)
*offset = buf->offs;
return rc;
+}
+#else
+int tee_device_register_dma_heap(struct tee_device *teedev __always_unused,
enum tee_dma_heap_id id __always_unused,
struct tee_rstmem_pool *pool __always_unused)
+{
return -EINVAL;
+}
+EXPORT_SYMBOL_GPL(tee_device_register_dma_heap);
+void
+tee_device_unregister_all_dma_heaps(struct tee_device *teedev __always_unused)
+{
+}
+EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps);
+int tee_heap_update_from_dma_buf(struct tee_device *teedev __always_unused,
struct dma_buf *dmabuf __always_unused,
size_t *offset __always_unused,
struct tee_shm *shm __always_unused,
struct tee_shm **parent_shm __always_unused)
+{
return -EINVAL;
+}
+#endif
+static struct tee_rstmem_static_pool *
+to_rstmem_static_pool(struct tee_rstmem_pool *pool)
+{
return container_of(pool, struct tee_rstmem_static_pool, pool);
+}
+static int rstmem_pool_op_static_alloc(struct tee_rstmem_pool *pool,
struct sg_table *sgt, size_t size,
size_t *offs)
+{
struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
phys_addr_t pa;
int ret;
pa = gen_pool_alloc(stp->gen_pool, size);
if (!pa)
return -ENOMEM;
ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
if (ret) {
gen_pool_free(stp->gen_pool, pa, size);
return ret;
}
sg_set_page(sgt->sgl, phys_to_page(pa), size, 0);
*offs = pa - stp->pa_base;
return 0;
+}
+static void rstmem_pool_op_static_free(struct tee_rstmem_pool *pool,
struct sg_table *sgt)
+{
struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
struct scatterlist *sg;
int i;
for_each_sgtable_sg(sgt, sg, i)
gen_pool_free(stp->gen_pool, sg_phys(sg), sg->length);
sg_free_table(sgt);
+}
+static int rstmem_pool_op_static_update_shm(struct tee_rstmem_pool *pool,
struct sg_table *sgt, size_t offs,
struct tee_shm *shm,
struct tee_shm **parent_shm)
+{
struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
shm->paddr = stp->pa_base + offs;
*parent_shm = NULL;
return 0;
+}
+static void rstmem_pool_op_static_destroy_pool(struct tee_rstmem_pool *pool)
+{
struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
gen_pool_destroy(stp->gen_pool);
kfree(stp);
+}
+static struct tee_rstmem_pool_ops rstmem_pool_ops_static = {
.alloc = rstmem_pool_op_static_alloc,
.free = rstmem_pool_op_static_free,
.update_shm = rstmem_pool_op_static_update_shm,
.destroy_pool = rstmem_pool_op_static_destroy_pool,
+};
+struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr,
size_t size)
+{
const size_t page_mask = PAGE_SIZE - 1;
struct tee_rstmem_static_pool *stp;
int rc;
/* Check it's page aligned */
if ((paddr | size) & page_mask)
return ERR_PTR(-EINVAL);
stp = kzalloc(sizeof(*stp), GFP_KERNEL);
if (!stp)
return ERR_PTR(-ENOMEM);
stp->gen_pool = gen_pool_create(PAGE_SHIFT, -1);
if (!stp->gen_pool) {
rc = -ENOMEM;
goto err_free;
}
rc = gen_pool_add(stp->gen_pool, paddr, size, -1);
if (rc)
goto err_free_pool;
stp->pool.ops = &rstmem_pool_ops_static;
stp->pa_base = paddr;
return &stp->pool;
+err_free_pool:
gen_pool_destroy(stp->gen_pool);
+err_free:
kfree(stp);
return ERR_PTR(rc);
+}
+EXPORT_SYMBOL_GPL(tee_rstmem_static_pool_alloc);
diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h
index 9bc50605227c..6c6ff5d5eed2 100644
--- a/drivers/tee/tee_private.h
+++ b/drivers/tee/tee_private.h
@@ -8,6 +8,7 @@
 #include <linux/cdev.h>
 #include <linux/completion.h>
 #include <linux/device.h>
+#include <linux/dma-buf.h>
 #include <linux/kref.h>
 #include <linux/mutex.h>
 #include <linux/types.h>
@@ -24,4 +25,9 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size);
 struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx,
 					  unsigned long addr, size_t length);
+int tee_heap_update_from_dma_buf(struct tee_device *teedev,
struct dma_buf *dmabuf, size_t *offset,
struct tee_shm *shm,
struct tee_shm **parent_shm);
 #endif /*TEE_PRIVATE_H*/
diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h
index a38494d6b5f4..16ef078247ae 100644
--- a/include/linux/tee_core.h
+++ b/include/linux/tee_core.h
@@ -8,9 +8,11 @@
 #include <linux/cdev.h>
 #include <linux/device.h>
+#include <linux/dma-buf.h>
 #include <linux/idr.h>
 #include <linux/kref.h>
 #include <linux/list.h>
+#include <linux/scatterlist.h>
 #include <linux/tee.h>
 #include <linux/tee_drv.h>
 #include <linux/types.h>
@@ -30,6 +32,12 @@
 #define TEE_DEVICE_FLAG_REGISTERED	0x1
 #define TEE_MAX_DEV_NAME_LEN		32
+enum tee_dma_heap_id {
TEE_DMA_HEAP_SECURE_VIDEO_PLAY = 1,
TEE_DMA_HEAP_TRUSTED_UI,
TEE_DMA_HEAP_SECURE_VIDEO_RECORD,
+};
/**
 * struct tee_device - TEE Device representation
 * @name:	name of device
@@ -116,6 +124,33 @@ struct tee_desc {
 	u32 flags;
 };
+/**
 * struct tee_rstmem_pool - restricted memory pool
 * @ops:	operations
 *
 * This is an abstract interface where this struct is expected to be
 * embedded in another struct specific to the implementation.
 */
+struct tee_rstmem_pool {
const struct tee_rstmem_pool_ops *ops;
+};
+/**
 * struct tee_rstmem_pool_ops - restricted memory pool operations
 * @alloc:	called when allocating restricted memory
 * @free:	called when freeing restricted memory
 * @destroy_pool: called when destroying the pool
 */
+struct tee_rstmem_pool_ops {
int (*alloc)(struct tee_rstmem_pool *pool, struct sg_table *sgt,
size_t size, size_t *offs);
void (*free)(struct tee_rstmem_pool *pool, struct sg_table *sgt);
int (*update_shm)(struct tee_rstmem_pool *pool, struct sg_table *sgt,
size_t offs, struct tee_shm *shm,
struct tee_shm **parent_shm);
This API isn't described in the kdoc comment above. Can you describe the role of this API and when it's needed?
Yes, I'll add something.
Cheers, Jens
-Sumit
void (*destroy_pool)(struct tee_rstmem_pool *pool);
+};
/**
 * tee_device_alloc() - Allocate a new struct tee_device instance
 * @teedesc:	Descriptor for this driver
@@ -154,6 +189,11 @@ int tee_device_register(struct tee_device *teedev);
  */
 void tee_device_unregister(struct tee_device *teedev);
+int tee_device_register_dma_heap(struct tee_device *teedev,
enum tee_dma_heap_id id,
struct tee_rstmem_pool *pool);
+void tee_device_unregister_all_dma_heaps(struct tee_device *teedev);
/**
 * tee_device_set_dev_groups() - Set device attribute groups
 * @teedev:	Device to register
@@ -229,6 +269,28 @@ static inline void tee_shm_pool_free(struct tee_shm_pool *pool)
 	pool->ops->destroy_pool(pool);
 }
+/**
 * tee_rstmem_static_pool_alloc() - Create a restricted memory manager
 * @paddr:	Physical address of start of pool
 * @size:	Size in bytes of the pool
 *
 * @returns pointer to a 'struct tee_rstmem_pool' or an ERR_PTR on failure.
 */
+struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr,
size_t size);
+/**
 * tee_rstmem_pool_free() - Free a restricted memory pool
 * @pool:	The restricted memory pool to free
 *
 * There must be no remaining restricted memory allocated from this pool
 * when this function is called.
 */
+static inline void tee_rstmem_pool_free(struct tee_rstmem_pool *pool)
+{
pool->ops->destroy_pool(pool);
+}
/**
 * tee_get_drvdata() - Return driver_data pointer
 * @returns the driver_data pointer supplied to tee_register().
--
2.43.0
On Tue, Mar 25, 2025 at 11:55:46AM +0100, Jens Wiklander wrote:
Hi Sumit,
<snip>
+#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
Could this dependency be better managed via Kconfig?
This was the easiest yet somewhat flexible solution I could find. If you have something better, let's use that instead.
--- a/drivers/tee/optee/Kconfig
+++ b/drivers/tee/optee/Kconfig
@@ -5,6 +5,7 @@ config OPTEE
 	depends on HAVE_ARM_SMCCC
 	depends on MMU
 	depends on RPMB || !RPMB
+	select DMABUF_HEAPS
 	help
 	  This implements the OP-TEE Trusted Execution Environment (TEE)
 	  driver.
-Sumit
On Tue, Apr 1, 2025 at 9:58 AM Sumit Garg sumit.garg@kernel.org wrote:
On Tue, Mar 25, 2025 at 11:55:46AM +0100, Jens Wiklander wrote:
Hi Sumit,
<snip>
+#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
Could this dependency be better managed via Kconfig?
This was the easiest yet somewhat flexible solution I could find. If you have something better, let's use that instead.
--- a/drivers/tee/optee/Kconfig
+++ b/drivers/tee/optee/Kconfig
@@ -5,6 +5,7 @@ config OPTEE
 	depends on HAVE_ARM_SMCCC
 	depends on MMU
 	depends on RPMB || !RPMB
+	select DMABUF_HEAPS
 	help
 	  This implements the OP-TEE Trusted Execution Environment (TEE)
 	  driver.
I wanted to avoid that since there are plenty of use cases where DMABUF_HEAPS aren't needed. This seems to do the job:

+config TEE_DMABUF_HEAP
+	bool
+	depends on TEE = y && DMABUF_HEAPS
We can only use DMABUF_HEAPS if the TEE subsystem is compiled into the kernel.
Cheers, Jens
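For the record, one possible shape for such a helper symbol is sketched below. This is an untested assumption about how it could look, not part of the series: CONFIG_TEE_DMABUF_HEAP is hypothetical, and the exact Kconfig expression would need checking.

```
# Hypothetical hidden helper symbol: enabled automatically when the TEE
# subsystem is built in and DMA-buf heaps are available.
config TEE_DMABUF_HEAP
	def_bool y
	depends on TEE = y && DMABUF_HEAPS
```

The guard in tee_heap.c could then read `#if IS_ENABLED(CONFIG_TEE_DMABUF_HEAP)` instead of open-coding the `!IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)` test.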
From: Etienne Carriere etienne.carriere@linaro.org
Enable userspace to create a tee_shm object that refers to a dmabuf reference.
Userspace registers the dmabuf file descriptor in a tee_shm object. The registration is completed with a tee_shm file descriptor returned to userspace.
Userspace is then free to close the dmabuf file descriptor, since all the resources are held via the tee_shm object.
Closing the tee_shm file descriptor will release all resources used by the tee_shm object.
This change only supports dmabuf references that relate to physically contiguous memory buffers.
A new tee_shm flag, TEE_SHM_DMA_BUF, identifies tee_shm objects built from a registered dmabuf.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Olivier Masse <olivier.masse@nxp.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 drivers/tee/tee_core.c    | 145 ++++++++++++++++++++++++++-----------
 drivers/tee/tee_private.h |   1 +
 drivers/tee/tee_shm.c     | 146 ++++++++++++++++++++++++++++++++++++--
 include/linux/tee_core.h  |   1 +
 include/linux/tee_drv.h   |  10 +++
 include/uapi/linux/tee.h  |  29 ++++++++
 6 files changed, 288 insertions(+), 44 deletions(-)
diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
index 685afcaa3ea1..3a71643766d5 100644
--- a/drivers/tee/tee_core.c
+++ b/drivers/tee/tee_core.c
@@ -353,6 +353,103 @@ tee_ioctl_shm_register(struct tee_context *ctx,
 	return ret;
 }
+static int
+tee_ioctl_shm_register_fd(struct tee_context *ctx,
+			  struct tee_ioctl_shm_register_fd_data __user *udata)
+{
+	struct tee_ioctl_shm_register_fd_data data;
+	struct tee_shm *shm;
+	long ret;
+
+	if (copy_from_user(&data, udata, sizeof(data)))
+		return -EFAULT;
+
+	/* Currently no input flags are supported */
+	if (data.flags)
+		return -EINVAL;
+
+	shm = tee_shm_register_fd(ctx, data.fd);
+	if (IS_ERR(shm))
+		return -EINVAL;
+
+	data.id = shm->id;
+	data.flags = shm->flags;
+	data.size = shm->size;
+
+	if (copy_to_user(udata, &data, sizeof(data)))
+		ret = -EFAULT;
+	else
+		ret = tee_shm_get_fd(shm);
+
+	/*
+	 * When user space closes the file descriptor the shared memory
+	 * should be freed or if tee_shm_get_fd() failed then it will
+	 * be freed immediately.
+	 */
+	tee_shm_put(shm);
+	return ret;
+}
+
+static int param_from_user_memref(struct tee_context *ctx,
+				  struct tee_param_memref *memref,
+				  struct tee_ioctl_param *ip)
+{
+	struct tee_shm *shm;
+	size_t offs = 0;
+
+	/*
+	 * If a NULL pointer is passed to a TA in the TEE,
+	 * the ip.c IOCTL parameters is set to TEE_MEMREF_NULL
+	 * indicating a NULL memory reference.
+	 */
+	if (ip->c != TEE_MEMREF_NULL) {
+		/*
+		 * If we fail to get a pointer to a shared
+		 * memory object (and increase the ref count)
+		 * from an identifier we return an error. All
+		 * pointers that has been added in params have
+		 * an increased ref count. It's the callers
+		 * responibility to do tee_shm_put() on all
+		 * resolved pointers.
+		 */
+		shm = tee_shm_get_from_id(ctx, ip->c);
+		if (IS_ERR(shm))
+			return PTR_ERR(shm);
+
+		/*
+		 * Ensure offset + size does not overflow
+		 * offset and does not overflow the size of
+		 * the referred shared memory object.
+		 */
+		if ((ip->a + ip->b) < ip->a ||
+		    (ip->a + ip->b) > shm->size) {
+			tee_shm_put(shm);
+			return -EINVAL;
+		}
+
+		if (shm->flags & TEE_SHM_DMA_BUF) {
+			struct tee_shm *parent_shm;
+
+			parent_shm = tee_shm_get_parent_shm(shm, &offs);
+			if (parent_shm) {
+				tee_shm_put(shm);
+				shm = parent_shm;
+			}
+		}
+	} else if (ctx->cap_memref_null) {
+		/* Pass NULL pointer to OP-TEE */
+		shm = NULL;
+	} else {
+		return -EINVAL;
+	}
+
+	memref->shm_offs = ip->a + offs;
+	memref->size = ip->b;
+	memref->shm = shm;
+
+	return 0;
+}
+
 static int params_from_user(struct tee_context *ctx, struct tee_param *params,
 			    size_t num_params,
 			    struct tee_ioctl_param __user *uparams)
@@ -360,8 +457,8 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params,
 	size_t n;
 	for (n = 0; n < num_params; n++) {
-		struct tee_shm *shm;
 		struct tee_ioctl_param ip;
+		int rc;
 		if (copy_from_user(&ip, uparams + n, sizeof(ip)))
 			return -EFAULT;
@@ -384,45 +481,10 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params,
 		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
 		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
 		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
-			/*
-			 * If a NULL pointer is passed to a TA in the TEE,
-			 * the ip.c IOCTL parameters is set to TEE_MEMREF_NULL
-			 * indicating a NULL memory reference.
-			 */
-			if (ip.c != TEE_MEMREF_NULL) {
-				/*
-				 * If we fail to get a pointer to a shared
-				 * memory object (and increase the ref count)
-				 * from an identifier we return an error. All
-				 * pointers that has been added in params have
-				 * an increased ref count. It's the callers
-				 * responibility to do tee_shm_put() on all
-				 * resolved pointers.
-				 */
-				shm = tee_shm_get_from_id(ctx, ip.c);
-				if (IS_ERR(shm))
-					return PTR_ERR(shm);
-
-				/*
-				 * Ensure offset + size does not overflow
-				 * offset and does not overflow the size of
-				 * the referred shared memory object.
-				 */
-				if ((ip.a + ip.b) < ip.a ||
-				    (ip.a + ip.b) > shm->size) {
-					tee_shm_put(shm);
-					return -EINVAL;
-				}
-			} else if (ctx->cap_memref_null) {
-				/* Pass NULL pointer to OP-TEE */
-				shm = NULL;
-			} else {
-				return -EINVAL;
-			}
-
-			params[n].u.memref.shm_offs = ip.a;
-			params[n].u.memref.size = ip.b;
-			params[n].u.memref.shm = shm;
+			rc = param_from_user_memref(ctx, &params[n].u.memref,
+						    &ip);
+			if (rc)
+				return rc;
 			break;
 		default:
 			/* Unknown attribute */
@@ -827,6 +889,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 		return tee_ioctl_shm_alloc(ctx, uarg);
 	case TEE_IOC_SHM_REGISTER:
 		return tee_ioctl_shm_register(ctx, uarg);
+	case TEE_IOC_SHM_REGISTER_FD:
+		return tee_ioctl_shm_register_fd(ctx, uarg);
 	case TEE_IOC_OPEN_SESSION:
 		return tee_ioctl_open_session(ctx, uarg);
 	case TEE_IOC_INVOKE:
@@ -1288,3 +1352,4 @@ MODULE_AUTHOR("Linaro");
 MODULE_DESCRIPTION("TEE Driver");
 MODULE_VERSION("1.0");
 MODULE_LICENSE("GPL v2");
+MODULE_IMPORT_NS("DMA_BUF");
diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h
index 6c6ff5d5eed2..aad7f6c7e0f0 100644
--- a/drivers/tee/tee_private.h
+++ b/drivers/tee/tee_private.h
@@ -24,6 +24,7 @@ void teedev_ctx_put(struct tee_context *ctx);
 struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size);
 struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx,
 					  unsigned long addr, size_t length);
+struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs);
 int tee_heap_update_from_dma_buf(struct tee_device *teedev,
 				 struct dma_buf *dmabuf, size_t *offset,
diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
index daf6e5cfd59a..8b79918468b5 100644
--- a/drivers/tee/tee_shm.c
+++ b/drivers/tee/tee_shm.c
@@ -4,6 +4,7 @@
  */
 #include <linux/anon_inodes.h>
 #include <linux/device.h>
+#include <linux/dma-buf.h>
 #include <linux/idr.h>
 #include <linux/io.h>
 #include <linux/mm.h>
@@ -15,6 +16,16 @@
 #include <linux/highmem.h>
 #include "tee_private.h"
+/* extra references appended to shm object for registered shared memory */
+struct tee_shm_dmabuf_ref {
+	struct tee_shm shm;
+	size_t offset;
+	struct dma_buf *dmabuf;
+	struct dma_buf_attachment *attach;
+	struct sg_table *sgt;
+	struct tee_shm *parent_shm;
+};
+
 static void shm_put_kernel_pages(struct page **pages, size_t page_count)
 {
 	size_t n;
@@ -45,7 +56,23 @@ static void release_registered_pages(struct tee_shm *shm)

 static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)
 {
-	if (shm->flags & TEE_SHM_POOL) {
+	struct tee_shm *parent_shm = NULL;
+	void *p = shm;
+
+	if (shm->flags & TEE_SHM_DMA_BUF) {
+		struct tee_shm_dmabuf_ref *ref;
+
+		ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
+		parent_shm = ref->parent_shm;
+		p = ref;
+		if (ref->attach) {
+			dma_buf_unmap_attachment(ref->attach, ref->sgt,
+						 DMA_BIDIRECTIONAL);
+
+			dma_buf_detach(ref->dmabuf, ref->attach);
+		}
+		dma_buf_put(ref->dmabuf);
+	} else if (shm->flags & TEE_SHM_POOL) {
 		teedev->pool->ops->free(teedev->pool, shm);
 	} else if (shm->flags & TEE_SHM_DYNAMIC) {
 		int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
@@ -57,9 +84,10 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)
 		release_registered_pages(shm);
 	}
-	teedev_ctx_put(shm->ctx);
+	if (shm->ctx)
+		teedev_ctx_put(shm->ctx);

-	kfree(shm);
+	kfree(p);

 	tee_device_put(teedev);
 }
@@ -169,7 +197,7 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size)
  * tee_client_invoke_func(). The memory allocated is later freed with a
  * call to tee_shm_free().
  *
- * @returns a pointer to 'struct tee_shm'
+ * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
  */
 struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
 {
@@ -179,6 +207,116 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
 }
 EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf);
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd)
+{
+	struct tee_shm_dmabuf_ref *ref;
+	int rc;
+
+	if (!tee_device_get(ctx->teedev))
+		return ERR_PTR(-EINVAL);
+
+	teedev_ctx_get(ctx);
+
+	ref = kzalloc(sizeof(*ref), GFP_KERNEL);
+	if (!ref) {
+		rc = -ENOMEM;
+		goto err_put_tee;
+	}
+
+	refcount_set(&ref->shm.refcount, 1);
+	ref->shm.ctx = ctx;
+	ref->shm.id = -1;
+	ref->shm.flags = TEE_SHM_DMA_BUF;
+
+	ref->dmabuf = dma_buf_get(fd);
+	if (IS_ERR(ref->dmabuf)) {
+		rc = PTR_ERR(ref->dmabuf);
+		goto err_kfree_ref;
+	}
+
+	rc = tee_heap_update_from_dma_buf(ctx->teedev, ref->dmabuf,
+					  &ref->offset, &ref->shm,
+					  &ref->parent_shm);
+	if (!rc)
+		goto out;
+	if (rc != -EINVAL)
+		goto err_put_dmabuf;
+
+	ref->attach = dma_buf_attach(ref->dmabuf, &ctx->teedev->dev);
+	if (IS_ERR(ref->attach)) {
+		rc = PTR_ERR(ref->attach);
+		goto err_put_dmabuf;
+	}
+
+	ref->sgt = dma_buf_map_attachment(ref->attach, DMA_BIDIRECTIONAL);
+	if (IS_ERR(ref->sgt)) {
+		rc = PTR_ERR(ref->sgt);
+		goto err_detach;
+	}
+
+	if (sg_nents(ref->sgt->sgl) != 1) {
+		rc = PTR_ERR(ref->sgt->sgl);
+		goto err_unmap_attachement;
+	}
+
+	ref->shm.paddr = page_to_phys(sg_page(ref->sgt->sgl));
+	ref->shm.size = ref->sgt->sgl->length;
+
+out:
+	mutex_lock(&ref->shm.ctx->teedev->mutex);
+	ref->shm.id = idr_alloc(&ref->shm.ctx->teedev->idr, &ref->shm,
+				1, 0, GFP_KERNEL);
+	mutex_unlock(&ref->shm.ctx->teedev->mutex);
+	if (ref->shm.id < 0) {
+		rc = ref->shm.id;
+		if (ref->attach)
+			goto err_unmap_attachement;
+		goto err_put_dmabuf;
+	}
+
+	return &ref->shm;
+
+err_unmap_attachement:
+	dma_buf_unmap_attachment(ref->attach, ref->sgt, DMA_BIDIRECTIONAL);
+err_detach:
+	dma_buf_detach(ref->dmabuf, ref->attach);
+err_put_dmabuf:
+	dma_buf_put(ref->dmabuf);
+err_kfree_ref:
+	kfree(ref);
+err_put_tee:
+	teedev_ctx_put(ctx);
+	tee_device_put(ctx->teedev);
+
+	return ERR_PTR(rc);
+}
+EXPORT_SYMBOL_GPL(tee_shm_register_fd);
+
+struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs)
+{
+	struct tee_shm *parent_shm = NULL;
+
+	if (shm->flags & TEE_SHM_DMA_BUF) {
+		struct tee_shm_dmabuf_ref *ref;
+
+		ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
+		if (ref->parent_shm) {
+			/*
+			 * the shm already has one reference to
+			 * ref->parent_shm so we should be clear of 0.
+			 * We're getting another reference since the caller
+			 * of this function expects to put the returned
+			 * parent_shm when it's done with it.
+			 */
+			parent_shm = ref->parent_shm;
+			refcount_inc(&parent_shm->refcount);
+			*offs = ref->offset;
+		}
+	}
+
+	return parent_shm;
+}
+
 /**
  * tee_shm_alloc_priv_buf() - Allocate shared memory for a privately shared
  *			      kernel buffer
diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h
index 16ef078247ae..6bd833b6d0e1 100644
--- a/include/linux/tee_core.h
+++ b/include/linux/tee_core.h
@@ -28,6 +28,7 @@
 #define TEE_SHM_USER_MAPPED	BIT(1)  /* Memory mapped in user space */
 #define TEE_SHM_POOL		BIT(2)  /* Memory allocated from pool */
 #define TEE_SHM_PRIV		BIT(3)  /* Memory private to TEE driver */
+#define TEE_SHM_DMA_BUF		BIT(4)	/* Memory with dma-buf handle */
 #define TEE_DEVICE_FLAG_REGISTERED	0x1
 #define TEE_MAX_DEV_NAME_LEN		32
diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
index a54c203000ed..824f1251de60 100644
--- a/include/linux/tee_drv.h
+++ b/include/linux/tee_drv.h
@@ -116,6 +116,16 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size);
 struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx,
 					    void *addr, size_t length);
+/**
+ * tee_shm_register_fd() - Register shared memory from file descriptor
+ *
+ * @ctx:	Context that allocates the shared memory
+ * @fd:		Shared memory file descriptor reference
+ *
+ * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
+ */
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd);
+
 /**
  * tee_shm_free() - Free shared memory
  * @shm:	Handle to shared memory to free
diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h
index d0430bee8292..1f9a4ac2b211 100644
--- a/include/uapi/linux/tee.h
+++ b/include/uapi/linux/tee.h
@@ -118,6 +118,35 @@ struct tee_ioctl_shm_alloc_data {
 #define TEE_IOC_SHM_ALLOC	_IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 1, \
 				      struct tee_ioctl_shm_alloc_data)
+/**
+ * struct tee_ioctl_shm_register_fd_data - Shared memory registering argument
+ * @fd:		[in] File descriptor identifying the shared memory
+ * @size:	[out] Size of shared memory to allocate
+ * @flags:	[in] Flags to/from allocation.
+ * @id:		[out] Identifier of the shared memory
+ *
+ * The flags field should currently be zero as input. Updated by the call
+ * with actual flags as defined by TEE_IOCTL_SHM_* above.
+ * This structure is used as argument for TEE_IOC_SHM_REGISTER_FD below.
+ */
+struct tee_ioctl_shm_register_fd_data {
+	__s64 fd;
+	__u64 size;
+	__u32 flags;
+	__s32 id;
+};
+
+/**
+ * TEE_IOC_SHM_REGISTER_FD - register a shared memory from a file descriptor
+ *
+ * Returns a file descriptor on success or < 0 on failure
+ *
+ * The returned file descriptor refers to the shared memory object in kernel
+ * land. The shared memory is freed when the descriptor is closed.
+ */
+#define TEE_IOC_SHM_REGISTER_FD _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 8, \
+				      struct tee_ioctl_shm_register_fd_data)
+
 /**
  * struct tee_ioctl_buf_data - Variable sized buffer
  * @buf_ptr:	[in] A __user pointer to a buffer
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:12PM +0100, Jens Wiklander wrote:
<snip>
I am still trying to figure out if we really need a separate IOCTL to register a DMA-buf with the TEE subsystem. Can't we initialize a tee_shm as a member of struct tee_heap_buffer in tee_dma_heap_alloc(), where the allocation happens? We can always find a reference back to the tee_shm object from the DMA buffer.
-Sumit
Hi Sumit,
On Tue, Mar 25, 2025 at 7:50 AM Sumit Garg sumit.garg@kernel.org wrote:
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:12PM +0100, Jens Wiklander wrote:
From: Etienne Carriere etienne.carriere@linaro.org
Enable userspace to create a tee_shm object that refers to a dmabuf reference.
Userspace registers the dmabuf file descriptor as in a tee_shm object. The registration is completed with a tee_shm file descriptor returned to userspace.
Userspace is free to close the dmabuf file descriptor now since all the resources are now held via the tee_shm object.
Closing the tee_shm file descriptor will release all resources used by the tee_shm object.
This change only support dmabuf references that relates to physically contiguous memory buffers.
New tee_shm flag to identify tee_shm objects built from a registered dmabuf, TEE_SHM_DMA_BUF.
Signed-off-by: Etienne Carriere etienne.carriere@linaro.org Signed-off-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/tee_core.c | 145 ++++++++++++++++++++++++++----------- drivers/tee/tee_private.h | 1 + drivers/tee/tee_shm.c | 146 ++++++++++++++++++++++++++++++++++++-- include/linux/tee_core.h | 1 + include/linux/tee_drv.h | 10 +++ include/uapi/linux/tee.h | 29 ++++++++ 6 files changed, 288 insertions(+), 44 deletions(-)
I am still trying to find if we really need a separate IOCTL to register DMA heap with TEE subsystem. Can't we initialize tee_shm as a member of struct tee_heap_buffer in tee_dma_heap_alloc() where the allocation happens?
No, that's not possible since we don't have a tee_context available, so we can't assign an ID that userspace can use for this shm object.
We could add new attribute types TEE_IOCTL_PARAM_ATTR_TYPE_MEMFD_*, but it's a bit more complicated than that since we'd also need to update the life-cycle handling.
We can always find a reference back to tee_shm object from DMA buffer.
Yes
Cheers, Jens
-Sumit
diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
index 685afcaa3ea1..3a71643766d5 100644
--- a/drivers/tee/tee_core.c
+++ b/drivers/tee/tee_core.c
@@ -353,6 +353,103 @@ tee_ioctl_shm_register(struct tee_context *ctx,
	return ret;
}
+static int
+tee_ioctl_shm_register_fd(struct tee_context *ctx,
struct tee_ioctl_shm_register_fd_data __user *udata)
+{
struct tee_ioctl_shm_register_fd_data data;
struct tee_shm *shm;
long ret;
if (copy_from_user(&data, udata, sizeof(data)))
return -EFAULT;
/* Currently no input flags are supported */
if (data.flags)
return -EINVAL;
shm = tee_shm_register_fd(ctx, data.fd);
if (IS_ERR(shm))
return -EINVAL;
data.id = shm->id;
data.flags = shm->flags;
data.size = shm->size;
if (copy_to_user(udata, &data, sizeof(data)))
ret = -EFAULT;
else
ret = tee_shm_get_fd(shm);
/*
* When user space closes the file descriptor the shared memory
* should be freed or if tee_shm_get_fd() failed then it will
* be freed immediately.
*/
tee_shm_put(shm);
return ret;
+}
+static int param_from_user_memref(struct tee_context *ctx,
struct tee_param_memref *memref,
struct tee_ioctl_param *ip)
+{
struct tee_shm *shm;
size_t offs = 0;
/*
* If a NULL pointer is passed to a TA in the TEE,
* the ip.c IOCTL parameters is set to TEE_MEMREF_NULL
* indicating a NULL memory reference.
*/
if (ip->c != TEE_MEMREF_NULL) {
/*
* If we fail to get a pointer to a shared
* memory object (and increase the ref count)
* from an identifier we return an error. All
* pointers that has been added in params have
		 * an increased ref count. It's the caller's
		 * responsibility to do tee_shm_put() on all
		 * resolved pointers.
*/
shm = tee_shm_get_from_id(ctx, ip->c);
if (IS_ERR(shm))
return PTR_ERR(shm);
/*
* Ensure offset + size does not overflow
* offset and does not overflow the size of
* the referred shared memory object.
*/
if ((ip->a + ip->b) < ip->a ||
(ip->a + ip->b) > shm->size) {
tee_shm_put(shm);
return -EINVAL;
}
if (shm->flags & TEE_SHM_DMA_BUF) {
struct tee_shm *parent_shm;
parent_shm = tee_shm_get_parent_shm(shm, &offs);
if (parent_shm) {
tee_shm_put(shm);
shm = parent_shm;
}
}
} else if (ctx->cap_memref_null) {
/* Pass NULL pointer to OP-TEE */
shm = NULL;
} else {
return -EINVAL;
}
memref->shm_offs = ip->a + offs;
memref->size = ip->b;
memref->shm = shm;
return 0;
+}
static int params_from_user(struct tee_context *ctx, struct tee_param *params,
			    size_t num_params,
			    struct tee_ioctl_param __user *uparams)
@@ -360,8 +457,8 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params,
	size_t n;
	for (n = 0; n < num_params; n++) {
-		struct tee_shm *shm;
		struct tee_ioctl_param ip;
+		int rc;
		if (copy_from_user(&ip, uparams + n, sizeof(ip)))
			return -EFAULT;
@@ -384,45 +481,10 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params,
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
-			/*
-			 * If a NULL pointer is passed to a TA in the TEE,
-			 * the ip.c IOCTL parameters is set to TEE_MEMREF_NULL
-			 * indicating a NULL memory reference.
-			 */
-			if (ip.c != TEE_MEMREF_NULL) {
-				/*
-				 * If we fail to get a pointer to a shared
-				 * memory object (and increase the ref count)
-				 * from an identifier we return an error. All
-				 * pointers that has been added in params have
-				 * an increased ref count. It's the callers
-				 * responibility to do tee_shm_put() on all
-				 * resolved pointers.
-				 */
-				shm = tee_shm_get_from_id(ctx, ip.c);
-				if (IS_ERR(shm))
-					return PTR_ERR(shm);
-				/*
-				 * Ensure offset + size does not overflow
-				 * offset and does not overflow the size of
-				 * the referred shared memory object.
-				 */
-				if ((ip.a + ip.b) < ip.a ||
-				    (ip.a + ip.b) > shm->size) {
-					tee_shm_put(shm);
-					return -EINVAL;
-				}
-			} else if (ctx->cap_memref_null) {
-				/* Pass NULL pointer to OP-TEE */
-				shm = NULL;
-			} else {
-				return -EINVAL;
-			}
-			params[n].u.memref.shm_offs = ip.a;
-			params[n].u.memref.size = ip.b;
-			params[n].u.memref.shm = shm;
+			rc = param_from_user_memref(ctx, &params[n].u.memref,
+						    &ip);
+			if (rc)
+				return rc;
			break;
		default:
			/* Unknown attribute */
@@ -827,6 +889,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
		return tee_ioctl_shm_alloc(ctx, uarg);
	case TEE_IOC_SHM_REGISTER:
		return tee_ioctl_shm_register(ctx, uarg);
+	case TEE_IOC_SHM_REGISTER_FD:
+		return tee_ioctl_shm_register_fd(ctx, uarg);
	case TEE_IOC_OPEN_SESSION:
		return tee_ioctl_open_session(ctx, uarg);
	case TEE_IOC_INVOKE:
@@ -1288,3 +1352,4 @@
 MODULE_AUTHOR("Linaro");
 MODULE_DESCRIPTION("TEE Driver");
 MODULE_VERSION("1.0");
 MODULE_LICENSE("GPL v2");
+MODULE_IMPORT_NS("DMA_BUF");
diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h
index 6c6ff5d5eed2..aad7f6c7e0f0 100644
--- a/drivers/tee/tee_private.h
+++ b/drivers/tee/tee_private.h
@@ -24,6 +24,7 @@ void teedev_ctx_put(struct tee_context *ctx);
 struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size);
 struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx,
					   unsigned long addr, size_t length);
+struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs);
 int tee_heap_update_from_dma_buf(struct tee_device *teedev,
				 struct dma_buf *dmabuf, size_t *offset,
diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
index daf6e5cfd59a..8b79918468b5 100644
--- a/drivers/tee/tee_shm.c
+++ b/drivers/tee/tee_shm.c
@@ -4,6 +4,7 @@
 #include <linux/anon_inodes.h>
 #include <linux/device.h>
+#include <linux/dma-buf.h>
 #include <linux/idr.h>
 #include <linux/io.h>
 #include <linux/mm.h>
@@ -15,6 +16,16 @@
 #include <linux/highmem.h>
 #include "tee_private.h"
+/* extra references appended to shm object for registered shared memory */
+struct tee_shm_dmabuf_ref {
+	struct tee_shm shm;
+	size_t offset;
+	struct dma_buf *dmabuf;
+	struct dma_buf_attachment *attach;
+	struct sg_table *sgt;
+	struct tee_shm *parent_shm;
+};
static void shm_put_kernel_pages(struct page **pages, size_t page_count)
{
	size_t n;
@@ -45,7 +56,23 @@ static void release_registered_pages(struct tee_shm *shm)
static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)
{
-	if (shm->flags & TEE_SHM_POOL) {
+	struct tee_shm *parent_shm = NULL;
+	void *p = shm;
+
+	if (shm->flags & TEE_SHM_DMA_BUF) {
+		struct tee_shm_dmabuf_ref *ref;
+
+		ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
+		parent_shm = ref->parent_shm;
+		p = ref;
+		if (ref->attach) {
+			dma_buf_unmap_attachment(ref->attach, ref->sgt,
+						 DMA_BIDIRECTIONAL);
+			dma_buf_detach(ref->dmabuf, ref->attach);
+		}
+		dma_buf_put(ref->dmabuf);
+	} else if (shm->flags & TEE_SHM_POOL) {
		teedev->pool->ops->free(teedev->pool, shm);
	} else if (shm->flags & TEE_SHM_DYNAMIC) {
		int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
@@ -57,9 +84,10 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)
		release_registered_pages(shm);
	}
-	teedev_ctx_put(shm->ctx);
+	if (shm->ctx)
+		teedev_ctx_put(shm->ctx);
-	kfree(shm);
+	kfree(p);
	tee_device_put(teedev);
}
@@ -169,7 +197,7 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size)
 * tee_client_invoke_func(). The memory allocated is later freed with a
 * call to tee_shm_free().
 *
- * @returns a pointer to 'struct tee_shm'
+ * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
 */
struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
{
@@ -179,6 +207,116 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
}
EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf);
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd)
+{
struct tee_shm_dmabuf_ref *ref;
int rc;
if (!tee_device_get(ctx->teedev))
return ERR_PTR(-EINVAL);
teedev_ctx_get(ctx);
ref = kzalloc(sizeof(*ref), GFP_KERNEL);
if (!ref) {
rc = -ENOMEM;
goto err_put_tee;
}
refcount_set(&ref->shm.refcount, 1);
ref->shm.ctx = ctx;
ref->shm.id = -1;
ref->shm.flags = TEE_SHM_DMA_BUF;
ref->dmabuf = dma_buf_get(fd);
if (IS_ERR(ref->dmabuf)) {
rc = PTR_ERR(ref->dmabuf);
goto err_kfree_ref;
}
rc = tee_heap_update_from_dma_buf(ctx->teedev, ref->dmabuf,
&ref->offset, &ref->shm,
&ref->parent_shm);
if (!rc)
goto out;
if (rc != -EINVAL)
goto err_put_dmabuf;
ref->attach = dma_buf_attach(ref->dmabuf, &ctx->teedev->dev);
if (IS_ERR(ref->attach)) {
rc = PTR_ERR(ref->attach);
goto err_put_dmabuf;
}
ref->sgt = dma_buf_map_attachment(ref->attach, DMA_BIDIRECTIONAL);
if (IS_ERR(ref->sgt)) {
rc = PTR_ERR(ref->sgt);
goto err_detach;
}
if (sg_nents(ref->sgt->sgl) != 1) {
rc = PTR_ERR(ref->sgt->sgl);
goto err_unmap_attachement;
}
ref->shm.paddr = page_to_phys(sg_page(ref->sgt->sgl));
ref->shm.size = ref->sgt->sgl->length;
+out:
mutex_lock(&ref->shm.ctx->teedev->mutex);
ref->shm.id = idr_alloc(&ref->shm.ctx->teedev->idr, &ref->shm,
1, 0, GFP_KERNEL);
mutex_unlock(&ref->shm.ctx->teedev->mutex);
if (ref->shm.id < 0) {
rc = ref->shm.id;
if (ref->attach)
goto err_unmap_attachement;
goto err_put_dmabuf;
}
return &ref->shm;
+err_unmap_attachement:
dma_buf_unmap_attachment(ref->attach, ref->sgt, DMA_BIDIRECTIONAL);
+err_detach:
dma_buf_detach(ref->dmabuf, ref->attach);
+err_put_dmabuf:
dma_buf_put(ref->dmabuf);
+err_kfree_ref:
kfree(ref);
+err_put_tee:
teedev_ctx_put(ctx);
tee_device_put(ctx->teedev);
return ERR_PTR(rc);
+}
+EXPORT_SYMBOL_GPL(tee_shm_register_fd);
+
+struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs)
+{
struct tee_shm *parent_shm = NULL;
if (shm->flags & TEE_SHM_DMA_BUF) {
struct tee_shm_dmabuf_ref *ref;
ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
if (ref->parent_shm) {
			/*
			 * The shm already holds one reference to
			 * ref->parent_shm, so the refcount can't be zero
			 * here. We take another reference since the caller
			 * of this function is expected to put the returned
			 * parent_shm when it's done with it.
			 */
parent_shm = ref->parent_shm;
refcount_inc(&parent_shm->refcount);
*offs = ref->offset;
}
}
return parent_shm;
+}
/**
 * tee_shm_alloc_priv_buf() - Allocate shared memory for a privately shared
 *			      kernel buffer
diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h
index 16ef078247ae..6bd833b6d0e1 100644
--- a/include/linux/tee_core.h
+++ b/include/linux/tee_core.h
@@ -28,6 +28,7 @@
 #define TEE_SHM_USER_MAPPED	BIT(1)  /* Memory mapped in user space */
 #define TEE_SHM_POOL		BIT(2)  /* Memory allocated from pool */
 #define TEE_SHM_PRIV		BIT(3)  /* Memory private to TEE driver */
+#define TEE_SHM_DMA_BUF		BIT(4)  /* Memory with dma-buf handle */
 #define TEE_DEVICE_FLAG_REGISTERED	0x1
 #define TEE_MAX_DEV_NAME_LEN		32
diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
index a54c203000ed..824f1251de60 100644
--- a/include/linux/tee_drv.h
+++ b/include/linux/tee_drv.h
@@ -116,6 +116,16 @@
 struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size);
 struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx,
					     void *addr, size_t length);
+/**
+ * tee_shm_register_fd() - Register shared memory from file descriptor
+ * @ctx:	Context that allocates the shared memory
+ * @fd:		Shared memory file descriptor reference
+ *
+ * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
+ */
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd);
/**
 * tee_shm_free() - Free shared memory
 * @shm:	Handle to shared memory to free
diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h
index d0430bee8292..1f9a4ac2b211 100644
--- a/include/uapi/linux/tee.h
+++ b/include/uapi/linux/tee.h
@@ -118,6 +118,35 @@ struct tee_ioctl_shm_alloc_data {
 #define TEE_IOC_SHM_ALLOC	_IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 1, \
				      struct tee_ioctl_shm_alloc_data)
+/**
+ * struct tee_ioctl_shm_register_fd_data - Shared memory registering argument
+ * @fd:		[in] File descriptor identifying the shared memory
+ * @size:	[out] Size of shared memory to allocate
+ * @flags:	[in] Flags to/from allocation.
+ * @id:		[out] Identifier of the shared memory
+ *
+ * The flags field should currently be zero as input. Updated by the call
+ * with actual flags as defined by TEE_IOCTL_SHM_* above.
+ *
+ * This structure is used as argument for TEE_IOC_SHM_REGISTER_FD below.
+ */
+struct tee_ioctl_shm_register_fd_data {
__s64 fd;
__u64 size;
__u32 flags;
__s32 id;
+};
+/**
+ * TEE_IOC_SHM_REGISTER_FD - register a shared memory from a file descriptor
+ *
+ * Returns a file descriptor on success or < 0 on failure
+ *
+ * The returned file descriptor refers to the shared memory object in kernel
+ * land. The shared memory is freed when the descriptor is closed.
+ */
+#define TEE_IOC_SHM_REGISTER_FD _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 8, \
struct tee_ioctl_shm_register_fd_data)
/**
 * struct tee_ioctl_buf_data - Variable sized buffer
 * @buf_ptr:	[in] A __user pointer to a buffer
-- 2.43.0
On Tue, Mar 25, 2025 at 12:17:20PM +0100, Jens Wiklander wrote:
Hi Sumit,
On Tue, Mar 25, 2025 at 7:50 AM Sumit Garg sumit.garg@kernel.org wrote:
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:12PM +0100, Jens Wiklander wrote:
From: Etienne Carriere etienne.carriere@linaro.org
Enable userspace to create a tee_shm object that refers to a dmabuf reference.
Userspace registers the dmabuf file descriptor as in a tee_shm object. The registration is completed with a tee_shm file descriptor returned to userspace.
Userspace is free to close the dmabuf file descriptor now since all the resources are now held via the tee_shm object.
Closing the tee_shm file descriptor will release all resources used by the tee_shm object.
This change only support dmabuf references that relates to physically contiguous memory buffers.
Let's try to reframe this commit message to say that the new ioctl allows registering DMA buffers allocated from restricted/protected heaps with the TEE subsystem.
New tee_shm flag to identify tee_shm objects built from a registered dmabuf, TEE_SHM_DMA_BUF.
Signed-off-by: Etienne Carriere etienne.carriere@linaro.org Signed-off-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/tee_core.c | 145 ++++++++++++++++++++++++++----------- drivers/tee/tee_private.h | 1 + drivers/tee/tee_shm.c | 146 ++++++++++++++++++++++++++++++++++++-- include/linux/tee_core.h | 1 + include/linux/tee_drv.h | 10 +++ include/uapi/linux/tee.h | 29 ++++++++ 6 files changed, 288 insertions(+), 44 deletions(-)
I am still trying to find if we really need a separate IOCTL to register DMA heap with TEE subsystem. Can't we initialize tee_shm as a member of struct tee_heap_buffer in tee_dma_heap_alloc() where the allocation happens?
No, that's not possible since we don't have a tee_context available, so we can't assign an ID that userspace can use for this shm object.
We could add new attribute types TEE_IOCTL_PARAM_ATTR_TYPE_MEMFD_*, but it's a bit more complicated than that since we'd also need to update the life-cycle handling.
Okay, that's fair enough. Let's take this new IOCTL approach then.
We can always find a reference back to tee_shm object from DMA buffer.
Yes
Cheers, Jens
-Sumit
diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c index 685afcaa3ea1..3a71643766d5 100644 --- a/drivers/tee/tee_core.c +++ b/drivers/tee/tee_core.c @@ -353,6 +353,103 @@ tee_ioctl_shm_register(struct tee_context *ctx, return ret; }
+static int
+tee_ioctl_shm_register_fd(struct tee_context *ctx,
struct tee_ioctl_shm_register_fd_data __user *udata)
+{
struct tee_ioctl_shm_register_fd_data data;
struct tee_shm *shm;
long ret;
if (copy_from_user(&data, udata, sizeof(data)))
return -EFAULT;
/* Currently no input flags are supported */
if (data.flags)
return -EINVAL;
shm = tee_shm_register_fd(ctx, data.fd);
if (IS_ERR(shm))
return -EINVAL;
data.id = shm->id;
data.flags = shm->flags;
data.size = shm->size;
if (copy_to_user(udata, &data, sizeof(data)))
ret = -EFAULT;
else
ret = tee_shm_get_fd(shm);
/*
* When user space closes the file descriptor the shared memory
* should be freed or if tee_shm_get_fd() failed then it will
* be freed immediately.
*/
tee_shm_put(shm);
return ret;
+}
+static int param_from_user_memref(struct tee_context *ctx,
struct tee_param_memref *memref,
struct tee_ioctl_param *ip)
+{
struct tee_shm *shm;
size_t offs = 0;
/*
* If a NULL pointer is passed to a TA in the TEE,
* the ip.c IOCTL parameters is set to TEE_MEMREF_NULL
* indicating a NULL memory reference.
*/
if (ip->c != TEE_MEMREF_NULL) {
/*
* If we fail to get a pointer to a shared
* memory object (and increase the ref count)
* from an identifier we return an error. All
* pointers that has been added in params have
* an increased ref count. It's the callers
* responibility to do tee_shm_put() on all
* resolved pointers.
*/
shm = tee_shm_get_from_id(ctx, ip->c);
if (IS_ERR(shm))
return PTR_ERR(shm);
/*
* Ensure offset + size does not overflow
* offset and does not overflow the size of
* the referred shared memory object.
*/
if ((ip->a + ip->b) < ip->a ||
(ip->a + ip->b) > shm->size) {
tee_shm_put(shm);
return -EINVAL;
}
if (shm->flags & TEE_SHM_DMA_BUF) {
This check is already done within tee_shm_get_parent_shm(), is it redundant here?
struct tee_shm *parent_shm;
parent_shm = tee_shm_get_parent_shm(shm, &offs);
if (parent_shm) {
tee_shm_put(shm);
shm = parent_shm;
}
}
} else if (ctx->cap_memref_null) {
/* Pass NULL pointer to OP-TEE */
shm = NULL;
} else {
return -EINVAL;
}
memref->shm_offs = ip->a + offs;
memref->size = ip->b;
memref->shm = shm;
return 0;
+}
static int params_from_user(struct tee_context *ctx, struct tee_param *params,
			    size_t num_params,
			    struct tee_ioctl_param __user *uparams)
@@ -360,8 +457,8 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params,
	size_t n;
	for (n = 0; n < num_params; n++) {
-		struct tee_shm *shm;
		struct tee_ioctl_param ip;
+		int rc;
		if (copy_from_user(&ip, uparams + n, sizeof(ip)))
			return -EFAULT;
@@ -384,45 +481,10 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params,
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
		case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
/*
* If a NULL pointer is passed to a TA in the TEE,
* the ip.c IOCTL parameters is set to TEE_MEMREF_NULL
* indicating a NULL memory reference.
*/
if (ip.c != TEE_MEMREF_NULL) {
/*
* If we fail to get a pointer to a shared
* memory object (and increase the ref count)
* from an identifier we return an error. All
* pointers that has been added in params have
* an increased ref count. It's the callers
* responibility to do tee_shm_put() on all
* resolved pointers.
*/
shm = tee_shm_get_from_id(ctx, ip.c);
if (IS_ERR(shm))
return PTR_ERR(shm);
/*
* Ensure offset + size does not overflow
* offset and does not overflow the size of
* the referred shared memory object.
*/
if ((ip.a + ip.b) < ip.a ||
(ip.a + ip.b) > shm->size) {
tee_shm_put(shm);
return -EINVAL;
}
} else if (ctx->cap_memref_null) {
/* Pass NULL pointer to OP-TEE */
shm = NULL;
} else {
return -EINVAL;
}
params[n].u.memref.shm_offs = ip.a;
params[n].u.memref.size = ip.b;
params[n].u.memref.shm = shm;
rc = param_from_user_memref(ctx, ¶ms[n].u.memref,
&ip);
if (rc)
return rc;
This looks more of a refactoring, can we split it as a separate commit?
break; default: /* Unknown attribute */
@@ -827,6 +889,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) return tee_ioctl_shm_alloc(ctx, uarg); case TEE_IOC_SHM_REGISTER: return tee_ioctl_shm_register(ctx, uarg);
case TEE_IOC_SHM_REGISTER_FD:
return tee_ioctl_shm_register_fd(ctx, uarg); case TEE_IOC_OPEN_SESSION: return tee_ioctl_open_session(ctx, uarg); case TEE_IOC_INVOKE:
@@ -1288,3 +1352,4 @@ MODULE_AUTHOR("Linaro"); MODULE_DESCRIPTION("TEE Driver"); MODULE_VERSION("1.0"); MODULE_LICENSE("GPL v2"); +MODULE_IMPORT_NS("DMA_BUF"); diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h index 6c6ff5d5eed2..aad7f6c7e0f0 100644 --- a/drivers/tee/tee_private.h +++ b/drivers/tee/tee_private.h @@ -24,6 +24,7 @@ void teedev_ctx_put(struct tee_context *ctx); struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, unsigned long addr, size_t length); +struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs);
int tee_heap_update_from_dma_buf(struct tee_device *teedev, struct dma_buf *dmabuf, size_t *offset, diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c index daf6e5cfd59a..8b79918468b5 100644 --- a/drivers/tee/tee_shm.c +++ b/drivers/tee/tee_shm.c @@ -4,6 +4,7 @@ */ #include <linux/anon_inodes.h> #include <linux/device.h> +#include <linux/dma-buf.h> #include <linux/idr.h> #include <linux/io.h> #include <linux/mm.h> @@ -15,6 +16,16 @@ #include <linux/highmem.h> #include "tee_private.h"
+/* extra references appended to shm object for registered shared memory */ +struct tee_shm_dmabuf_ref {
struct tee_shm shm;
size_t offset;
struct dma_buf *dmabuf;
struct dma_buf_attachment *attach;
struct sg_table *sgt;
struct tee_shm *parent_shm;
+};
static void shm_put_kernel_pages(struct page **pages, size_t page_count) { size_t n; @@ -45,7 +56,23 @@ static void release_registered_pages(struct tee_shm *shm)
static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) {
if (shm->flags & TEE_SHM_POOL) {
struct tee_shm *parent_shm = NULL;
void *p = shm;
if (shm->flags & TEE_SHM_DMA_BUF) {
struct tee_shm_dmabuf_ref *ref;
ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
parent_shm = ref->parent_shm;
p = ref;
if (ref->attach) {
dma_buf_unmap_attachment(ref->attach, ref->sgt,
DMA_BIDIRECTIONAL);
dma_buf_detach(ref->dmabuf, ref->attach);
}
dma_buf_put(ref->dmabuf);
} else if (shm->flags & TEE_SHM_POOL) { teedev->pool->ops->free(teedev->pool, shm); } else if (shm->flags & TEE_SHM_DYNAMIC) { int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
@@ -57,9 +84,10 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) release_registered_pages(shm); }
teedev_ctx_put(shm->ctx);
if (shm->ctx)
teedev_ctx_put(shm->ctx);
kfree(shm);
kfree(p); tee_device_put(teedev);
} @@ -169,7 +197,7 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size)
- tee_client_invoke_func(). The memory allocated is later freed with a
- call to tee_shm_free().
- @returns a pointer to 'struct tee_shm'
*/
- @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size) { @@ -179,6 +207,116 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size) } EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf);
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd) +{
struct tee_shm_dmabuf_ref *ref;
int rc;
if (!tee_device_get(ctx->teedev))
return ERR_PTR(-EINVAL);
teedev_ctx_get(ctx);
ref = kzalloc(sizeof(*ref), GFP_KERNEL);
if (!ref) {
rc = -ENOMEM;
goto err_put_tee;
}
refcount_set(&ref->shm.refcount, 1);
ref->shm.ctx = ctx;
ref->shm.id = -1;
ref->shm.flags = TEE_SHM_DMA_BUF;
ref->dmabuf = dma_buf_get(fd);
if (IS_ERR(ref->dmabuf)) {
rc = PTR_ERR(ref->dmabuf);
goto err_kfree_ref;
}
rc = tee_heap_update_from_dma_buf(ctx->teedev, ref->dmabuf,
&ref->offset, &ref->shm,
&ref->parent_shm);
if (!rc)
goto out;
if (rc != -EINVAL)
goto err_put_dmabuf;
ref->attach = dma_buf_attach(ref->dmabuf, &ctx->teedev->dev);
if (IS_ERR(ref->attach)) {
rc = PTR_ERR(ref->attach);
goto err_put_dmabuf;
}
ref->sgt = dma_buf_map_attachment(ref->attach, DMA_BIDIRECTIONAL);
if (IS_ERR(ref->sgt)) {
rc = PTR_ERR(ref->sgt);
goto err_detach;
}
if (sg_nents(ref->sgt->sgl) != 1) {
rc = PTR_ERR(ref->sgt->sgl);
goto err_unmap_attachement;
}
ref->shm.paddr = page_to_phys(sg_page(ref->sgt->sgl));
ref->shm.size = ref->sgt->sgl->length;
+out:
mutex_lock(&ref->shm.ctx->teedev->mutex);
ref->shm.id = idr_alloc(&ref->shm.ctx->teedev->idr, &ref->shm,
1, 0, GFP_KERNEL);
mutex_unlock(&ref->shm.ctx->teedev->mutex);
if (ref->shm.id < 0) {
rc = ref->shm.id;
if (ref->attach)
goto err_unmap_attachement;
goto err_put_dmabuf;
}
return &ref->shm;
+err_unmap_attachement:
dma_buf_unmap_attachment(ref->attach, ref->sgt, DMA_BIDIRECTIONAL);
+err_detach:
dma_buf_detach(ref->dmabuf, ref->attach);
+err_put_dmabuf:
dma_buf_put(ref->dmabuf);
+err_kfree_ref:
kfree(ref);
+err_put_tee:
teedev_ctx_put(ctx);
tee_device_put(ctx->teedev);
return ERR_PTR(rc);
+} +EXPORT_SYMBOL_GPL(tee_shm_register_fd);
+struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs) +{
struct tee_shm *parent_shm = NULL;
if (shm->flags & TEE_SHM_DMA_BUF) {
struct tee_shm_dmabuf_ref *ref;
ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
if (ref->parent_shm) {
/*
* the shm already has one reference to
* ref->parent_shm so we should be clear of 0.
* We're getting another reference since the caller
* of this function expects to put the returned
* parent_shm when it's done with it.
This seems a bit complicated, can we rather inline this API as I can't see any other current user?
-Sumit
*/
parent_shm = ref->parent_shm;
refcount_inc(&parent_shm->refcount);
*offs = ref->offset;
}
}
return parent_shm;
+}
/**
- tee_shm_alloc_priv_buf() - Allocate shared memory for a privately shared
kernel buffer
diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index 16ef078247ae..6bd833b6d0e1 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -28,6 +28,7 @@ #define TEE_SHM_USER_MAPPED BIT(1) /* Memory mapped in user space */ #define TEE_SHM_POOL BIT(2) /* Memory allocated from pool */ #define TEE_SHM_PRIV BIT(3) /* Memory private to TEE driver */ +#define TEE_SHM_DMA_BUF BIT(4) /* Memory with dma-buf handle */
#define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h index a54c203000ed..824f1251de60 100644 --- a/include/linux/tee_drv.h +++ b/include/linux/tee_drv.h @@ -116,6 +116,16 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx, void *addr, size_t length);
+/**
- tee_shm_register_fd() - Register shared memory from file descriptor
- @ctx: Context that allocates the shared memory
- @fd: Shared memory file descriptor reference
- @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
- */
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd);
/**
- tee_shm_free() - Free shared memory
- @shm: Handle to shared memory to free
diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h index d0430bee8292..1f9a4ac2b211 100644 --- a/include/uapi/linux/tee.h +++ b/include/uapi/linux/tee.h @@ -118,6 +118,35 @@ struct tee_ioctl_shm_alloc_data { #define TEE_IOC_SHM_ALLOC _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 1, \ struct tee_ioctl_shm_alloc_data)
+/**
- struct tee_ioctl_shm_register_fd_data - Shared memory registering argument
- @fd: [in] File descriptor identifying the shared memory
- @size: [out] Size of shared memory to allocate
- @flags: [in] Flags to/from allocation.
- @id: [out] Identifier of the shared memory
- The flags field should currently be zero as input. Updated by the call
- with actual flags as defined by TEE_IOCTL_SHM_* above.
- This structure is used as argument for TEE_IOC_SHM_REGISTER_FD below.
- */
+struct tee_ioctl_shm_register_fd_data {
__s64 fd;
__u64 size;
__u32 flags;
__s32 id;
+};
+/**
- TEE_IOC_SHM_REGISTER_FD - register a shared memory from a file descriptor
- Returns a file descriptor on success or < 0 on failure
- The returned file descriptor refers to the shared memory object in kernel
- land. The shared memory is freed when the descriptor is closed.
- */
+#define TEE_IOC_SHM_REGISTER_FD _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 8, \
struct tee_ioctl_shm_register_fd_data)
/**
- struct tee_ioctl_buf_data - Variable sized buffer
- @buf_ptr: [in] A __user pointer to a buffer
-- 2.43.0
On Tue, Apr 1, 2025 at 10:46 AM Sumit Garg sumit.garg@kernel.org wrote:
On Tue, Mar 25, 2025 at 12:17:20PM +0100, Jens Wiklander wrote:
Hi Sumit,
On Tue, Mar 25, 2025 at 7:50 AM Sumit Garg sumit.garg@kernel.org wrote:
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:12PM +0100, Jens Wiklander wrote:
From: Etienne Carriere etienne.carriere@linaro.org
Enable userspace to create a tee_shm object that refers to a dmabuf reference.
Userspace registers the dmabuf file descriptor into a tee_shm object. The registration is completed with a tee_shm file descriptor returned to userspace.
Userspace is then free to close the dmabuf file descriptor, since all the resources are held via the tee_shm object.
Closing the tee_shm file descriptor will release all resources used by the tee_shm object.
This change only supports dmabuf references that relate to physically contiguous memory buffers.
Let's try to reframe this commit message to say that the new ioctl allows registering DMA buffers allocated from restricted/protected heaps with the TEE subsystem.
A new tee_shm flag, TEE_SHM_DMA_BUF, identifies tee_shm objects built from a registered dmabuf.
Signed-off-by: Etienne Carriere etienne.carriere@linaro.org Signed-off-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/tee_core.c | 145 ++++++++++++++++++++++++++----------- drivers/tee/tee_private.h | 1 + drivers/tee/tee_shm.c | 146 ++++++++++++++++++++++++++++++++++++-- include/linux/tee_core.h | 1 + include/linux/tee_drv.h | 10 +++ include/uapi/linux/tee.h | 29 ++++++++ 6 files changed, 288 insertions(+), 44 deletions(-)
I am still trying to work out whether we really need a separate IOCTL to register a DMA heap buffer with the TEE subsystem. Can't we initialize tee_shm as a member of struct tee_heap_buffer in tee_dma_heap_alloc() where the allocation happens?
No, that's not possible since we don't have a tee_context available, so we can't assign an ID that userspace can use for this shm object.
We could add new attribute types TEE_IOCTL_PARAM_ATTR_TYPE_MEMFD_*, but it's a bit more complicated than that since we'd also need to update the life-cycle handling.
Okay, that's fair enough. Lets take this new IOCTL approach then.
We can always find a reference back to the tee_shm object from the DMA buffer.
Yes
Cheers, Jens
-Sumit
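Stepping back from the diff for a moment, the agreed flow — register an existing dmabuf fd, get back a tee_shm id and file descriptor, then close the dmabuf fd — would look roughly like this from userspace. The struct layout and ioctl number below are mirrored from this patch and the existing include/uapi/linux/tee.h (where TEE_IOC_MAGIC is 0xa4 and TEE_IOC_BASE is 0); the helper itself is an untested sketch, not part of the series:

```c
#include <assert.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* Mirrors struct tee_ioctl_shm_register_fd_data from this patch. */
struct tee_ioctl_shm_register_fd_data {
	int64_t fd;	/* [in] dmabuf file descriptor */
	uint64_t size;	/* [out] size of the shared memory */
	uint32_t flags;	/* [in] must be zero, updated on return */
	int32_t id;	/* [out] shared memory identifier */
};

#define TEE_IOC_SHM_REGISTER_FD \
	_IOWR(0xa4, 8, struct tee_ioctl_shm_register_fd_data)

/*
 * Register a dmabuf with the TEE and return the tee_shm file
 * descriptor (or -1 on error). After a successful call the caller may
 * close dmabuf_fd: the tee_shm object holds its own reference.
 */
int register_dmabuf_with_tee(int tee_fd, int dmabuf_fd, int32_t *shm_id)
{
	struct tee_ioctl_shm_register_fd_data data = { .fd = dmabuf_fd };
	int shm_fd = ioctl(tee_fd, TEE_IOC_SHM_REGISTER_FD, &data);

	if (shm_fd < 0)
		return -1;
	*shm_id = data.id;
	return shm_fd;
}
```

The returned id can then be passed in a TEE_IOC_INVOKE memref parameter like any other tee_shm identifier.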
diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c index 685afcaa3ea1..3a71643766d5 100644 --- a/drivers/tee/tee_core.c +++ b/drivers/tee/tee_core.c @@ -353,6 +353,103 @@ tee_ioctl_shm_register(struct tee_context *ctx, return ret; }
+static int +tee_ioctl_shm_register_fd(struct tee_context *ctx,
struct tee_ioctl_shm_register_fd_data __user *udata)
+{
struct tee_ioctl_shm_register_fd_data data;
struct tee_shm *shm;
long ret;
if (copy_from_user(&data, udata, sizeof(data)))
return -EFAULT;
/* Currently no input flags are supported */
if (data.flags)
return -EINVAL;
shm = tee_shm_register_fd(ctx, data.fd);
if (IS_ERR(shm))
return -EINVAL;
data.id = shm->id;
data.flags = shm->flags;
data.size = shm->size;
if (copy_to_user(udata, &data, sizeof(data)))
ret = -EFAULT;
else
ret = tee_shm_get_fd(shm);
/*
* When user space closes the file descriptor the shared memory
* should be freed or if tee_shm_get_fd() failed then it will
* be freed immediately.
*/
tee_shm_put(shm);
return ret;
+}
+static int param_from_user_memref(struct tee_context *ctx,
struct tee_param_memref *memref,
struct tee_ioctl_param *ip)
+{
struct tee_shm *shm;
size_t offs = 0;
/*
* If a NULL pointer is passed to a TA in the TEE,
* the ip->c IOCTL parameter is set to TEE_MEMREF_NULL
* indicating a NULL memory reference.
*/
if (ip->c != TEE_MEMREF_NULL) {
/*
* If we fail to get a pointer to a shared
* memory object (and increase the ref count)
* from an identifier we return an error. All
* pointers that have been added to params have
* an increased ref count. It's the caller's
* responsibility to do tee_shm_put() on all
* resolved pointers.
*/
shm = tee_shm_get_from_id(ctx, ip->c);
if (IS_ERR(shm))
return PTR_ERR(shm);
/*
* Ensure offset + size does not overflow
* and does not exceed the size of the
* referred shared memory object.
*/
if ((ip->a + ip->b) < ip->a ||
(ip->a + ip->b) > shm->size) {
tee_shm_put(shm);
return -EINVAL;
}
if (shm->flags & TEE_SHM_DMA_BUF) {
This check is already done within tee_shm_get_parent_shm(), is it redundant here?
Yes, I'll remove it.
struct tee_shm *parent_shm;
parent_shm = tee_shm_get_parent_shm(shm, &offs);
if (parent_shm) {
tee_shm_put(shm);
shm = parent_shm;
}
}
} else if (ctx->cap_memref_null) {
/* Pass NULL pointer to OP-TEE */
shm = NULL;
} else {
return -EINVAL;
}
memref->shm_offs = ip->a + offs;
memref->size = ip->b;
memref->shm = shm;
return 0;
+}
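As a standalone illustration of the bounds check in param_from_user_memref() above — the range is rejected when offset + size wraps around, or when it extends past the shared-memory object (plain userspace C, hypothetical helper name):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Mirrors the memref validation above: with unsigned 64-bit
 * arithmetic, (offs + size) < offs detects wrap-around, and the sum
 * must stay within the shm object's size.
 */
int memref_range_ok(uint64_t offs, uint64_t size, uint64_t shm_size)
{
	if (offs + size < offs)		/* offset + size wrapped around */
		return 0;
	if (offs + size > shm_size)	/* past the end of the object */
		return 0;
	return 1;
}
```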
static int params_from_user(struct tee_context *ctx, struct tee_param *params, size_t num_params, struct tee_ioctl_param __user *uparams) @@ -360,8 +457,8 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params, size_t n;
for (n = 0; n < num_params; n++) {
struct tee_shm *shm; struct tee_ioctl_param ip;
int rc; if (copy_from_user(&ip, uparams + n, sizeof(ip))) return -EFAULT;
@@ -384,45 +481,10 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params, case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
/*
* If a NULL pointer is passed to a TA in the TEE,
* the ip.c IOCTL parameters is set to TEE_MEMREF_NULL
* indicating a NULL memory reference.
*/
if (ip.c != TEE_MEMREF_NULL) {
/*
* If we fail to get a pointer to a shared
* memory object (and increase the ref count)
* from an identifier we return an error. All
* pointers that has been added in params have
* an increased ref count. It's the callers
* responibility to do tee_shm_put() on all
* resolved pointers.
*/
shm = tee_shm_get_from_id(ctx, ip.c);
if (IS_ERR(shm))
return PTR_ERR(shm);
/*
* Ensure offset + size does not overflow
* offset and does not overflow the size of
* the referred shared memory object.
*/
if ((ip.a + ip.b) < ip.a ||
(ip.a + ip.b) > shm->size) {
tee_shm_put(shm);
return -EINVAL;
}
} else if (ctx->cap_memref_null) {
/* Pass NULL pointer to OP-TEE */
shm = NULL;
} else {
return -EINVAL;
}
params[n].u.memref.shm_offs = ip.a;
params[n].u.memref.size = ip.b;
params[n].u.memref.shm = shm;
rc = param_from_user_memref(ctx, &params[n].u.memref,
&ip);
if (rc)
return rc;
This looks more like a refactoring; can we split it out as a separate commit?
Sure
break; default: /* Unknown attribute */
@@ -827,6 +889,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) return tee_ioctl_shm_alloc(ctx, uarg); case TEE_IOC_SHM_REGISTER: return tee_ioctl_shm_register(ctx, uarg);
case TEE_IOC_SHM_REGISTER_FD:
return tee_ioctl_shm_register_fd(ctx, uarg); case TEE_IOC_OPEN_SESSION: return tee_ioctl_open_session(ctx, uarg); case TEE_IOC_INVOKE:
@@ -1288,3 +1352,4 @@ MODULE_AUTHOR("Linaro"); MODULE_DESCRIPTION("TEE Driver"); MODULE_VERSION("1.0"); MODULE_LICENSE("GPL v2"); +MODULE_IMPORT_NS("DMA_BUF"); diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h index 6c6ff5d5eed2..aad7f6c7e0f0 100644 --- a/drivers/tee/tee_private.h +++ b/drivers/tee/tee_private.h @@ -24,6 +24,7 @@ void teedev_ctx_put(struct tee_context *ctx); struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, unsigned long addr, size_t length); +struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs);
int tee_heap_update_from_dma_buf(struct tee_device *teedev, struct dma_buf *dmabuf, size_t *offset, diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c index daf6e5cfd59a..8b79918468b5 100644 --- a/drivers/tee/tee_shm.c +++ b/drivers/tee/tee_shm.c @@ -4,6 +4,7 @@ */ #include <linux/anon_inodes.h> #include <linux/device.h> +#include <linux/dma-buf.h> #include <linux/idr.h> #include <linux/io.h> #include <linux/mm.h> @@ -15,6 +16,16 @@ #include <linux/highmem.h> #include "tee_private.h"
+/* extra references appended to shm object for registered shared memory */ +struct tee_shm_dmabuf_ref {
struct tee_shm shm;
size_t offset;
struct dma_buf *dmabuf;
struct dma_buf_attachment *attach;
struct sg_table *sgt;
struct tee_shm *parent_shm;
+};
static void shm_put_kernel_pages(struct page **pages, size_t page_count) { size_t n; @@ -45,7 +56,23 @@ static void release_registered_pages(struct tee_shm *shm)
static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) {
if (shm->flags & TEE_SHM_POOL) {
struct tee_shm *parent_shm = NULL;
void *p = shm;
if (shm->flags & TEE_SHM_DMA_BUF) {
struct tee_shm_dmabuf_ref *ref;
ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
parent_shm = ref->parent_shm;
p = ref;
if (ref->attach) {
dma_buf_unmap_attachment(ref->attach, ref->sgt,
DMA_BIDIRECTIONAL);
dma_buf_detach(ref->dmabuf, ref->attach);
}
dma_buf_put(ref->dmabuf);
} else if (shm->flags & TEE_SHM_POOL) { teedev->pool->ops->free(teedev->pool, shm); } else if (shm->flags & TEE_SHM_DYNAMIC) { int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
@@ -57,9 +84,10 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) release_registered_pages(shm); }
teedev_ctx_put(shm->ctx);
if (shm->ctx)
teedev_ctx_put(shm->ctx);
kfree(shm);
kfree(p); tee_device_put(teedev);
} @@ -169,7 +197,7 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size)
- tee_client_invoke_func(). The memory allocated is later freed with a
- call to tee_shm_free().
- @returns a pointer to 'struct tee_shm'
*/
- @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size) { @@ -179,6 +207,116 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size) } EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf);
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd) +{
struct tee_shm_dmabuf_ref *ref;
int rc;
if (!tee_device_get(ctx->teedev))
return ERR_PTR(-EINVAL);
teedev_ctx_get(ctx);
ref = kzalloc(sizeof(*ref), GFP_KERNEL);
if (!ref) {
rc = -ENOMEM;
goto err_put_tee;
}
refcount_set(&ref->shm.refcount, 1);
ref->shm.ctx = ctx;
ref->shm.id = -1;
ref->shm.flags = TEE_SHM_DMA_BUF;
ref->dmabuf = dma_buf_get(fd);
if (IS_ERR(ref->dmabuf)) {
rc = PTR_ERR(ref->dmabuf);
goto err_kfree_ref;
}
rc = tee_heap_update_from_dma_buf(ctx->teedev, ref->dmabuf,
&ref->offset, &ref->shm,
&ref->parent_shm);
if (!rc)
goto out;
if (rc != -EINVAL)
goto err_put_dmabuf;
ref->attach = dma_buf_attach(ref->dmabuf, &ctx->teedev->dev);
if (IS_ERR(ref->attach)) {
rc = PTR_ERR(ref->attach);
goto err_put_dmabuf;
}
ref->sgt = dma_buf_map_attachment(ref->attach, DMA_BIDIRECTIONAL);
if (IS_ERR(ref->sgt)) {
rc = PTR_ERR(ref->sgt);
goto err_detach;
}
if (sg_nents(ref->sgt->sgl) != 1) {
rc = -EINVAL;
goto err_unmap_attachement;
}
ref->shm.paddr = page_to_phys(sg_page(ref->sgt->sgl));
ref->shm.size = ref->sgt->sgl->length;
+out:
mutex_lock(&ref->shm.ctx->teedev->mutex);
ref->shm.id = idr_alloc(&ref->shm.ctx->teedev->idr, &ref->shm,
1, 0, GFP_KERNEL);
mutex_unlock(&ref->shm.ctx->teedev->mutex);
if (ref->shm.id < 0) {
rc = ref->shm.id;
if (ref->attach)
goto err_unmap_attachement;
goto err_put_dmabuf;
}
return &ref->shm;
+err_unmap_attachement:
dma_buf_unmap_attachment(ref->attach, ref->sgt, DMA_BIDIRECTIONAL);
+err_detach:
dma_buf_detach(ref->dmabuf, ref->attach);
+err_put_dmabuf:
dma_buf_put(ref->dmabuf);
+err_kfree_ref:
kfree(ref);
+err_put_tee:
teedev_ctx_put(ctx);
tee_device_put(ctx->teedev);
return ERR_PTR(rc);
+} +EXPORT_SYMBOL_GPL(tee_shm_register_fd);
+struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs) +{
struct tee_shm *parent_shm = NULL;
if (shm->flags & TEE_SHM_DMA_BUF) {
struct tee_shm_dmabuf_ref *ref;
ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
if (ref->parent_shm) {
/*
* The shm already holds one reference to
* ref->parent_shm, so the refcount can't be zero here.
* We take another reference since the caller
* of this function is expected to put the returned
* parent_shm when it's done with it.
This seems a bit complicated; can we rather inline this API, as I can't see any other current user?
OK
Cheers, Jens
-Sumit
*/
parent_shm = ref->parent_shm;
refcount_inc(&parent_shm->refcount);
*offs = ref->offset;
}
}
return parent_shm;
+}
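The get/put contract above — tee_shm_get_parent_shm() takes an extra reference on parent_shm, and the caller is expected to put it — reduces to the usual refcounting pattern. A minimal standalone sketch (C11 atomics standing in for the kernel's refcount_t; not kernel code):

```c
#include <assert.h>
#include <stdatomic.h>

/*
 * Minimal stand-in for the parent-shm reference pattern: whoever
 * hands an object to a caller bumps its refcount, and the caller
 * drops that reference when done.
 */
struct obj { atomic_int refcount; };

struct obj *obj_get(struct obj *o)
{
	/* Safe: the holder we borrow from already owns a reference,
	 * so the count is known non-zero here. */
	atomic_fetch_add(&o->refcount, 1);
	return o;
}

int obj_put(struct obj *o)
{
	/* Returns 1 when the last reference is dropped. */
	return atomic_fetch_sub(&o->refcount, 1) == 1;
}
```

In the kernel code the same guarantee holds because shm itself already pins ref->parent_shm, so the count cannot reach zero while it is incremented.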
/**
- tee_shm_alloc_priv_buf() - Allocate shared memory for a privately shared
kernel buffer
diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index 16ef078247ae..6bd833b6d0e1 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -28,6 +28,7 @@ #define TEE_SHM_USER_MAPPED BIT(1) /* Memory mapped in user space */ #define TEE_SHM_POOL BIT(2) /* Memory allocated from pool */ #define TEE_SHM_PRIV BIT(3) /* Memory private to TEE driver */ +#define TEE_SHM_DMA_BUF BIT(4) /* Memory with dma-buf handle */
#define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h index a54c203000ed..824f1251de60 100644 --- a/include/linux/tee_drv.h +++ b/include/linux/tee_drv.h @@ -116,6 +116,16 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx, void *addr, size_t length);
+/**
- tee_shm_register_fd() - Register shared memory from file descriptor
- @ctx: Context that allocates the shared memory
- @fd: Shared memory file descriptor reference
- @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
- */
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd);
/**
- tee_shm_free() - Free shared memory
- @shm: Handle to shared memory to free
diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h index d0430bee8292..1f9a4ac2b211 100644 --- a/include/uapi/linux/tee.h +++ b/include/uapi/linux/tee.h @@ -118,6 +118,35 @@ struct tee_ioctl_shm_alloc_data { #define TEE_IOC_SHM_ALLOC _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 1, \ struct tee_ioctl_shm_alloc_data)
+/**
- struct tee_ioctl_shm_register_fd_data - Shared memory registering argument
- @fd: [in] File descriptor identifying the shared memory
- @size: [out] Size of shared memory to allocate
- @flags: [in] Flags to/from allocation.
- @id: [out] Identifier of the shared memory
- The flags field should currently be zero as input. Updated by the call
- with actual flags as defined by TEE_IOCTL_SHM_* above.
- This structure is used as argument for TEE_IOC_SHM_REGISTER_FD below.
- */
+struct tee_ioctl_shm_register_fd_data {
__s64 fd;
__u64 size;
__u32 flags;
__s32 id;
+};
+/**
- TEE_IOC_SHM_REGISTER_FD - register a shared memory from a file descriptor
- Returns a file descriptor on success or < 0 on failure
- The returned file descriptor refers to the shared memory object in kernel
- land. The shared memory is freed when the descriptor is closed.
- */
+#define TEE_IOC_SHM_REGISTER_FD _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 8, \
struct tee_ioctl_shm_register_fd_data)
/**
- struct tee_ioctl_buf_data - Variable sized buffer
- @buf_ptr: [in] A __user pointer to a buffer
-- 2.43.0
Add tee_shm_alloc_cma_phys_mem() to allocate physically contiguous memory from the default CMA pool. The memory is represented by a tee_shm object using the new flag TEE_SHM_CMA_BUF to identify it as physical memory from CMA.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org --- drivers/tee/tee_shm.c | 55 ++++++++++++++++++++++++++++++++++++++-- include/linux/tee_core.h | 4 +++ 2 files changed, 57 insertions(+), 2 deletions(-)
diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c index 8b79918468b5..8d8341f8ebd7 100644 --- a/drivers/tee/tee_shm.c +++ b/drivers/tee/tee_shm.c @@ -3,8 +3,11 @@ * Copyright (c) 2015-2017, 2019-2021 Linaro Limited */ #include <linux/anon_inodes.h> +#include <linux/cma.h> #include <linux/device.h> #include <linux/dma-buf.h> +#include <linux/dma-map-ops.h> +#include <linux/highmem.h> #include <linux/idr.h> #include <linux/io.h> #include <linux/mm.h> @@ -13,7 +16,6 @@ #include <linux/tee_core.h> #include <linux/uaccess.h> #include <linux/uio.h> -#include <linux/highmem.h> #include "tee_private.h"
/* extra references appended to shm object for registered shared memory */ @@ -59,7 +61,14 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) struct tee_shm *parent_shm = NULL; void *p = shm;
- if (shm->flags & TEE_SHM_DMA_BUF) { + if (shm->flags & TEE_SHM_CMA_BUF) { +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_CMA) + struct page *page = phys_to_page(shm->paddr); + struct cma *cma = dev_get_cma_area(&shm->ctx->teedev->dev); + + cma_release(cma, page, shm->size / PAGE_SIZE); +#endif + } else if (shm->flags & TEE_SHM_DMA_BUF) { struct tee_shm_dmabuf_ref *ref;
ref = container_of(shm, struct tee_shm_dmabuf_ref, shm); @@ -341,6 +350,48 @@ struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size) } EXPORT_SYMBOL_GPL(tee_shm_alloc_priv_buf);
+struct tee_shm *tee_shm_alloc_cma_phys_mem(struct tee_context *ctx, + size_t page_count, size_t align) +{ +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_CMA) + struct tee_device *teedev = ctx->teedev; + struct cma *cma = dev_get_cma_area(&teedev->dev); + struct tee_shm *shm; + struct page *page; + + if (!tee_device_get(teedev)) + return ERR_PTR(-EINVAL); + + page = cma_alloc(cma, page_count, align, true/*no_warn*/); + if (!page) + goto err_put_teedev; + + shm = kzalloc(sizeof(*shm), GFP_KERNEL); + if (!shm) + goto err_cma_crelease; + + refcount_set(&shm->refcount, 1); + shm->ctx = ctx; + shm->paddr = page_to_phys(page); + shm->size = page_count * PAGE_SIZE; + shm->flags = TEE_SHM_CMA_BUF; + + teedev_ctx_get(ctx); + + return shm; + +err_cma_crelease: + cma_release(cma, page, page_count); +err_put_teedev: + tee_device_put(teedev); + + return ERR_PTR(-ENOMEM); +#else + return ERR_PTR(-EINVAL); +#endif +} +EXPORT_SYMBOL_GPL(tee_shm_alloc_cma_phys_mem); + int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align, int (*shm_register)(struct tee_context *ctx, struct tee_shm *shm, diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index 6bd833b6d0e1..b6727d9a3556 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -29,6 +29,7 @@ #define TEE_SHM_POOL BIT(2) /* Memory allocated from pool */ #define TEE_SHM_PRIV BIT(3) /* Memory private to TEE driver */ #define TEE_SHM_DMA_BUF BIT(4) /* Memory with dma-buf handle */ +#define TEE_SHM_CMA_BUF BIT(5) /* CMA allocated memory */
#define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 @@ -307,6 +308,9 @@ void *tee_get_drvdata(struct tee_device *teedev); */ struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size);
+struct tee_shm *tee_shm_alloc_cma_phys_mem(struct tee_context *ctx, + size_t page_count, size_t align); + int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align, int (*shm_register)(struct tee_context *ctx, struct tee_shm *shm,
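A note on units for the API above: tee_shm_alloc_cma_phys_mem() takes a page count plus an alignment, not a byte size, so a caller would round its buffer size up to whole pages first. A standalone sketch of that conversion (4 KiB page size is an assumption for the test; the kernel would use PAGE_SIZE):

```c
#include <assert.h>
#include <stddef.h>

#define SKETCH_PAGE_SIZE 4096UL	/* stand-in for the kernel's PAGE_SIZE */

/* Round a byte size up to the whole number of pages to request. */
size_t bytes_to_page_count(size_t bytes)
{
	return (bytes + SKETCH_PAGE_SIZE - 1) / SKETCH_PAGE_SIZE;
}
```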
Hi Jens,
On Wed, Mar 05, 2025 at 02:04:13PM +0100, Jens Wiklander wrote:
Add tee_shm_alloc_cma_phys_mem() to allocate physically contiguous memory from the default CMA pool. The memory is represented by a tee_shm object using the new flag TEE_SHM_CMA_BUF to identify it as physical memory from CMA.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/tee_shm.c | 55 ++++++++++++++++++++++++++++++++++++++-- include/linux/tee_core.h | 4 +++ 2 files changed, 57 insertions(+), 2 deletions(-)
diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c index 8b79918468b5..8d8341f8ebd7 100644 --- a/drivers/tee/tee_shm.c +++ b/drivers/tee/tee_shm.c @@ -3,8 +3,11 @@
- Copyright (c) 2015-2017, 2019-2021 Linaro Limited
*/ #include <linux/anon_inodes.h> +#include <linux/cma.h> #include <linux/device.h> #include <linux/dma-buf.h> +#include <linux/dma-map-ops.h> +#include <linux/highmem.h> #include <linux/idr.h> #include <linux/io.h> #include <linux/mm.h> @@ -13,7 +16,6 @@ #include <linux/tee_core.h> #include <linux/uaccess.h> #include <linux/uio.h> -#include <linux/highmem.h> #include "tee_private.h" /* extra references appended to shm object for registered shared memory */ @@ -59,7 +61,14 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) struct tee_shm *parent_shm = NULL; void *p = shm;
- if (shm->flags & TEE_SHM_DMA_BUF) {
- if (shm->flags & TEE_SHM_CMA_BUF) {
+#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_CMA)
Can we rather manage this dependency via Kconfig?
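One possible Kconfig shape for that (a sketch — the option name TEE_CMA_POOL is made up here; it mirrors the inline `!IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_CMA)` condition):

```
config TEE_CMA_POOL
	bool
	default y
	depends on TEE=y && CMA
	help
	  Back TEE restricted-memory pools with the default CMA area.
	  Requires TEE to be built-in, since the CMA allocator symbols
	  are not available to modules.
```

The inline preprocessor checks would then collapse to a single `#ifdef CONFIG_TEE_CMA_POOL`.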
struct page *page = phys_to_page(shm->paddr);
struct cma *cma = dev_get_cma_area(&shm->ctx->teedev->dev);
cma_release(cma, page, shm->size / PAGE_SIZE);
+#endif
- } else if (shm->flags & TEE_SHM_DMA_BUF) { struct tee_shm_dmabuf_ref *ref;
ref = container_of(shm, struct tee_shm_dmabuf_ref, shm); @@ -341,6 +350,48 @@ struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size) } EXPORT_SYMBOL_GPL(tee_shm_alloc_priv_buf); +struct tee_shm *tee_shm_alloc_cma_phys_mem(struct tee_context *ctx,
size_t page_count, size_t align)
+{ +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_CMA)
Ditto here.
-Sumit
- struct tee_device *teedev = ctx->teedev;
- struct cma *cma = dev_get_cma_area(&teedev->dev);
- struct tee_shm *shm;
- struct page *page;
- if (!tee_device_get(teedev))
return ERR_PTR(-EINVAL);
- page = cma_alloc(cma, page_count, align, true/*no_warn*/);
- if (!page)
goto err_put_teedev;
- shm = kzalloc(sizeof(*shm), GFP_KERNEL);
- if (!shm)
goto err_cma_crelease;
- refcount_set(&shm->refcount, 1);
- shm->ctx = ctx;
- shm->paddr = page_to_phys(page);
- shm->size = page_count * PAGE_SIZE;
- shm->flags = TEE_SHM_CMA_BUF;
- teedev_ctx_get(ctx);
- return shm;
+err_cma_crelease:
- cma_release(cma, page, page_count);
+err_put_teedev:
- tee_device_put(teedev);
- return ERR_PTR(-ENOMEM);
+#else
- return ERR_PTR(-EINVAL);
+#endif +} +EXPORT_SYMBOL_GPL(tee_shm_alloc_cma_phys_mem);
int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align, int (*shm_register)(struct tee_context *ctx, struct tee_shm *shm, diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index 6bd833b6d0e1..b6727d9a3556 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -29,6 +29,7 @@ #define TEE_SHM_POOL BIT(2) /* Memory allocated from pool */ #define TEE_SHM_PRIV BIT(3) /* Memory private to TEE driver */ #define TEE_SHM_DMA_BUF BIT(4) /* Memory with dma-buf handle */ +#define TEE_SHM_CMA_BUF BIT(5) /* CMA allocated memory */ #define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 @@ -307,6 +308,9 @@ void *tee_get_drvdata(struct tee_device *teedev); */ struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size); +struct tee_shm *tee_shm_alloc_cma_phys_mem(struct tee_context *ctx,
size_t page_count, size_t align);
int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align, int (*shm_register)(struct tee_context *ctx, struct tee_shm *shm, -- 2.43.0
Add support in the OP-TEE backend driver for restricted memory allocation. The support is limited to the SMC ABI and to secure video buffers.
OP-TEE is probed for the range of restricted physical memory and a memory pool allocator is initialized if OP-TEE has support for such memory.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org --- drivers/tee/optee/core.c | 1 + drivers/tee/optee/smc_abi.c | 44 +++++++++++++++++++++++++++++++++++-- 2 files changed, 43 insertions(+), 2 deletions(-)
diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c index c75fddc83576..c7fd8040480e 100644 --- a/drivers/tee/optee/core.c +++ b/drivers/tee/optee/core.c @@ -181,6 +181,7 @@ void optee_remove_common(struct optee *optee) tee_device_unregister(optee->supp_teedev); tee_device_unregister(optee->teedev);
+ tee_device_unregister_all_dma_heaps(optee->teedev); tee_shm_pool_free(optee->pool); optee_supp_uninit(&optee->supp); mutex_destroy(&optee->call_queue.mutex); diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index cfdae266548b..a14ff0b7d3b3 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -1620,6 +1620,41 @@ static inline int optee_load_fw(struct platform_device *pdev, } #endif
+static int optee_sdp_pool_init(struct optee *optee) +{ + enum tee_dma_heap_id heap_id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY; + struct tee_rstmem_pool *pool; + int rc; + + if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP) { + union { + struct arm_smccc_res smccc; + struct optee_smc_get_sdp_config_result result; + } res; + + optee->smc.invoke_fn(OPTEE_SMC_GET_SDP_CONFIG, 0, 0, 0, 0, 0, 0, + 0, &res.smccc); + if (res.result.status != OPTEE_SMC_RETURN_OK) { + pr_err("Secure Data Path service not available\n"); + return 0; + } + + pool = tee_rstmem_static_pool_alloc(res.result.start, + res.result.size); + if (IS_ERR(pool)) + return PTR_ERR(pool); + + rc = tee_device_register_dma_heap(optee->teedev, heap_id, pool); + if (rc) + goto err; + } + + return 0; +err: + pool->ops->destroy_pool(pool); + return rc; +} + static int optee_probe(struct platform_device *pdev) { optee_invoke_fn *invoke_fn; @@ -1715,7 +1750,7 @@ static int optee_probe(struct platform_device *pdev) optee = kzalloc(sizeof(*optee), GFP_KERNEL); if (!optee) { rc = -ENOMEM; - goto err_free_pool; + goto err_free_shm_pool; }
optee->ops = &optee_ops; @@ -1788,6 +1823,10 @@ static int optee_probe(struct platform_device *pdev) pr_info("Asynchronous notifications enabled\n"); }
+ rc = optee_sdp_pool_init(optee); + if (rc) + goto err_notif_uninit; + /* * Ensure that there are no pre-existing shm objects before enabling * the shm cache so that there's no chance of receiving an invalid @@ -1823,6 +1862,7 @@ static int optee_probe(struct platform_device *pdev) optee_disable_shm_cache(optee); optee_smc_notif_uninit_irq(optee); optee_unregister_devices(); + tee_device_unregister_all_dma_heaps(optee->teedev); err_notif_uninit: optee_notif_uninit(optee); err_close_ctx: @@ -1839,7 +1879,7 @@ static int optee_probe(struct platform_device *pdev) tee_device_unregister(optee->teedev); err_free_optee: kfree(optee); -err_free_pool: +err_free_shm_pool: tee_shm_pool_free(pool); if (memremaped_shm) memunmap(memremaped_shm);
On Wed, Mar 05, 2025 at 02:04:14PM +0100, Jens Wiklander wrote:
Add support in the OP-TEE backend driver for restricted memory allocation. The support is limited to the SMC ABI and to secure video buffers.
OP-TEE is probed for the range of restricted physical memory and a memory pool allocator is initialized if OP-TEE has support for such memory.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/optee/core.c | 1 + drivers/tee/optee/smc_abi.c | 44 +++++++++++++++++++++++++++++++++++-- 2 files changed, 43 insertions(+), 2 deletions(-)
diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c index c75fddc83576..c7fd8040480e 100644 --- a/drivers/tee/optee/core.c +++ b/drivers/tee/optee/core.c @@ -181,6 +181,7 @@ void optee_remove_common(struct optee *optee) tee_device_unregister(optee->supp_teedev); tee_device_unregister(optee->teedev);
- tee_device_unregister_all_dma_heaps(optee->teedev); tee_shm_pool_free(optee->pool); optee_supp_uninit(&optee->supp); mutex_destroy(&optee->call_queue.mutex);
diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index cfdae266548b..a14ff0b7d3b3 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -1620,6 +1620,41 @@ static inline int optee_load_fw(struct platform_device *pdev, } #endif +static int optee_sdp_pool_init(struct optee *optee) +{
+	enum tee_dma_heap_id heap_id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY;
+	struct tee_rstmem_pool *pool;
+	int rc;
+
+	if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP) {

Is this SDP capability an ABI yet since we haven't supported it in upstream kernel? If no then can we rename it as OPTEE_SMC_SEC_CAP_RSTMEM?

+		union {
+			struct arm_smccc_res smccc;
+			struct optee_smc_get_sdp_config_result result;
+		} res;
+
+		optee->smc.invoke_fn(OPTEE_SMC_GET_SDP_CONFIG, 0, 0, 0, 0, 0, 0,
+				     0, &res.smccc);
+		if (res.result.status != OPTEE_SMC_RETURN_OK) {
+			pr_err("Secure Data Path service not available\n");
+			return 0;
+		}
+
+		pool = tee_rstmem_static_pool_alloc(res.result.start,
+						    res.result.size);
+		if (IS_ERR(pool))
+			return PTR_ERR(pool);
+
+		rc = tee_device_register_dma_heap(optee->teedev, heap_id, pool);
+		if (rc)
+			goto err;
+	}
+
+	return 0;
+err:
+	pool->ops->destroy_pool(pool);
+	return rc;
+}
static int optee_probe(struct platform_device *pdev) { optee_invoke_fn *invoke_fn; @@ -1715,7 +1750,7 @@ static int optee_probe(struct platform_device *pdev) optee = kzalloc(sizeof(*optee), GFP_KERNEL); if (!optee) { rc = -ENOMEM;
-		goto err_free_pool;
+		goto err_free_shm_pool;
	}
optee->ops = &optee_ops; @@ -1788,6 +1823,10 @@ static int optee_probe(struct platform_device *pdev) pr_info("Asynchronous notifications enabled\n"); }
- rc = optee_sdp_pool_init(optee);
s/optee_sdp_pool_init/optee_rstmem_pool_init/
-Sumit
- if (rc)
goto err_notif_uninit;
	/*
	 * Ensure that there are no pre-existing shm objects before enabling
	 * the shm cache so that there's no chance of receiving an invalid
@@ -1823,6 +1862,7 @@ static int optee_probe(struct platform_device *pdev) optee_disable_shm_cache(optee); optee_smc_notif_uninit_irq(optee); optee_unregister_devices();
- tee_device_unregister_all_dma_heaps(optee->teedev);
err_notif_uninit: optee_notif_uninit(optee); err_close_ctx: @@ -1839,7 +1879,7 @@ static int optee_probe(struct platform_device *pdev) tee_device_unregister(optee->teedev); err_free_optee: kfree(optee); -err_free_pool: +err_free_shm_pool: tee_shm_pool_free(pool); if (memremaped_shm) memunmap(memremaped_shm); -- 2.43.0
On Tue, Mar 25, 2025 at 8:07 AM Sumit Garg sumit.garg@kernel.org wrote:
On Wed, Mar 05, 2025 at 02:04:14PM +0100, Jens Wiklander wrote:
Add support in the OP-TEE backend driver for restricted memory allocation. The support is limited to the SMC ABI and to secure video buffers.
OP-TEE is probed for the range of restricted physical memory and a memory pool allocator is initialized if OP-TEE has support for such memory.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/optee/core.c | 1 + drivers/tee/optee/smc_abi.c | 44 +++++++++++++++++++++++++++++++++++-- 2 files changed, 43 insertions(+), 2 deletions(-)
diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c index c75fddc83576..c7fd8040480e 100644 --- a/drivers/tee/optee/core.c +++ b/drivers/tee/optee/core.c @@ -181,6 +181,7 @@ void optee_remove_common(struct optee *optee) tee_device_unregister(optee->supp_teedev); tee_device_unregister(optee->teedev);
tee_device_unregister_all_dma_heaps(optee->teedev); tee_shm_pool_free(optee->pool); optee_supp_uninit(&optee->supp); mutex_destroy(&optee->call_queue.mutex);
diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index cfdae266548b..a14ff0b7d3b3 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -1620,6 +1620,41 @@ static inline int optee_load_fw(struct platform_device *pdev, } #endif
+static int optee_sdp_pool_init(struct optee *optee)
+{
+	enum tee_dma_heap_id heap_id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY;
+	struct tee_rstmem_pool *pool;
+	int rc;
+
+	if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP) {

Is this SDP capability an ABI yet since we haven't supported it in upstream kernel? If no then can we rename it as OPTEE_SMC_SEC_CAP_RSTMEM?

No problem. We can rename it.

+		union {
+			struct arm_smccc_res smccc;
+			struct optee_smc_get_sdp_config_result result;
+		} res;
+
+		optee->smc.invoke_fn(OPTEE_SMC_GET_SDP_CONFIG, 0, 0, 0, 0, 0, 0,
+				     0, &res.smccc);
+		if (res.result.status != OPTEE_SMC_RETURN_OK) {
+			pr_err("Secure Data Path service not available\n");
+			return 0;
+		}
+
+		pool = tee_rstmem_static_pool_alloc(res.result.start,
+						    res.result.size);
+		if (IS_ERR(pool))
+			return PTR_ERR(pool);
+
+		rc = tee_device_register_dma_heap(optee->teedev, heap_id, pool);
+		if (rc)
+			goto err;
+	}
+
+	return 0;
+err:
+	pool->ops->destroy_pool(pool);
+	return rc;
+}
static int optee_probe(struct platform_device *pdev) { optee_invoke_fn *invoke_fn; @@ -1715,7 +1750,7 @@ static int optee_probe(struct platform_device *pdev) optee = kzalloc(sizeof(*optee), GFP_KERNEL); if (!optee) { rc = -ENOMEM;
-		goto err_free_pool;
+		goto err_free_shm_pool;
	}

	optee->ops = &optee_ops;
@@ -1788,6 +1823,10 @@ static int optee_probe(struct platform_device *pdev) pr_info("Asynchronous notifications enabled\n"); }
rc = optee_sdp_pool_init(optee);
s/optee_sdp_pool_init/optee_rstmem_pool_init/
OK
Cheers, Jens
-Sumit
if (rc)
goto err_notif_uninit;
	/*
	 * Ensure that there are no pre-existing shm objects before enabling
	 * the shm cache so that there's no chance of receiving an invalid
@@ -1823,6 +1862,7 @@ static int optee_probe(struct platform_device *pdev) optee_disable_shm_cache(optee); optee_smc_notif_uninit_irq(optee); optee_unregister_devices();
tee_device_unregister_all_dma_heaps(optee->teedev);
err_notif_uninit: optee_notif_uninit(optee); err_close_ctx: @@ -1839,7 +1879,7 @@ static int optee_probe(struct platform_device *pdev) tee_device_unregister(optee->teedev); err_free_optee: kfree(optee); -err_free_pool: +err_free_shm_pool: tee_shm_pool_free(pool); if (memremaped_shm) memunmap(memremaped_shm); -- 2.43.0
Add support in the OP-TEE backend driver for dynamic restricted memory allocation with FF-A.
The restricted memory pools for dynamically allocated restricted memory are instantiated when requested by user space. This instantiation can fail if OP-TEE doesn't support the requested restricted-memory use-case.
Restricted memory pools based on a static carveout or dynamic allocation can coexist for different use-cases. We use only dynamic allocation with FF-A.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org --- drivers/tee/optee/Makefile | 1 + drivers/tee/optee/ffa_abi.c | 143 ++++++++++++- drivers/tee/optee/optee_private.h | 13 +- drivers/tee/optee/rstmem.c | 329 ++++++++++++++++++++++++++++++ 4 files changed, 483 insertions(+), 3 deletions(-) create mode 100644 drivers/tee/optee/rstmem.c
diff --git a/drivers/tee/optee/Makefile b/drivers/tee/optee/Makefile index a6eff388d300..498969fb8e40 100644 --- a/drivers/tee/optee/Makefile +++ b/drivers/tee/optee/Makefile @@ -4,6 +4,7 @@ optee-objs += core.o optee-objs += call.o optee-objs += notif.o optee-objs += rpc.o +optee-objs += rstmem.o optee-objs += supp.o optee-objs += device.o optee-objs += smc_abi.o diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index e4b08cd195f3..6a55114232ef 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -672,6 +672,123 @@ static int optee_ffa_do_call_with_arg(struct tee_context *ctx, return optee_ffa_yielding_call(ctx, &data, rpc_arg, system_thread); }
+static int do_call_lend_rstmem(struct optee *optee, u64 cookie, u32 use_case) +{ + struct optee_shm_arg_entry *entry; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + msg_arg = optee_get_msg_arg(optee->ctx, 1, &entry, &shm, &offs); + if (IS_ERR(msg_arg)) + return PTR_ERR(msg_arg); + + msg_arg->cmd = OPTEE_MSG_CMD_ASSIGN_RSTMEM; + msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + msg_arg->params[0].u.value.a = cookie; + msg_arg->params[0].u.value.b = use_case; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out; + if (msg_arg->ret != TEEC_SUCCESS) { + rc = -EINVAL; + goto out; + } + +out: + optee_free_msg_arg(optee->ctx, entry, offs); + return rc; +} + +static int optee_ffa_lend_rstmem(struct optee *optee, struct tee_shm *rstmem, + u16 *end_points, unsigned int ep_count, + u32 use_case) +{ + struct ffa_device *ffa_dev = optee->ffa.ffa_dev; + const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops; + const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops; + struct ffa_send_direct_data data; + struct ffa_mem_region_attributes *mem_attr; + struct ffa_mem_ops_args args = { + .use_txbuf = true, + .tag = use_case, + }; + struct page *page; + struct scatterlist sgl; + unsigned int n; + int rc; + + mem_attr = kcalloc(ep_count, sizeof(*mem_attr), GFP_KERNEL); + for (n = 0; n < ep_count; n++) { + mem_attr[n].receiver = end_points[n]; + mem_attr[n].attrs = FFA_MEM_RW; + } + args.attrs = mem_attr; + args.nattrs = ep_count; + + page = phys_to_page(rstmem->paddr); + sg_init_table(&sgl, 1); + sg_set_page(&sgl, page, rstmem->size, 0); + + args.sg = &sgl; + rc = mem_ops->memory_lend(&args); + kfree(mem_attr); + if (rc) + return rc; + + rc = do_call_lend_rstmem(optee, args.g_handle, use_case); + if (rc) + goto err_reclaim; + + rc = optee_shm_add_ffa_handle(optee, rstmem, args.g_handle); + if (rc) + goto err_unreg; + + rstmem->sec_world_id = args.g_handle; + + return 0; + +err_unreg: + data = 
(struct ffa_send_direct_data){ + .data0 = OPTEE_FFA_RELEASE_RSTMEM, + .data1 = (u32)args.g_handle, + .data2 = (u32)(args.g_handle >> 32), + }; + msg_ops->sync_send_receive(ffa_dev, &data); +err_reclaim: + mem_ops->memory_reclaim(args.g_handle, 0); + return rc; +} + +static int optee_ffa_reclaim_rstmem(struct optee *optee, struct tee_shm *rstmem) +{ + struct ffa_device *ffa_dev = optee->ffa.ffa_dev; + const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops; + const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops; + u64 global_handle = rstmem->sec_world_id; + struct ffa_send_direct_data data = { + .data0 = OPTEE_FFA_RELEASE_RSTMEM, + .data1 = (u32)global_handle, + .data2 = (u32)(global_handle >> 32) + }; + int rc; + + optee_shm_rem_ffa_handle(optee, global_handle); + rstmem->sec_world_id = 0; + + rc = msg_ops->sync_send_receive(ffa_dev, &data); + if (rc) + pr_err("Release SHM id 0x%llx rc %d\n", global_handle, rc); + + rc = mem_ops->memory_reclaim(global_handle, 0); + if (rc) + pr_err("mem_reclaim: 0x%llx %d", global_handle, rc); + + return rc; +} + /* * 6. Driver initialization * @@ -833,6 +950,8 @@ static const struct optee_ops optee_ffa_ops = { .do_call_with_arg = optee_ffa_do_call_with_arg, .to_msg_param = optee_ffa_to_msg_param, .from_msg_param = optee_ffa_from_msg_param, + .lend_rstmem = optee_ffa_lend_rstmem, + .reclaim_rstmem = optee_ffa_reclaim_rstmem, };
static void optee_ffa_remove(struct ffa_device *ffa_dev) @@ -941,7 +1060,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) optee->pool, optee); if (IS_ERR(teedev)) { rc = PTR_ERR(teedev); - goto err_free_pool; + goto err_free_shm_pool; } optee->teedev = teedev;
@@ -988,6 +1107,24 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) rc); }
+ if (IS_ENABLED(CONFIG_CMA) && !IS_MODULE(CONFIG_OPTEE) && + (sec_caps & OPTEE_FFA_SEC_CAP_RSTMEM)) { + enum tee_dma_heap_id id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY; + struct tee_rstmem_pool *pool; + + pool = optee_rstmem_alloc_cma_pool(optee, id); + if (IS_ERR(pool)) { + rc = PTR_ERR(pool); + goto err_notif_uninit; + } + + rc = tee_device_register_dma_heap(optee->teedev, id, pool); + if (rc) { + pool->ops->destroy_pool(pool); + goto err_notif_uninit; + } + } + rc = optee_enumerate_devices(PTA_CMD_GET_DEVICES); if (rc) goto err_unregister_devices; @@ -1001,6 +1138,8 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
err_unregister_devices: optee_unregister_devices(); + tee_device_unregister_all_dma_heaps(optee->teedev); +err_notif_uninit: if (optee->ffa.bottom_half_value != U32_MAX) notif_ops->notify_relinquish(ffa_dev, optee->ffa.bottom_half_value); @@ -1018,7 +1157,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) tee_device_unregister(optee->supp_teedev); err_unreg_teedev: tee_device_unregister(optee->teedev); -err_free_pool: +err_free_shm_pool: tee_shm_pool_free(pool); err_free_optee: kfree(optee); diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index 20eda508dbac..faab31ad7c52 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -174,9 +174,14 @@ struct optee; * @do_call_with_arg: enters OP-TEE in secure world * @to_msg_param: converts from struct tee_param to OPTEE_MSG parameters * @from_msg_param: converts from OPTEE_MSG parameters to struct tee_param + * @lend_rstmem: lends physically contiguous memory as restricted + * memory, inaccessible by the kernel + * @reclaim_rstmem: reclaims restricted memory previously lent with + * @lend_rstmem() and makes it accessible by the + * kernel again * * These OPs are only supposed to be used internally in the OP-TEE driver - * as a way of abstracting the different methogs of entering OP-TEE in + * as a way of abstracting the different methods of entering OP-TEE in * secure world. */ struct optee_ops { @@ -191,6 +196,10 @@ struct optee_ops { size_t num_params, const struct optee_msg_param *msg_params, bool update_out); + int (*lend_rstmem)(struct optee *optee, struct tee_shm *rstmem, + u16 *end_points, unsigned int ep_count, + u32 use_case); + int (*reclaim_rstmem)(struct optee *optee, struct tee_shm *rstmem); };
/** @@ -285,6 +294,8 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params, void optee_supp_init(struct optee_supp *supp); void optee_supp_uninit(struct optee_supp *supp); void optee_supp_release(struct optee_supp *supp); +struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee, + enum tee_dma_heap_id id);
int optee_supp_recv(struct tee_context *ctx, u32 *func, u32 *num_params, struct tee_param *param); diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c new file mode 100644 index 000000000000..ea27769934d4 --- /dev/null +++ b/drivers/tee/optee/rstmem.c @@ -0,0 +1,329 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025, Linaro Limited + */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include <linux/errno.h> +#include <linux/genalloc.h> +#include <linux/slab.h> +#include <linux/string.h> +#include <linux/tee_core.h> +#include <linux/types.h> +#include "optee_private.h" + +struct optee_rstmem_cma_pool { + struct tee_rstmem_pool pool; + struct gen_pool *gen_pool; + struct optee *optee; + size_t page_count; + u16 *end_points; + u_int end_point_count; + u_int align; + refcount_t refcount; + u32 use_case; + struct tee_shm *rstmem; + /* Protects when initializing and tearing down this struct */ + struct mutex mutex; +}; + +static struct optee_rstmem_cma_pool * +to_rstmem_cma_pool(struct tee_rstmem_pool *pool) +{ + return container_of(pool, struct optee_rstmem_cma_pool, pool); +} + +static int init_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + int rc; + + rp->rstmem = tee_shm_alloc_cma_phys_mem(rp->optee->ctx, rp->page_count, + rp->align); + if (IS_ERR(rp->rstmem)) { + rc = PTR_ERR(rp->rstmem); + goto err_null_rstmem; + } + + /* + * TODO unmap the memory range since the physical memory will + * become inaccesible after the lend_rstmem() call. 
+ */ + rc = rp->optee->ops->lend_rstmem(rp->optee, rp->rstmem, rp->end_points, + rp->end_point_count, rp->use_case); + if (rc) + goto err_put_shm; + rp->rstmem->flags |= TEE_SHM_DYNAMIC; + + rp->gen_pool = gen_pool_create(PAGE_SHIFT, -1); + if (!rp->gen_pool) { + rc = -ENOMEM; + goto err_reclaim; + } + + rc = gen_pool_add(rp->gen_pool, rp->rstmem->paddr, + rp->rstmem->size, -1); + if (rc) + goto err_free_pool; + + refcount_set(&rp->refcount, 1); + return 0; + +err_free_pool: + gen_pool_destroy(rp->gen_pool); + rp->gen_pool = NULL; +err_reclaim: + rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem); +err_put_shm: + tee_shm_put(rp->rstmem); +err_null_rstmem: + rp->rstmem = NULL; + return rc; +} + +static int get_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + int rc = 0; + + if (!refcount_inc_not_zero(&rp->refcount)) { + mutex_lock(&rp->mutex); + if (rp->gen_pool) { + /* + * Another thread has already initialized the pool + * before us, or the pool was just about to be torn + * down. Either way we only need to increase the + * refcount and we're done. 
+ */ + refcount_inc(&rp->refcount); + } else { + rc = init_cma_rstmem(rp); + } + mutex_unlock(&rp->mutex); + } + + return rc; +} + +static void release_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + gen_pool_destroy(rp->gen_pool); + rp->gen_pool = NULL; + + rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem); + rp->rstmem->flags &= ~TEE_SHM_DYNAMIC; + + WARN(refcount_read(&rp->rstmem->refcount) != 1, "Unexpected refcount"); + tee_shm_put(rp->rstmem); + rp->rstmem = NULL; +} + +static void put_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + if (refcount_dec_and_test(&rp->refcount)) { + mutex_lock(&rp->mutex); + if (rp->gen_pool) + release_cma_rstmem(rp); + mutex_unlock(&rp->mutex); + } +} + +static int rstmem_pool_op_cma_alloc(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t size, + size_t *offs) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + size_t sz = ALIGN(size, PAGE_SIZE); + phys_addr_t pa; + int rc; + + rc = get_cma_rstmem(rp); + if (rc) + return rc; + + pa = gen_pool_alloc(rp->gen_pool, sz); + if (!pa) { + rc = -ENOMEM; + goto err_put; + } + + rc = sg_alloc_table(sgt, 1, GFP_KERNEL); + if (rc) + goto err_free; + + sg_set_page(sgt->sgl, phys_to_page(pa), size, 0); + *offs = pa - rp->rstmem->paddr; + + return 0; +err_free: + gen_pool_free(rp->gen_pool, pa, size); +err_put: + put_cma_rstmem(rp); + + return rc; +} + +static void rstmem_pool_op_cma_free(struct tee_rstmem_pool *pool, + struct sg_table *sgt) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + struct scatterlist *sg; + int i; + + for_each_sgtable_sg(sgt, sg, i) + gen_pool_free(rp->gen_pool, sg_phys(sg), sg->length); + sg_free_table(sgt); + put_cma_rstmem(rp); +} + +static int rstmem_pool_op_cma_update_shm(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t offs, + struct tee_shm *shm, + struct tee_shm **parent_shm) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + + *parent_shm = rp->rstmem; + + return 0; +} + +static 
void pool_op_cma_destroy_pool(struct tee_rstmem_pool *pool) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + + mutex_destroy(&rp->mutex); + kfree(rp); +} + +static struct tee_rstmem_pool_ops rstmem_pool_ops_cma = { + .alloc = rstmem_pool_op_cma_alloc, + .free = rstmem_pool_op_cma_free, + .update_shm = rstmem_pool_op_cma_update_shm, + .destroy_pool = pool_op_cma_destroy_pool, +}; + +static int get_rstmem_config(struct optee *optee, u32 use_case, + size_t *min_size, u_int *min_align, + u16 *end_points, u_int *ep_count) +{ + struct tee_param params[2] = { + [0] = { + .attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT, + .u.value.a = use_case, + }, + [1] = { + .attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT, + }, + }; + struct optee_shm_arg_entry *entry; + struct tee_shm *shm_param = NULL; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + if (end_points && *ep_count) { + params[1].u.memref.size = *ep_count * sizeof(*end_points); + shm_param = tee_shm_alloc_priv_buf(optee->ctx, + params[1].u.memref.size); + if (IS_ERR(shm_param)) + return PTR_ERR(shm_param); + params[1].u.memref.shm = shm_param; + } + + msg_arg = optee_get_msg_arg(optee->ctx, ARRAY_SIZE(params), &entry, + &shm, &offs); + if (IS_ERR(msg_arg)) { + rc = PTR_ERR(msg_arg); + goto out_free_shm; + } + msg_arg->cmd = OPTEE_MSG_CMD_GET_RSTMEM_CONFIG; + + rc = optee->ops->to_msg_param(optee, msg_arg->params, + ARRAY_SIZE(params), params, + false /*!update_out*/); + if (rc) + goto out_free_msg; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out_free_msg; + if (msg_arg->ret && msg_arg->ret != TEEC_ERROR_SHORT_BUFFER) { + rc = -EINVAL; + goto out_free_msg; + } + + rc = optee->ops->from_msg_param(optee, params, ARRAY_SIZE(params), + msg_arg->params, true /*update_out*/); + if (rc) + goto out_free_msg; + + if (!msg_arg->ret && end_points && + *ep_count < params[1].u.memref.size / sizeof(u16)) { + rc = -EINVAL; + goto out_free_msg; + 
} + + *min_size = params[0].u.value.a; + *min_align = params[0].u.value.b; + *ep_count = params[1].u.memref.size / sizeof(u16); + + if (msg_arg->ret == TEEC_ERROR_SHORT_BUFFER) { + rc = -ENOSPC; + goto out_free_msg; + } + + if (end_points) + memcpy(end_points, tee_shm_get_va(shm_param, 0), + params[1].u.memref.size); + +out_free_msg: + optee_free_msg_arg(optee->ctx, entry, offs); +out_free_shm: + if (shm_param) + tee_shm_free(shm_param); + return rc; +} + +struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee, + enum tee_dma_heap_id id) +{ + struct optee_rstmem_cma_pool *rp; + u32 use_case = id; + size_t min_size; + int rc; + + rp = kzalloc(sizeof(*rp), GFP_KERNEL); + if (!rp) + return ERR_PTR(-ENOMEM); + rp->use_case = use_case; + + rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, NULL, + &rp->end_point_count); + if (rc) { + if (rc != -ENOSPC) + goto err; + rp->end_points = kcalloc(rp->end_point_count, + sizeof(*rp->end_points), GFP_KERNEL); + if (!rp->end_points) { + rc = -ENOMEM; + goto err; + } + rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, + rp->end_points, &rp->end_point_count); + if (rc) + goto err_kfree_eps; + } + + rp->pool.ops = &rstmem_pool_ops_cma; + rp->optee = optee; + rp->page_count = min_size / PAGE_SIZE; + mutex_init(&rp->mutex); + + return &rp->pool; + +err_kfree_eps: + kfree(rp->end_points); +err: + kfree(rp); + return ERR_PTR(rc); +}
On Wed, Mar 05, 2025 at 02:04:15PM +0100, Jens Wiklander wrote:
Add support in the OP-TEE backend driver for dynamic restricted memory allocation with FF-A.
The restricted memory pools for dynamically allocated restricted memory are instantiated when requested by user space. This instantiation can fail if OP-TEE doesn't support the requested restricted-memory use-case.
Restricted memory pools based on a static carveout or dynamic allocation can coexist for different use-cases. We use only dynamic allocation with FF-A.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/optee/Makefile | 1 + drivers/tee/optee/ffa_abi.c | 143 ++++++++++++- drivers/tee/optee/optee_private.h | 13 +- drivers/tee/optee/rstmem.c | 329 ++++++++++++++++++++++++++++++ 4 files changed, 483 insertions(+), 3 deletions(-) create mode 100644 drivers/tee/optee/rstmem.c
diff --git a/drivers/tee/optee/Makefile b/drivers/tee/optee/Makefile index a6eff388d300..498969fb8e40 100644 --- a/drivers/tee/optee/Makefile +++ b/drivers/tee/optee/Makefile @@ -4,6 +4,7 @@ optee-objs += core.o optee-objs += call.o optee-objs += notif.o optee-objs += rpc.o +optee-objs += rstmem.o optee-objs += supp.o optee-objs += device.o optee-objs += smc_abi.o diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index e4b08cd195f3..6a55114232ef 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -672,6 +672,123 @@ static int optee_ffa_do_call_with_arg(struct tee_context *ctx, return optee_ffa_yielding_call(ctx, &data, rpc_arg, system_thread); } +static int do_call_lend_rstmem(struct optee *optee, u64 cookie, u32 use_case) +{
- struct optee_shm_arg_entry *entry;
- struct optee_msg_arg *msg_arg;
- struct tee_shm *shm;
- u_int offs;
- int rc;
- msg_arg = optee_get_msg_arg(optee->ctx, 1, &entry, &shm, &offs);
- if (IS_ERR(msg_arg))
return PTR_ERR(msg_arg);
- msg_arg->cmd = OPTEE_MSG_CMD_ASSIGN_RSTMEM;
- msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
- msg_arg->params[0].u.value.a = cookie;
- msg_arg->params[0].u.value.b = use_case;
- rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false);
- if (rc)
goto out;
- if (msg_arg->ret != TEEC_SUCCESS) {
rc = -EINVAL;
goto out;
- }
+out:
- optee_free_msg_arg(optee->ctx, entry, offs);
- return rc;
+}
+static int optee_ffa_lend_rstmem(struct optee *optee, struct tee_shm *rstmem,
u16 *end_points, unsigned int ep_count,
u32 use_case)
+{
- struct ffa_device *ffa_dev = optee->ffa.ffa_dev;
- const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops;
- const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops;
- struct ffa_send_direct_data data;
- struct ffa_mem_region_attributes *mem_attr;
- struct ffa_mem_ops_args args = {
.use_txbuf = true,
.tag = use_case,
- };
- struct page *page;
- struct scatterlist sgl;
- unsigned int n;
- int rc;
- mem_attr = kcalloc(ep_count, sizeof(*mem_attr), GFP_KERNEL);
- for (n = 0; n < ep_count; n++) {
mem_attr[n].receiver = end_points[n];
mem_attr[n].attrs = FFA_MEM_RW;
- }
- args.attrs = mem_attr;
- args.nattrs = ep_count;
- page = phys_to_page(rstmem->paddr);
- sg_init_table(&sgl, 1);
- sg_set_page(&sgl, page, rstmem->size, 0);
- args.sg = &sgl;
- rc = mem_ops->memory_lend(&args);
- kfree(mem_attr);
- if (rc)
return rc;
- rc = do_call_lend_rstmem(optee, args.g_handle, use_case);
- if (rc)
goto err_reclaim;
- rc = optee_shm_add_ffa_handle(optee, rstmem, args.g_handle);
- if (rc)
goto err_unreg;
- rstmem->sec_world_id = args.g_handle;
- return 0;
+err_unreg:
- data = (struct ffa_send_direct_data){
.data0 = OPTEE_FFA_RELEASE_RSTMEM,
.data1 = (u32)args.g_handle,
.data2 = (u32)(args.g_handle >> 32),
- };
- msg_ops->sync_send_receive(ffa_dev, &data);
+err_reclaim:
- mem_ops->memory_reclaim(args.g_handle, 0);
- return rc;
+}
+static int optee_ffa_reclaim_rstmem(struct optee *optee, struct tee_shm *rstmem) +{
- struct ffa_device *ffa_dev = optee->ffa.ffa_dev;
- const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops;
- const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops;
- u64 global_handle = rstmem->sec_world_id;
- struct ffa_send_direct_data data = {
.data0 = OPTEE_FFA_RELEASE_RSTMEM,
.data1 = (u32)global_handle,
.data2 = (u32)(global_handle >> 32)
- };
- int rc;
- optee_shm_rem_ffa_handle(optee, global_handle);
- rstmem->sec_world_id = 0;
- rc = msg_ops->sync_send_receive(ffa_dev, &data);
- if (rc)
pr_err("Release SHM id 0x%llx rc %d\n", global_handle, rc);
- rc = mem_ops->memory_reclaim(global_handle, 0);
- if (rc)
pr_err("mem_reclaim: 0x%llx %d", global_handle, rc);
- return rc;
+}
/*
 * 6. Driver initialization
 *
@@ -833,6 +950,8 @@ static const struct optee_ops optee_ffa_ops = { .do_call_with_arg = optee_ffa_do_call_with_arg, .to_msg_param = optee_ffa_to_msg_param, .from_msg_param = optee_ffa_from_msg_param,
- .lend_rstmem = optee_ffa_lend_rstmem,
- .reclaim_rstmem = optee_ffa_reclaim_rstmem,
}; static void optee_ffa_remove(struct ffa_device *ffa_dev) @@ -941,7 +1060,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) optee->pool, optee); if (IS_ERR(teedev)) { rc = PTR_ERR(teedev);
-		goto err_free_pool;
+		goto err_free_shm_pool;
	}
	optee->teedev = teedev;
@@ -988,6 +1107,24 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) rc); }
- if (IS_ENABLED(CONFIG_CMA) && !IS_MODULE(CONFIG_OPTEE) &&
The CMA dependency should be managed via Kconfig.
(sec_caps & OPTEE_FFA_SEC_CAP_RSTMEM)) {
enum tee_dma_heap_id id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY;
struct tee_rstmem_pool *pool;
pool = optee_rstmem_alloc_cma_pool(optee, id);
if (IS_ERR(pool)) {
rc = PTR_ERR(pool);
goto err_notif_uninit;
}
rc = tee_device_register_dma_heap(optee->teedev, id, pool);
if (rc) {
pool->ops->destroy_pool(pool);
goto err_notif_uninit;
}
- }
- rc = optee_enumerate_devices(PTA_CMD_GET_DEVICES); if (rc) goto err_unregister_devices;
@@ -1001,6 +1138,8 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) err_unregister_devices: optee_unregister_devices();
- tee_device_unregister_all_dma_heaps(optee->teedev);
+err_notif_uninit: if (optee->ffa.bottom_half_value != U32_MAX) notif_ops->notify_relinquish(ffa_dev, optee->ffa.bottom_half_value); @@ -1018,7 +1157,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) tee_device_unregister(optee->supp_teedev); err_unreg_teedev: tee_device_unregister(optee->teedev); -err_free_pool: +err_free_shm_pool: tee_shm_pool_free(pool); err_free_optee: kfree(optee); diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index 20eda508dbac..faab31ad7c52 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -174,9 +174,14 @@ struct optee;
- @do_call_with_arg: enters OP-TEE in secure world
- @to_msg_param: converts from struct tee_param to OPTEE_MSG parameters
- @from_msg_param: converts from OPTEE_MSG parameters to struct tee_param
- @lend_rstmem: lends physically contiguous memory as restricted
memory, inaccessible by the kernel
- @reclaim_rstmem: reclaims restricted memory previously lent with
@lend_rstmem() and makes it accessible by the
kernel again
- These OPs are only supposed to be used internally in the OP-TEE driver
- as a way of abstracting the different methogs of entering OP-TEE in
*/
- as a way of abstracting the different methods of entering OP-TEE in
- secure world.
struct optee_ops { @@ -191,6 +196,10 @@ struct optee_ops { size_t num_params, const struct optee_msg_param *msg_params, bool update_out);
- int (*lend_rstmem)(struct optee *optee, struct tee_shm *rstmem,
u16 *end_points, unsigned int ep_count,
u32 use_case);
- int (*reclaim_rstmem)(struct optee *optee, struct tee_shm *rstmem);
}; /** @@ -285,6 +294,8 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params, void optee_supp_init(struct optee_supp *supp); void optee_supp_uninit(struct optee_supp *supp); void optee_supp_release(struct optee_supp *supp); +struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee,
enum tee_dma_heap_id id);
int optee_supp_recv(struct tee_context *ctx, u32 *func, u32 *num_params, struct tee_param *param); diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c new file mode 100644 index 000000000000..ea27769934d4 --- /dev/null +++ b/drivers/tee/optee/rstmem.c @@ -0,0 +1,329 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
- Copyright (c) 2025, Linaro Limited
- */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/errno.h> +#include <linux/genalloc.h> +#include <linux/slab.h> +#include <linux/string.h> +#include <linux/tee_core.h> +#include <linux/types.h> +#include "optee_private.h"
+struct optee_rstmem_cma_pool {
- struct tee_rstmem_pool pool;
- struct gen_pool *gen_pool;
- struct optee *optee;
- size_t page_count;
- u16 *end_points;
- u_int end_point_count;
- u_int align;
- refcount_t refcount;
- u32 use_case;
- struct tee_shm *rstmem;
- /* Protects when initializing and tearing down this struct */
- struct mutex mutex;
+};
+static struct optee_rstmem_cma_pool * +to_rstmem_cma_pool(struct tee_rstmem_pool *pool) +{
- return container_of(pool, struct optee_rstmem_cma_pool, pool);
+}
+static int init_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
- int rc;
- rp->rstmem = tee_shm_alloc_cma_phys_mem(rp->optee->ctx, rp->page_count,
rp->align);
- if (IS_ERR(rp->rstmem)) {
rc = PTR_ERR(rp->rstmem);
goto err_null_rstmem;
- }
- /*
* TODO unmap the memory range since the physical memory will
* become inaccesible after the lend_rstmem() call.
*/
What's your plan for this TODO? I think we need a CMA allocator here which can allocate un-mapped memory such that any cache speculation won't lead to CPU hangs once the memory restriction comes into picture.
- rc = rp->optee->ops->lend_rstmem(rp->optee, rp->rstmem, rp->end_points,
rp->end_point_count, rp->use_case);
- if (rc)
goto err_put_shm;
- rp->rstmem->flags |= TEE_SHM_DYNAMIC;
- rp->gen_pool = gen_pool_create(PAGE_SHIFT, -1);
- if (!rp->gen_pool) {
rc = -ENOMEM;
goto err_reclaim;
- }
- rc = gen_pool_add(rp->gen_pool, rp->rstmem->paddr,
rp->rstmem->size, -1);
- if (rc)
goto err_free_pool;
- refcount_set(&rp->refcount, 1);
- return 0;
+err_free_pool:
- gen_pool_destroy(rp->gen_pool);
- rp->gen_pool = NULL;
+err_reclaim:
- rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem);
+err_put_shm:
- tee_shm_put(rp->rstmem);
+err_null_rstmem:
- rp->rstmem = NULL;
- return rc;
+}
+static int get_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
- int rc = 0;
- if (!refcount_inc_not_zero(&rp->refcount)) {
mutex_lock(&rp->mutex);
if (rp->gen_pool) {
/*
* Another thread has already initialized the pool
* before us, or the pool was just about to be torn
* down. Either way we only need to increase the
* refcount and we're done.
*/
refcount_inc(&rp->refcount);
} else {
rc = init_cma_rstmem(rp);
}
mutex_unlock(&rp->mutex);
- }
- return rc;
+}
+static void release_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
- gen_pool_destroy(rp->gen_pool);
- rp->gen_pool = NULL;
- rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem);
- rp->rstmem->flags &= ~TEE_SHM_DYNAMIC;
- WARN(refcount_read(&rp->rstmem->refcount) != 1, "Unexpected refcount");
- tee_shm_put(rp->rstmem);
- rp->rstmem = NULL;
+}
+static void put_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
- if (refcount_dec_and_test(&rp->refcount)) {
mutex_lock(&rp->mutex);
if (rp->gen_pool)
release_cma_rstmem(rp);
mutex_unlock(&rp->mutex);
- }
+}
+static int rstmem_pool_op_cma_alloc(struct tee_rstmem_pool *pool,
struct sg_table *sgt, size_t size,
size_t *offs)
+{
- struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
- size_t sz = ALIGN(size, PAGE_SIZE);
- phys_addr_t pa;
- int rc;
- rc = get_cma_rstmem(rp);
- if (rc)
return rc;
- pa = gen_pool_alloc(rp->gen_pool, sz);
- if (!pa) {
rc = -ENOMEM;
goto err_put;
- }
- rc = sg_alloc_table(sgt, 1, GFP_KERNEL);
- if (rc)
goto err_free;
- sg_set_page(sgt->sgl, phys_to_page(pa), size, 0);
- *offs = pa - rp->rstmem->paddr;
- return 0;
+err_free:
- gen_pool_free(rp->gen_pool, pa, size);
+err_put:
- put_cma_rstmem(rp);
- return rc;
+}
+static void rstmem_pool_op_cma_free(struct tee_rstmem_pool *pool,
struct sg_table *sgt)
+{
- struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
- struct scatterlist *sg;
- int i;
- for_each_sgtable_sg(sgt, sg, i)
gen_pool_free(rp->gen_pool, sg_phys(sg), sg->length);
- sg_free_table(sgt);
- put_cma_rstmem(rp);
+}
+static int rstmem_pool_op_cma_update_shm(struct tee_rstmem_pool *pool,
struct sg_table *sgt, size_t offs,
struct tee_shm *shm,
struct tee_shm **parent_shm)
+{
- struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
- *parent_shm = rp->rstmem;
- return 0;
+}
+static void pool_op_cma_destroy_pool(struct tee_rstmem_pool *pool) +{
- struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
- mutex_destroy(&rp->mutex);
- kfree(rp);
+}
+static struct tee_rstmem_pool_ops rstmem_pool_ops_cma = {
- .alloc = rstmem_pool_op_cma_alloc,
- .free = rstmem_pool_op_cma_free,
- .update_shm = rstmem_pool_op_cma_update_shm,
- .destroy_pool = pool_op_cma_destroy_pool,
+};
+static int get_rstmem_config(struct optee *optee, u32 use_case,
size_t *min_size, u_int *min_align,
u16 *end_points, u_int *ep_count)
I guess this end points terminology is specific to FF-A ABI. Is there any relevance for this in the common APIs?
-Sumit
+{
- struct tee_param params[2] = {
[0] = {
.attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT,
.u.value.a = use_case,
},
[1] = {
.attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT,
},
- };
- struct optee_shm_arg_entry *entry;
- struct tee_shm *shm_param = NULL;
- struct optee_msg_arg *msg_arg;
- struct tee_shm *shm;
- u_int offs;
- int rc;
- if (end_points && *ep_count) {
params[1].u.memref.size = *ep_count * sizeof(*end_points);
shm_param = tee_shm_alloc_priv_buf(optee->ctx,
params[1].u.memref.size);
if (IS_ERR(shm_param))
return PTR_ERR(shm_param);
params[1].u.memref.shm = shm_param;
- }
- msg_arg = optee_get_msg_arg(optee->ctx, ARRAY_SIZE(params), &entry,
&shm, &offs);
- if (IS_ERR(msg_arg)) {
rc = PTR_ERR(msg_arg);
goto out_free_shm;
- }
- msg_arg->cmd = OPTEE_MSG_CMD_GET_RSTMEM_CONFIG;
- rc = optee->ops->to_msg_param(optee, msg_arg->params,
ARRAY_SIZE(params), params,
false /*!update_out*/);
- if (rc)
goto out_free_msg;
- rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false);
- if (rc)
goto out_free_msg;
- if (msg_arg->ret && msg_arg->ret != TEEC_ERROR_SHORT_BUFFER) {
rc = -EINVAL;
goto out_free_msg;
- }
- rc = optee->ops->from_msg_param(optee, params, ARRAY_SIZE(params),
msg_arg->params, true /*update_out*/);
- if (rc)
goto out_free_msg;
- if (!msg_arg->ret && end_points &&
*ep_count < params[1].u.memref.size / sizeof(u16)) {
rc = -EINVAL;
goto out_free_msg;
- }
- *min_size = params[0].u.value.a;
- *min_align = params[0].u.value.b;
- *ep_count = params[1].u.memref.size / sizeof(u16);
- if (msg_arg->ret == TEEC_ERROR_SHORT_BUFFER) {
rc = -ENOSPC;
goto out_free_msg;
- }
- if (end_points)
memcpy(end_points, tee_shm_get_va(shm_param, 0),
params[1].u.memref.size);
+out_free_msg:
- optee_free_msg_arg(optee->ctx, entry, offs);
+out_free_shm:
- if (shm_param)
tee_shm_free(shm_param);
- return rc;
+}
+struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee,
enum tee_dma_heap_id id)
+{
- struct optee_rstmem_cma_pool *rp;
- u32 use_case = id;
- size_t min_size;
- int rc;
- rp = kzalloc(sizeof(*rp), GFP_KERNEL);
- if (!rp)
return ERR_PTR(-ENOMEM);
- rp->use_case = use_case;
- rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, NULL,
&rp->end_point_count);
- if (rc) {
if (rc != -ENOSPC)
goto err;
rp->end_points = kcalloc(rp->end_point_count,
sizeof(*rp->end_points), GFP_KERNEL);
if (!rp->end_points) {
rc = -ENOMEM;
goto err;
}
rc = get_rstmem_config(optee, use_case, &min_size, &rp->align,
rp->end_points, &rp->end_point_count);
if (rc)
goto err_kfree_eps;
- }
- rp->pool.ops = &rstmem_pool_ops_cma;
- rp->optee = optee;
- rp->page_count = min_size / PAGE_SIZE;
- mutex_init(&rp->mutex);
- return &rp->pool;
+err_kfree_eps:
- kfree(rp->end_points);
+err:
- kfree(rp);
- return ERR_PTR(rc);
+}
2.43.0
Hi Sumit,
On Tue, Mar 25, 2025 at 8:42 AM Sumit Garg sumit.garg@kernel.org wrote:
On Wed, Mar 05, 2025 at 02:04:15PM +0100, Jens Wiklander wrote:
Add support in the OP-TEE backend driver for dynamic restricted memory allocation with FF-A.
The restricted memory pools for dynamically allocated restricted memory are instantiated when requested by user-space. This instantiation can fail if OP-TEE doesn't support the requested use-case of restricted memory.
Restricted memory pools based on a static carveout or dynamic allocation can coexist for different use-cases. We use only dynamic allocation with FF-A.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/optee/Makefile | 1 + drivers/tee/optee/ffa_abi.c | 143 ++++++++++++- drivers/tee/optee/optee_private.h | 13 +- drivers/tee/optee/rstmem.c | 329 ++++++++++++++++++++++++++++++ 4 files changed, 483 insertions(+), 3 deletions(-) create mode 100644 drivers/tee/optee/rstmem.c
diff --git a/drivers/tee/optee/Makefile b/drivers/tee/optee/Makefile index a6eff388d300..498969fb8e40 100644 --- a/drivers/tee/optee/Makefile +++ b/drivers/tee/optee/Makefile @@ -4,6 +4,7 @@ optee-objs += core.o optee-objs += call.o optee-objs += notif.o optee-objs += rpc.o +optee-objs += rstmem.o optee-objs += supp.o optee-objs += device.o optee-objs += smc_abi.o diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index e4b08cd195f3..6a55114232ef 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -672,6 +672,123 @@ static int optee_ffa_do_call_with_arg(struct tee_context *ctx, return optee_ffa_yielding_call(ctx, &data, rpc_arg, system_thread); }
+static int do_call_lend_rstmem(struct optee *optee, u64 cookie, u32 use_case) +{
struct optee_shm_arg_entry *entry;
struct optee_msg_arg *msg_arg;
struct tee_shm *shm;
u_int offs;
int rc;
msg_arg = optee_get_msg_arg(optee->ctx, 1, &entry, &shm, &offs);
if (IS_ERR(msg_arg))
return PTR_ERR(msg_arg);
msg_arg->cmd = OPTEE_MSG_CMD_ASSIGN_RSTMEM;
msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
msg_arg->params[0].u.value.a = cookie;
msg_arg->params[0].u.value.b = use_case;
rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false);
if (rc)
goto out;
if (msg_arg->ret != TEEC_SUCCESS) {
rc = -EINVAL;
goto out;
}
+out:
optee_free_msg_arg(optee->ctx, entry, offs);
return rc;
+}
+static int optee_ffa_lend_rstmem(struct optee *optee, struct tee_shm *rstmem,
u16 *end_points, unsigned int ep_count,
u32 use_case)
+{
struct ffa_device *ffa_dev = optee->ffa.ffa_dev;
const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops;
const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops;
struct ffa_send_direct_data data;
struct ffa_mem_region_attributes *mem_attr;
struct ffa_mem_ops_args args = {
.use_txbuf = true,
.tag = use_case,
};
struct page *page;
struct scatterlist sgl;
unsigned int n;
int rc;
mem_attr = kcalloc(ep_count, sizeof(*mem_attr), GFP_KERNEL);
if (!mem_attr)
return -ENOMEM;
for (n = 0; n < ep_count; n++) {
mem_attr[n].receiver = end_points[n];
mem_attr[n].attrs = FFA_MEM_RW;
}
args.attrs = mem_attr;
args.nattrs = ep_count;
page = phys_to_page(rstmem->paddr);
sg_init_table(&sgl, 1);
sg_set_page(&sgl, page, rstmem->size, 0);
args.sg = &sgl;
rc = mem_ops->memory_lend(&args);
kfree(mem_attr);
if (rc)
return rc;
rc = do_call_lend_rstmem(optee, args.g_handle, use_case);
if (rc)
goto err_reclaim;
rc = optee_shm_add_ffa_handle(optee, rstmem, args.g_handle);
if (rc)
goto err_unreg;
rstmem->sec_world_id = args.g_handle;
return 0;
+err_unreg:
data = (struct ffa_send_direct_data){
.data0 = OPTEE_FFA_RELEASE_RSTMEM,
.data1 = (u32)args.g_handle,
.data2 = (u32)(args.g_handle >> 32),
};
msg_ops->sync_send_receive(ffa_dev, &data);
+err_reclaim:
mem_ops->memory_reclaim(args.g_handle, 0);
return rc;
+}
+static int optee_ffa_reclaim_rstmem(struct optee *optee, struct tee_shm *rstmem) +{
struct ffa_device *ffa_dev = optee->ffa.ffa_dev;
const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops;
const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops;
u64 global_handle = rstmem->sec_world_id;
struct ffa_send_direct_data data = {
.data0 = OPTEE_FFA_RELEASE_RSTMEM,
.data1 = (u32)global_handle,
.data2 = (u32)(global_handle >> 32)
};
int rc;
optee_shm_rem_ffa_handle(optee, global_handle);
rstmem->sec_world_id = 0;
rc = msg_ops->sync_send_receive(ffa_dev, &data);
if (rc)
pr_err("Release SHM id 0x%llx rc %d\n", global_handle, rc);
rc = mem_ops->memory_reclaim(global_handle, 0);
if (rc)
pr_err("mem_reclaim: 0x%llx %d", global_handle, rc);
return rc;
+}
/*
- Driver initialization
@@ -833,6 +950,8 @@ static const struct optee_ops optee_ffa_ops = { .do_call_with_arg = optee_ffa_do_call_with_arg, .to_msg_param = optee_ffa_to_msg_param, .from_msg_param = optee_ffa_from_msg_param,
.lend_rstmem = optee_ffa_lend_rstmem,
.reclaim_rstmem = optee_ffa_reclaim_rstmem,
};
static void optee_ffa_remove(struct ffa_device *ffa_dev) @@ -941,7 +1060,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) optee->pool, optee); if (IS_ERR(teedev)) { rc = PTR_ERR(teedev);
goto err_free_pool;
goto err_free_shm_pool; } optee->teedev = teedev;
@@ -988,6 +1107,24 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) rc); }
if (IS_ENABLED(CONFIG_CMA) && !IS_MODULE(CONFIG_OPTEE) &&
The CMA dependency should be managed via Kconfig.
Yes, I'll fix it.
(sec_caps & OPTEE_FFA_SEC_CAP_RSTMEM)) {
enum tee_dma_heap_id id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY;
struct tee_rstmem_pool *pool;
pool = optee_rstmem_alloc_cma_pool(optee, id);
if (IS_ERR(pool)) {
rc = PTR_ERR(pool);
goto err_notif_uninit;
}
rc = tee_device_register_dma_heap(optee->teedev, id, pool);
if (rc) {
pool->ops->destroy_pool(pool);
goto err_notif_uninit;
}
}
rc = optee_enumerate_devices(PTA_CMD_GET_DEVICES); if (rc) goto err_unregister_devices;
@@ -1001,6 +1138,8 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
err_unregister_devices: optee_unregister_devices();
tee_device_unregister_all_dma_heaps(optee->teedev);
+err_notif_uninit: if (optee->ffa.bottom_half_value != U32_MAX) notif_ops->notify_relinquish(ffa_dev, optee->ffa.bottom_half_value); @@ -1018,7 +1157,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) tee_device_unregister(optee->supp_teedev); err_unreg_teedev: tee_device_unregister(optee->teedev); -err_free_pool: +err_free_shm_pool: tee_shm_pool_free(pool); err_free_optee: kfree(optee); diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index 20eda508dbac..faab31ad7c52 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -174,9 +174,14 @@ struct optee;
- @do_call_with_arg: enters OP-TEE in secure world
- @to_msg_param: converts from struct tee_param to OPTEE_MSG parameters
- @from_msg_param: converts from OPTEE_MSG parameters to struct tee_param
- @lend_rstmem: lends physically contiguous memory as restricted
memory, inaccessible by the kernel
- @reclaim_rstmem: reclaims restricted memory previously lent with
@lend_rstmem() and makes it accessible by the
kernel again
- These OPs are only supposed to be used internally in the OP-TEE driver
as a way of abstracting the different methods of entering OP-TEE in
secure world.
*/
struct optee_ops { @@ -191,6 +196,10 @@ struct optee_ops { size_t num_params, const struct optee_msg_param *msg_params, bool update_out);
int (*lend_rstmem)(struct optee *optee, struct tee_shm *rstmem,
u16 *end_points, unsigned int ep_count,
u32 use_case);
int (*reclaim_rstmem)(struct optee *optee, struct tee_shm *rstmem);
};
/** @@ -285,6 +294,8 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params, void optee_supp_init(struct optee_supp *supp); void optee_supp_uninit(struct optee_supp *supp); void optee_supp_release(struct optee_supp *supp); +struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee,
enum tee_dma_heap_id id);
int optee_supp_recv(struct tee_context *ctx, u32 *func, u32 *num_params, struct tee_param *param); diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c new file mode 100644 index 000000000000..ea27769934d4 --- /dev/null +++ b/drivers/tee/optee/rstmem.c @@ -0,0 +1,329 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
- Copyright (c) 2025, Linaro Limited
- */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/errno.h> +#include <linux/genalloc.h> +#include <linux/slab.h> +#include <linux/string.h> +#include <linux/tee_core.h> +#include <linux/types.h> +#include "optee_private.h"
+struct optee_rstmem_cma_pool {
struct tee_rstmem_pool pool;
struct gen_pool *gen_pool;
struct optee *optee;
size_t page_count;
u16 *end_points;
u_int end_point_count;
u_int align;
refcount_t refcount;
u32 use_case;
struct tee_shm *rstmem;
/* Protects when initializing and tearing down this struct */
struct mutex mutex;
+};
+static struct optee_rstmem_cma_pool * +to_rstmem_cma_pool(struct tee_rstmem_pool *pool) +{
return container_of(pool, struct optee_rstmem_cma_pool, pool);
+}
+static int init_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
int rc;
rp->rstmem = tee_shm_alloc_cma_phys_mem(rp->optee->ctx, rp->page_count,
rp->align);
if (IS_ERR(rp->rstmem)) {
rc = PTR_ERR(rp->rstmem);
goto err_null_rstmem;
}
/*
* TODO unmap the memory range since the physical memory will
* become inaccessible after the lend_rstmem() call.
*/
What's your plan for this TODO? I think we need a CMA allocator here which can allocate un-mapped memory such that any cache speculation won't lead to CPU hangs once the memory restriction comes into picture.
What happens is platform-specific. For some platforms, it might be enough to avoid explicit access. Yes, a CMA allocator with unmapped memory or where memory can be unmapped is one option.
rc = rp->optee->ops->lend_rstmem(rp->optee, rp->rstmem, rp->end_points,
rp->end_point_count, rp->use_case);
if (rc)
goto err_put_shm;
rp->rstmem->flags |= TEE_SHM_DYNAMIC;
rp->gen_pool = gen_pool_create(PAGE_SHIFT, -1);
if (!rp->gen_pool) {
rc = -ENOMEM;
goto err_reclaim;
}
rc = gen_pool_add(rp->gen_pool, rp->rstmem->paddr,
rp->rstmem->size, -1);
if (rc)
goto err_free_pool;
refcount_set(&rp->refcount, 1);
return 0;
+err_free_pool:
gen_pool_destroy(rp->gen_pool);
rp->gen_pool = NULL;
+err_reclaim:
rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem);
+err_put_shm:
tee_shm_put(rp->rstmem);
+err_null_rstmem:
rp->rstmem = NULL;
return rc;
+}
+static int get_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
int rc = 0;
if (!refcount_inc_not_zero(&rp->refcount)) {
mutex_lock(&rp->mutex);
if (rp->gen_pool) {
/*
* Another thread has already initialized the pool
* before us, or the pool was just about to be torn
* down. Either way we only need to increase the
* refcount and we're done.
*/
refcount_inc(&rp->refcount);
} else {
rc = init_cma_rstmem(rp);
}
mutex_unlock(&rp->mutex);
}
return rc;
+}
+static void release_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
gen_pool_destroy(rp->gen_pool);
rp->gen_pool = NULL;
rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem);
rp->rstmem->flags &= ~TEE_SHM_DYNAMIC;
WARN(refcount_read(&rp->rstmem->refcount) != 1, "Unexpected refcount");
tee_shm_put(rp->rstmem);
rp->rstmem = NULL;
+}
+static void put_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
if (refcount_dec_and_test(&rp->refcount)) {
mutex_lock(&rp->mutex);
if (rp->gen_pool)
release_cma_rstmem(rp);
mutex_unlock(&rp->mutex);
}
+}
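The get/put pairing quoted above is a lazily initialized, refcounted singleton. A simplified user-space model (a sketch: one mutex guards everything here, whereas the real code also takes a lock-free fast path via refcount_inc_not_zero() before falling back to the mutex):

```c
#include <pthread.h>

/* Model of the lazily initialized, refcounted pool; "initialized"
 * stands in for rp->gen_pool being non-NULL. */
struct lazy_pool {
	pthread_mutex_t mutex;
	unsigned int refcount;
	int initialized;
	int init_calls; /* counts how often the expensive init ran */
};

int pool_get(struct lazy_pool *p)
{
	pthread_mutex_lock(&p->mutex);
	if (p->initialized) {
		p->refcount++;          /* pool already live: take a ref */
	} else {
		p->initialized = 1;     /* init_cma_rstmem() stand-in */
		p->init_calls++;
		p->refcount = 1;
	}
	pthread_mutex_unlock(&p->mutex);
	return 0;
}

void pool_put(struct lazy_pool *p)
{
	pthread_mutex_lock(&p->mutex);
	if (--p->refcount == 0 && p->initialized)
		p->initialized = 0;     /* release_cma_rstmem() stand-in */
	pthread_mutex_unlock(&p->mutex);
}
```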
+static int rstmem_pool_op_cma_alloc(struct tee_rstmem_pool *pool,
struct sg_table *sgt, size_t size,
size_t *offs)
+{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
size_t sz = ALIGN(size, PAGE_SIZE);
phys_addr_t pa;
int rc;
rc = get_cma_rstmem(rp);
if (rc)
return rc;
pa = gen_pool_alloc(rp->gen_pool, sz);
if (!pa) {
rc = -ENOMEM;
goto err_put;
}
rc = sg_alloc_table(sgt, 1, GFP_KERNEL);
if (rc)
goto err_free;
sg_set_page(sgt->sgl, phys_to_page(pa), size, 0);
*offs = pa - rp->rstmem->paddr;
return 0;
+err_free:
gen_pool_free(rp->gen_pool, pa, size);
+err_put:
put_cma_rstmem(rp);
return rc;
+}
+static void rstmem_pool_op_cma_free(struct tee_rstmem_pool *pool,
struct sg_table *sgt)
+{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
struct scatterlist *sg;
int i;
for_each_sgtable_sg(sgt, sg, i)
gen_pool_free(rp->gen_pool, sg_phys(sg), sg->length);
sg_free_table(sgt);
put_cma_rstmem(rp);
+}
+static int rstmem_pool_op_cma_update_shm(struct tee_rstmem_pool *pool,
struct sg_table *sgt, size_t offs,
struct tee_shm *shm,
struct tee_shm **parent_shm)
+{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
*parent_shm = rp->rstmem;
return 0;
+}
+static void pool_op_cma_destroy_pool(struct tee_rstmem_pool *pool) +{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
mutex_destroy(&rp->mutex);
kfree(rp);
+}
+static struct tee_rstmem_pool_ops rstmem_pool_ops_cma = {
.alloc = rstmem_pool_op_cma_alloc,
.free = rstmem_pool_op_cma_free,
.update_shm = rstmem_pool_op_cma_update_shm,
.destroy_pool = pool_op_cma_destroy_pool,
+};
+static int get_rstmem_config(struct optee *optee, u32 use_case,
size_t *min_size, u_int *min_align,
u16 *end_points, u_int *ep_count)
I guess this end points terminology is specific to FF-A ABI. Is there any relevance for this in the common APIs?
Yes, endpoints are specific to the FF-A ABI. The list of endpoints must be passed to FFA_MEM_LEND. We rely on the secure world to know which endpoints are needed for a specific use-case.
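For illustration, the lend path builds one attribute entry per endpoint before FFA_MEM_LEND; a stripped-down model of that loop (the struct and the MEM_RW constant here are local stand-ins, not the real FF-A driver definitions):

```c
#include <stdlib.h>

#define MEM_RW 0x2 /* illustrative stand-in for FFA_MEM_RW */

/* Local model of struct ffa_mem_region_attributes. */
struct mem_region_attr {
	unsigned short receiver;
	unsigned char attrs;
};

/* Build one RW attribute entry per endpoint, as optee_ffa_lend_rstmem()
 * does before mem_ops->memory_lend(). Caller frees the array. */
struct mem_region_attr *make_attrs(const unsigned short *eps, unsigned int n)
{
	struct mem_region_attr *a = calloc(n, sizeof(*a));
	unsigned int i;

	if (!a)
		return NULL;
	for (i = 0; i < n; i++) {
		a[i].receiver = eps[i];
		a[i].attrs = MEM_RW;
	}
	return a;
}
```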
Cheers, Jens
-Sumit
+{
struct tee_param params[2] = {
[0] = {
.attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT,
.u.value.a = use_case,
},
[1] = {
.attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT,
},
};
struct optee_shm_arg_entry *entry;
struct tee_shm *shm_param = NULL;
struct optee_msg_arg *msg_arg;
struct tee_shm *shm;
u_int offs;
int rc;
if (end_points && *ep_count) {
params[1].u.memref.size = *ep_count * sizeof(*end_points);
shm_param = tee_shm_alloc_priv_buf(optee->ctx,
params[1].u.memref.size);
if (IS_ERR(shm_param))
return PTR_ERR(shm_param);
params[1].u.memref.shm = shm_param;
}
msg_arg = optee_get_msg_arg(optee->ctx, ARRAY_SIZE(params), &entry,
&shm, &offs);
if (IS_ERR(msg_arg)) {
rc = PTR_ERR(msg_arg);
goto out_free_shm;
}
msg_arg->cmd = OPTEE_MSG_CMD_GET_RSTMEM_CONFIG;
rc = optee->ops->to_msg_param(optee, msg_arg->params,
ARRAY_SIZE(params), params,
false /*!update_out*/);
if (rc)
goto out_free_msg;
rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false);
if (rc)
goto out_free_msg;
if (msg_arg->ret && msg_arg->ret != TEEC_ERROR_SHORT_BUFFER) {
rc = -EINVAL;
goto out_free_msg;
}
rc = optee->ops->from_msg_param(optee, params, ARRAY_SIZE(params),
msg_arg->params, true /*update_out*/);
if (rc)
goto out_free_msg;
if (!msg_arg->ret && end_points &&
*ep_count < params[1].u.memref.size / sizeof(u16)) {
rc = -EINVAL;
goto out_free_msg;
}
*min_size = params[0].u.value.a;
*min_align = params[0].u.value.b;
*ep_count = params[1].u.memref.size / sizeof(u16);
if (msg_arg->ret == TEEC_ERROR_SHORT_BUFFER) {
rc = -ENOSPC;
goto out_free_msg;
}
if (end_points)
memcpy(end_points, tee_shm_get_va(shm_param, 0),
params[1].u.memref.size);
+out_free_msg:
optee_free_msg_arg(optee->ctx, entry, offs);
+out_free_shm:
if (shm_param)
tee_shm_free(shm_param);
return rc;
+}
+struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee,
enum tee_dma_heap_id id)
+{
struct optee_rstmem_cma_pool *rp;
u32 use_case = id;
size_t min_size;
int rc;
rp = kzalloc(sizeof(*rp), GFP_KERNEL);
if (!rp)
return ERR_PTR(-ENOMEM);
rp->use_case = use_case;
rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, NULL,
&rp->end_point_count);
if (rc) {
if (rc != -ENOSPC)
goto err;
rp->end_points = kcalloc(rp->end_point_count,
sizeof(*rp->end_points), GFP_KERNEL);
if (!rp->end_points) {
rc = -ENOMEM;
goto err;
}
rc = get_rstmem_config(optee, use_case, &min_size, &rp->align,
rp->end_points, &rp->end_point_count);
if (rc)
goto err_kfree_eps;
}
rp->pool.ops = &rstmem_pool_ops_cma;
rp->optee = optee;
rp->page_count = min_size / PAGE_SIZE;
mutex_init(&rp->mutex);
return &rp->pool;
+err_kfree_eps:
kfree(rp->end_points);
+err:
kfree(rp);
return ERR_PTR(rc);
+}
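The double call of get_rstmem_config() above is the usual short-buffer protocol: probe with no buffer, get -ENOSPC plus the required count, allocate, then query again. A self-contained sketch of the pattern (query_config() is a mock standing in for the OPTEE_MSG_CMD_GET_RSTMEM_CONFIG call, and its endpoint values are made up for the example):

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Mock secure-world query: with no (or too small a) buffer it reports
 * the required element count and fails with -ENOSPC, mirroring
 * TEEC_ERROR_SHORT_BUFFER. */
int query_config(unsigned short *eps, unsigned int *count)
{
	static const unsigned short known[] = { 0x8001, 0x8002, 0x8003 };
	const unsigned int need = 3;

	if (!eps || *count < need) {
		*count = need;
		return -ENOSPC;
	}
	memcpy(eps, known, sizeof(known));
	*count = need;
	return 0;
}

/* Two-pass short-buffer protocol as in optee_rstmem_alloc_cma_pool():
 * probe for the count, allocate, query again. Returns the endpoint
 * count (with *out set) or a negative errno. */
int query_endpoints(unsigned short **out)
{
	unsigned int count = 0;
	unsigned short *eps;
	int rc = query_config(NULL, &count);

	if (rc != -ENOSPC)
		return rc ? rc : -EINVAL;
	eps = calloc(count, sizeof(*eps));
	if (!eps)
		return -ENOMEM;
	rc = query_config(eps, &count);
	if (rc) {
		free(eps);
		return rc;
	}
	*out = eps;
	return (int)count;
}
```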
2.43.0
+ MM folks to seek guidance here.
On Thu, Mar 27, 2025 at 09:07:34AM +0100, Jens Wiklander wrote:
Hi Sumit,
On Tue, Mar 25, 2025 at 8:42 AM Sumit Garg sumit.garg@kernel.org wrote:
On Wed, Mar 05, 2025 at 02:04:15PM +0100, Jens Wiklander wrote:
Add support in the OP-TEE backend driver for dynamic restricted memory allocation with FF-A.
The restricted memory pools for dynamically allocated restricted memory are instantiated when requested by user-space. This instantiation can fail if OP-TEE doesn't support the requested use-case of restricted memory.
Restricted memory pools based on a static carveout or dynamic allocation can coexist for different use-cases. We use only dynamic allocation with FF-A.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/optee/Makefile | 1 + drivers/tee/optee/ffa_abi.c | 143 ++++++++++++- drivers/tee/optee/optee_private.h | 13 +- drivers/tee/optee/rstmem.c | 329 ++++++++++++++++++++++++++++++ 4 files changed, 483 insertions(+), 3 deletions(-) create mode 100644 drivers/tee/optee/rstmem.c
<snip>
diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c new file mode 100644 index 000000000000..ea27769934d4 --- /dev/null +++ b/drivers/tee/optee/rstmem.c @@ -0,0 +1,329 @@ +// SPDX-License-Identifier: GPL-2.0-only +/*
- Copyright (c) 2025, Linaro Limited
- */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/errno.h> +#include <linux/genalloc.h> +#include <linux/slab.h> +#include <linux/string.h> +#include <linux/tee_core.h> +#include <linux/types.h> +#include "optee_private.h"
+struct optee_rstmem_cma_pool {
struct tee_rstmem_pool pool;
struct gen_pool *gen_pool;
struct optee *optee;
size_t page_count;
u16 *end_points;
u_int end_point_count;
u_int align;
refcount_t refcount;
u32 use_case;
struct tee_shm *rstmem;
/* Protects when initializing and tearing down this struct */
struct mutex mutex;
+};
+static struct optee_rstmem_cma_pool * +to_rstmem_cma_pool(struct tee_rstmem_pool *pool) +{
return container_of(pool, struct optee_rstmem_cma_pool, pool);
+}
+static int init_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
int rc;
rp->rstmem = tee_shm_alloc_cma_phys_mem(rp->optee->ctx, rp->page_count,
rp->align);
if (IS_ERR(rp->rstmem)) {
rc = PTR_ERR(rp->rstmem);
goto err_null_rstmem;
}
/*
* TODO unmap the memory range since the physical memory will
* become inaccessible after the lend_rstmem() call.
*/
What's your plan for this TODO? I think we need a CMA allocator here which can allocate un-mapped memory such that any cache speculation won't lead to CPU hangs once the memory restriction comes into picture.
What happens is platform-specific. For some platforms, it might be enough to avoid explicit access. Yes, a CMA allocator with unmapped memory or where memory can be unmapped is one option.
Did you get a chance to enable real memory protection on the RockPi board? That would at least ensure that mapped restricted memory without explicit access works fine. Otherwise, once people start to enable real memory restriction in OP-TEE, there is a risk of random hangs due to cache speculation.
MM folks,
Basically, what we are trying to achieve here is "no-map" DT behaviour [1] that is dynamic in nature. The use-case is that a memory block allocated from CMA can be marked restricted at runtime, such that Linux can no longer access it either directly or indirectly (via cache speculation). Once the memory restriction use-case has completed, the memory block can be marked as normal again and freed for further CMA allocation.
It would be appreciated if you could guide us regarding the appropriate APIs to use for unmapping/remapping CMA allocations for this use-case.
[1] https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/reser...
-Sumit
rc = rp->optee->ops->lend_rstmem(rp->optee, rp->rstmem, rp->end_points,
rp->end_point_count, rp->use_case);
if (rc)
goto err_put_shm;
rp->rstmem->flags |= TEE_SHM_DYNAMIC;
rp->gen_pool = gen_pool_create(PAGE_SHIFT, -1);
if (!rp->gen_pool) {
rc = -ENOMEM;
goto err_reclaim;
}
rc = gen_pool_add(rp->gen_pool, rp->rstmem->paddr,
rp->rstmem->size, -1);
if (rc)
goto err_free_pool;
refcount_set(&rp->refcount, 1);
return 0;
+err_free_pool:
gen_pool_destroy(rp->gen_pool);
rp->gen_pool = NULL;
+err_reclaim:
rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem);
+err_put_shm:
tee_shm_put(rp->rstmem);
+err_null_rstmem:
rp->rstmem = NULL;
return rc;
+}
+static int get_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
int rc = 0;
if (!refcount_inc_not_zero(&rp->refcount)) {
mutex_lock(&rp->mutex);
if (rp->gen_pool) {
/*
* Another thread has already initialized the pool
* before us, or the pool was just about to be torn
* down. Either way we only need to increase the
* refcount and we're done.
*/
refcount_inc(&rp->refcount);
} else {
rc = init_cma_rstmem(rp);
}
mutex_unlock(&rp->mutex);
}
return rc;
+}
+static void release_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
gen_pool_destroy(rp->gen_pool);
rp->gen_pool = NULL;
rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem);
rp->rstmem->flags &= ~TEE_SHM_DYNAMIC;
WARN(refcount_read(&rp->rstmem->refcount) != 1, "Unexpected refcount");
tee_shm_put(rp->rstmem);
rp->rstmem = NULL;
+}
+static void put_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{
if (refcount_dec_and_test(&rp->refcount)) {
mutex_lock(&rp->mutex);
if (rp->gen_pool)
release_cma_rstmem(rp);
mutex_unlock(&rp->mutex);
}
+}
+static int rstmem_pool_op_cma_alloc(struct tee_rstmem_pool *pool,
struct sg_table *sgt, size_t size,
size_t *offs)
+{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
size_t sz = ALIGN(size, PAGE_SIZE);
phys_addr_t pa;
int rc;
rc = get_cma_rstmem(rp);
if (rc)
return rc;
pa = gen_pool_alloc(rp->gen_pool, sz);
if (!pa) {
rc = -ENOMEM;
goto err_put;
}
rc = sg_alloc_table(sgt, 1, GFP_KERNEL);
if (rc)
goto err_free;
sg_set_page(sgt->sgl, phys_to_page(pa), size, 0);
*offs = pa - rp->rstmem->paddr;
return 0;
+err_free:
gen_pool_free(rp->gen_pool, pa, size);
+err_put:
put_cma_rstmem(rp);
return rc;
+}
+static void rstmem_pool_op_cma_free(struct tee_rstmem_pool *pool,
struct sg_table *sgt)
+{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
struct scatterlist *sg;
int i;
for_each_sgtable_sg(sgt, sg, i)
gen_pool_free(rp->gen_pool, sg_phys(sg), sg->length);
sg_free_table(sgt);
put_cma_rstmem(rp);
+}
+static int rstmem_pool_op_cma_update_shm(struct tee_rstmem_pool *pool,
struct sg_table *sgt, size_t offs,
struct tee_shm *shm,
struct tee_shm **parent_shm)
+{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
*parent_shm = rp->rstmem;
return 0;
+}
+static void pool_op_cma_destroy_pool(struct tee_rstmem_pool *pool) +{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
mutex_destroy(&rp->mutex);
kfree(rp);
+}
+static struct tee_rstmem_pool_ops rstmem_pool_ops_cma = {
.alloc = rstmem_pool_op_cma_alloc,
.free = rstmem_pool_op_cma_free,
.update_shm = rstmem_pool_op_cma_update_shm,
.destroy_pool = pool_op_cma_destroy_pool,
+};
+static int get_rstmem_config(struct optee *optee, u32 use_case,
size_t *min_size, u_int *min_align,
u16 *end_points, u_int *ep_count)
I guess this end points terminology is specific to FF-A ABI. Is there any relevance for this in the common APIs?
Yes, endpoints are specific to FF-A ABI. The list of end-points must be presented to FFA_MEM_LEND. We're relying on the secure world to know which endpoints are needed for a specific use case.
Cheers, Jens
-Sumit
+{
struct tee_param params[2] = {
[0] = {
.attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT,
.u.value.a = use_case,
},
[1] = {
.attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT,
},
};
struct optee_shm_arg_entry *entry;
struct tee_shm *shm_param = NULL;
struct optee_msg_arg *msg_arg;
struct tee_shm *shm;
u_int offs;
int rc;
if (end_points && *ep_count) {
params[1].u.memref.size = *ep_count * sizeof(*end_points);
shm_param = tee_shm_alloc_priv_buf(optee->ctx,
params[1].u.memref.size);
if (IS_ERR(shm_param))
return PTR_ERR(shm_param);
params[1].u.memref.shm = shm_param;
}
msg_arg = optee_get_msg_arg(optee->ctx, ARRAY_SIZE(params), &entry,
&shm, &offs);
if (IS_ERR(msg_arg)) {
rc = PTR_ERR(msg_arg);
goto out_free_shm;
}
msg_arg->cmd = OPTEE_MSG_CMD_GET_RSTMEM_CONFIG;
rc = optee->ops->to_msg_param(optee, msg_arg->params,
ARRAY_SIZE(params), params,
false /*!update_out*/);
if (rc)
goto out_free_msg;
rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false);
if (rc)
goto out_free_msg;
if (msg_arg->ret && msg_arg->ret != TEEC_ERROR_SHORT_BUFFER) {
rc = -EINVAL;
goto out_free_msg;
}
rc = optee->ops->from_msg_param(optee, params, ARRAY_SIZE(params),
msg_arg->params, true /*update_out*/);
if (rc)
goto out_free_msg;
if (!msg_arg->ret && end_points &&
*ep_count < params[1].u.memref.size / sizeof(u16)) {
rc = -EINVAL;
goto out_free_msg;
}
*min_size = params[0].u.value.a;
*min_align = params[0].u.value.b;
*ep_count = params[1].u.memref.size / sizeof(u16);
if (msg_arg->ret == TEEC_ERROR_SHORT_BUFFER) {
rc = -ENOSPC;
goto out_free_msg;
}
if (end_points)
memcpy(end_points, tee_shm_get_va(shm_param, 0),
params[1].u.memref.size);
+out_free_msg:
optee_free_msg_arg(optee->ctx, entry, offs);
+out_free_shm:
if (shm_param)
tee_shm_free(shm_param);
return rc;
+}
+struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee,
enum tee_dma_heap_id id)
+{
struct optee_rstmem_cma_pool *rp;
u32 use_case = id;
size_t min_size;
int rc;
rp = kzalloc(sizeof(*rp), GFP_KERNEL);
if (!rp)
return ERR_PTR(-ENOMEM);
rp->use_case = use_case;
rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, NULL,
&rp->end_point_count);
if (rc) {
if (rc != -ENOSPC)
goto err;
rp->end_points = kcalloc(rp->end_point_count,
sizeof(*rp->end_points), GFP_KERNEL);
if (!rp->end_points) {
rc = -ENOMEM;
goto err;
}
rc = get_rstmem_config(optee, use_case, &min_size, &rp->align,
rp->end_points, &rp->end_point_count);
if (rc)
goto err_kfree_eps;
}
rp->pool.ops = &rstmem_pool_ops_cma;
rp->optee = optee;
rp->page_count = min_size / PAGE_SIZE;
mutex_init(&rp->mutex);
return &rp->pool;
+err_kfree_eps:
kfree(rp->end_points);
+err:
kfree(rp);
return ERR_PTR(rc);
+}
2.43.0
On Tue, Apr 1, 2025 at 12:13 PM Sumit Garg sumit.garg@kernel.org wrote:
- MM folks to seek guidance here.
On Thu, Mar 27, 2025 at 09:07:34AM +0100, Jens Wiklander wrote:
Hi Sumit,
On Tue, Mar 25, 2025 at 8:42 AM Sumit Garg sumit.garg@kernel.org wrote:
On Wed, Mar 05, 2025 at 02:04:15PM +0100, Jens Wiklander wrote:
Add support in the OP-TEE backend driver for dynamic restricted memory allocation with FF-A.
The restricted memory pools for dynamically allocated restricted memory are instantiated when requested by user-space. This instantiation can fail if OP-TEE doesn't support the requested restricted-memory use-case.
Restricted memory pools based on a static carveout or dynamic allocation can coexist for different use-cases. We use only dynamic allocation with FF-A.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
 drivers/tee/optee/Makefile        |   1 +
 drivers/tee/optee/ffa_abi.c       | 143 ++++++++++++-
 drivers/tee/optee/optee_private.h |  13 +-
 drivers/tee/optee/rstmem.c        | 329 ++++++++++++++++++++++++++++++
 4 files changed, 483 insertions(+), 3 deletions(-)
 create mode 100644 drivers/tee/optee/rstmem.c
<snip>
diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c
new file mode 100644
index 000000000000..ea27769934d4
--- /dev/null
+++ b/drivers/tee/optee/rstmem.c
@@ -0,0 +1,329 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2025, Linaro Limited
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/errno.h>
+#include <linux/genalloc.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/tee_core.h>
+#include <linux/types.h>
+#include "optee_private.h"
+struct optee_rstmem_cma_pool {
struct tee_rstmem_pool pool;
struct gen_pool *gen_pool;
struct optee *optee;
size_t page_count;
u16 *end_points;
u_int end_point_count;
u_int align;
refcount_t refcount;
u32 use_case;
struct tee_shm *rstmem;
/* Protects when initializing and tearing down this struct */
struct mutex mutex;
+};
+static struct optee_rstmem_cma_pool *
+to_rstmem_cma_pool(struct tee_rstmem_pool *pool)
+{
return container_of(pool, struct optee_rstmem_cma_pool, pool);
+}
+static int init_cma_rstmem(struct optee_rstmem_cma_pool *rp)
+{
int rc;
rp->rstmem = tee_shm_alloc_cma_phys_mem(rp->optee->ctx, rp->page_count,
rp->align);
if (IS_ERR(rp->rstmem)) {
rc = PTR_ERR(rp->rstmem);
goto err_null_rstmem;
}
/*
* TODO unmap the memory range since the physical memory will
* become inaccessible after the lend_rstmem() call.
*/
What's your plan for this TODO? I think we need a CMA allocator here which can allocate unmapped memory, such that any cache speculation won't lead to CPU hangs once the memory restriction comes into the picture.
What happens is platform-specific. For some platforms, it might be enough to avoid explicit access. Yes, a CMA allocator with unmapped memory, or where memory can be unmapped, is one option.
Did you get a chance to enable real memory protection on RockPi board?
No, I don't think I have access to the needed documentation for the board to set it up for relevant peripherals.
This would at least ensure that mapped restricted memory works fine without explicit access. Otherwise, once people start to enable real memory restriction in OP-TEE, there can be random hang-ups due to cache speculation.
A hypervisor in the normal world can also make the memory inaccessible to the kernel. That shouldn't cause any hangups due to cache speculation.
Cheers, Jens
MM folks,
Basically what we are trying to achieve here is a "no-map" DT behaviour [1] which is rather dynamic in nature. The use-case here is that a memory block allocated from CMA can be marked restricted at runtime, where we would like Linux to be unable to access it either directly or indirectly (via cache speculation). Once the memory-restriction use-case has completed, the memory block can be marked as normal again and freed for further CMA allocation.
It would be appreciated if you can guide us regarding the appropriate APIs to use for unmapping/mapping CMA allocations for this use-case.
[1] https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/reser...
-Sumit
rc = rp->optee->ops->lend_rstmem(rp->optee, rp->rstmem, rp->end_points,
rp->end_point_count, rp->use_case);
if (rc)
goto err_put_shm;
rp->rstmem->flags |= TEE_SHM_DYNAMIC;
rp->gen_pool = gen_pool_create(PAGE_SHIFT, -1);
if (!rp->gen_pool) {
rc = -ENOMEM;
goto err_reclaim;
}
rc = gen_pool_add(rp->gen_pool, rp->rstmem->paddr,
rp->rstmem->size, -1);
if (rc)
goto err_free_pool;
refcount_set(&rp->refcount, 1);
return 0;
+err_free_pool:
gen_pool_destroy(rp->gen_pool);
rp->gen_pool = NULL;
+err_reclaim:
rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem);
+err_put_shm:
tee_shm_put(rp->rstmem);
+err_null_rstmem:
rp->rstmem = NULL;
return rc;
+}
+static int get_cma_rstmem(struct optee_rstmem_cma_pool *rp)
+{
int rc = 0;
if (!refcount_inc_not_zero(&rp->refcount)) {
mutex_lock(&rp->mutex);
if (rp->gen_pool) {
/*
* Another thread has already initialized the pool
* before us, or the pool was just about to be torn
* down. Either way we only need to increase the
* refcount and we're done.
*/
refcount_inc(&rp->refcount);
} else {
rc = init_cma_rstmem(rp);
}
mutex_unlock(&rp->mutex);
}
return rc;
+}
+static void release_cma_rstmem(struct optee_rstmem_cma_pool *rp)
+{
gen_pool_destroy(rp->gen_pool);
rp->gen_pool = NULL;
rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem);
rp->rstmem->flags &= ~TEE_SHM_DYNAMIC;
WARN(refcount_read(&rp->rstmem->refcount) != 1, "Unexpected refcount");
tee_shm_put(rp->rstmem);
rp->rstmem = NULL;
+}
+static void put_cma_rstmem(struct optee_rstmem_cma_pool *rp)
+{
if (refcount_dec_and_test(&rp->refcount)) {
mutex_lock(&rp->mutex);
if (rp->gen_pool)
release_cma_rstmem(rp);
mutex_unlock(&rp->mutex);
}
+}
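The get/put pair above implements "lazy init on first user, teardown on last put": the fast path only bumps a refcount, and the slow path takes the mutex to initialize or re-check. A self-contained userspace model of that pattern (all `demo_*` names are hypothetical; `refcount_inc_not_zero()` is rebuilt here from an atomic compare-and-swap, as in the kernel):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

struct demo_pool {
	pthread_mutex_t mutex;
	atomic_uint refcount;	/* 0 means "not initialized" */
	bool initialized;	/* stands in for rp->gen_pool != NULL */
	int init_calls;		/* instrumentation for the example */
	int teardown_calls;
};

/* Bump the refcount only if it is non-zero, like the kernel helper. */
static bool demo_inc_not_zero(atomic_uint *r)
{
	unsigned int old = atomic_load(r);

	while (old) {
		if (atomic_compare_exchange_weak(r, &old, old + 1))
			return true;
	}
	return false;
}

int demo_pool_get(struct demo_pool *p)
{
	if (!demo_inc_not_zero(&p->refcount)) {
		pthread_mutex_lock(&p->mutex);
		if (p->initialized) {
			/* Raced with another get or a pending teardown */
			atomic_fetch_add(&p->refcount, 1);
		} else {
			/* stands in for init_cma_rstmem() */
			p->initialized = true;
			p->init_calls++;
			atomic_store(&p->refcount, 1);
		}
		pthread_mutex_unlock(&p->mutex);
	}
	return 0;
}

void demo_pool_put(struct demo_pool *p)
{
	if (atomic_fetch_sub(&p->refcount, 1) == 1) {
		pthread_mutex_lock(&p->mutex);
		if (p->initialized) {
			/* stands in for release_cma_rstmem() */
			p->initialized = false;
			p->teardown_calls++;
		}
		pthread_mutex_unlock(&p->mutex);
	}
}
```

The re-check under the mutex is what makes the "pool was just about to be torn down" comment in get_cma_rstmem() work: a concurrent last put may still hold the mutex when a new get arrives.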
+static int rstmem_pool_op_cma_alloc(struct tee_rstmem_pool *pool,
struct sg_table *sgt, size_t size,
size_t *offs)
+{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
size_t sz = ALIGN(size, PAGE_SIZE);
phys_addr_t pa;
int rc;
rc = get_cma_rstmem(rp);
if (rc)
return rc;
pa = gen_pool_alloc(rp->gen_pool, sz);
if (!pa) {
rc = -ENOMEM;
goto err_put;
}
rc = sg_alloc_table(sgt, 1, GFP_KERNEL);
if (rc)
goto err_free;
sg_set_page(sgt->sgl, phys_to_page(pa), size, 0);
*offs = pa - rp->rstmem->paddr;
return 0;
+err_free:
gen_pool_free(rp->gen_pool, pa, size);
+err_put:
put_cma_rstmem(rp);
return rc;
+}
+static void rstmem_pool_op_cma_free(struct tee_rstmem_pool *pool,
struct sg_table *sgt)
+{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
struct scatterlist *sg;
int i;
for_each_sgtable_sg(sgt, sg, i)
gen_pool_free(rp->gen_pool, sg_phys(sg), sg->length);
sg_free_table(sgt);
put_cma_rstmem(rp);
+}
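The alloc/free pair above rounds sizes up to page granularity, carves chunks out of the one lent region, and reports buffer positions as offsets from the carveout base (`*offs = pa - rp->rstmem->paddr`). A userspace model of that arithmetic, with a trivial first-fit bitmap standing in for the gen_pool (all `demo_*` names are hypothetical):

```c
#include <stdint.h>
#include <string.h>

#define DEMO_PAGE_SIZE	4096u
#define DEMO_PAGES	16u

struct demo_carveout {
	uint64_t paddr;			/* base of the lent region */
	unsigned char used[DEMO_PAGES];	/* one byte per page, 1 = allocated */
};

/* Round up to whole pages, first-fit scan; returns offset or -1 (-ENOMEM). */
long demo_alloc(struct demo_carveout *c, size_t size)
{
	size_t pages = (size + DEMO_PAGE_SIZE - 1) / DEMO_PAGE_SIZE;
	size_t run = 0;

	for (size_t i = 0; i < DEMO_PAGES; i++) {
		run = c->used[i] ? 0 : run + 1;
		if (run == pages) {
			size_t first = i + 1 - pages;

			memset(&c->used[first], 1, pages);
			/* offset relative to the carveout base, as in *offs */
			return (long)(first * DEMO_PAGE_SIZE);
		}
	}
	return -1;
}

void demo_free(struct demo_carveout *c, long offs, size_t size)
{
	size_t pages = (size + DEMO_PAGE_SIZE - 1) / DEMO_PAGE_SIZE;

	memset(&c->used[offs / DEMO_PAGE_SIZE], 0, pages);
}
```

Reporting offsets rather than physical addresses is what lets the DMA-heap side stay ignorant of where the carveout actually sits.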
+static int rstmem_pool_op_cma_update_shm(struct tee_rstmem_pool *pool,
struct sg_table *sgt, size_t offs,
struct tee_shm *shm,
struct tee_shm **parent_shm)
+{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
*parent_shm = rp->rstmem;
return 0;
+}
+static void pool_op_cma_destroy_pool(struct tee_rstmem_pool *pool)
+{
struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool);
mutex_destroy(&rp->mutex);
kfree(rp);
+}
+static struct tee_rstmem_pool_ops rstmem_pool_ops_cma = {
.alloc = rstmem_pool_op_cma_alloc,
.free = rstmem_pool_op_cma_free,
.update_shm = rstmem_pool_op_cma_update_shm,
.destroy_pool = pool_op_cma_destroy_pool,
+};
+static int get_rstmem_config(struct optee *optee, u32 use_case,
size_t *min_size, u_int *min_align,
u16 *end_points, u_int *ep_count)
I guess this end points terminology is specific to FF-A ABI. Is there any relevance for this in the common APIs?
Yes, endpoints are specific to FF-A ABI. The list of end-points must be presented to FFA_MEM_LEND. We're relying on the secure world to know which endpoints are needed for a specific use case.
Cheers, Jens
-Sumit
+{
struct tee_param params[2] = {
[0] = {
.attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT,
.u.value.a = use_case,
},
[1] = {
.attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT,
},
};
struct optee_shm_arg_entry *entry;
struct tee_shm *shm_param = NULL;
struct optee_msg_arg *msg_arg;
struct tee_shm *shm;
u_int offs;
int rc;
if (end_points && *ep_count) {
params[1].u.memref.size = *ep_count * sizeof(*end_points);
shm_param = tee_shm_alloc_priv_buf(optee->ctx,
params[1].u.memref.size);
if (IS_ERR(shm_param))
return PTR_ERR(shm_param);
params[1].u.memref.shm = shm_param;
}
msg_arg = optee_get_msg_arg(optee->ctx, ARRAY_SIZE(params), &entry,
&shm, &offs);
if (IS_ERR(msg_arg)) {
rc = PTR_ERR(msg_arg);
goto out_free_shm;
}
msg_arg->cmd = OPTEE_MSG_CMD_GET_RSTMEM_CONFIG;
rc = optee->ops->to_msg_param(optee, msg_arg->params,
ARRAY_SIZE(params), params,
false /*!update_out*/);
if (rc)
goto out_free_msg;
rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false);
if (rc)
goto out_free_msg;
if (msg_arg->ret && msg_arg->ret != TEEC_ERROR_SHORT_BUFFER) {
rc = -EINVAL;
goto out_free_msg;
}
rc = optee->ops->from_msg_param(optee, params, ARRAY_SIZE(params),
msg_arg->params, true /*update_out*/);
if (rc)
goto out_free_msg;
if (!msg_arg->ret && end_points &&
*ep_count < params[1].u.memref.size / sizeof(u16)) {
rc = -EINVAL;
goto out_free_msg;
}
*min_size = params[0].u.value.a;
*min_align = params[0].u.value.b;
*ep_count = params[1].u.memref.size / sizeof(u16);
if (msg_arg->ret == TEEC_ERROR_SHORT_BUFFER) {
rc = -ENOSPC;
goto out_free_msg;
}
if (end_points)
memcpy(end_points, tee_shm_get_va(shm_param, 0),
params[1].u.memref.size);
+out_free_msg:
optee_free_msg_arg(optee->ctx, entry, offs);
+out_free_shm:
if (shm_param)
tee_shm_free(shm_param);
return rc;
+}
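get_rstmem_config() implements a classic two-call query protocol: the first call passes no endpoint buffer, the secure world answers TEEC_ERROR_SHORT_BUFFER plus the required element count (mapped to -ENOSPC), and the caller retries with a correctly sized buffer. A minimal userspace model of that contract, with a fixed endpoint list standing in for the secure world (all `demo_*` names and values are hypothetical):

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for the secure world: three endpoints for every use-case. */
static const uint16_t demo_eps[] = { 0x8001, 0x8002, 0x8003 };

int demo_get_config(uint16_t *end_points, unsigned int *ep_count)
{
	unsigned int needed = sizeof(demo_eps) / sizeof(demo_eps[0]);

	if (!end_points || *ep_count < needed) {
		/* Tell the caller how large the retry buffer must be */
		*ep_count = needed;
		return -ENOSPC;	/* TEEC_ERROR_SHORT_BUFFER */
	}
	memcpy(end_points, demo_eps, sizeof(demo_eps));
	*ep_count = needed;
	return 0;
}
```

This is the same shape the caller in optee_rstmem_alloc_cma_pool() relies on: call once with a NULL list, and on -ENOSPC allocate `*ep_count` entries and call again.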
+struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee,
enum tee_dma_heap_id id)
+{
struct optee_rstmem_cma_pool *rp;
u32 use_case = id;
size_t min_size;
int rc;
rp = kzalloc(sizeof(*rp), GFP_KERNEL);
if (!rp)
return ERR_PTR(-ENOMEM);
rp->use_case = use_case;
rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, NULL,
&rp->end_point_count);
if (rc) {
if (rc != -ENOSPC)
goto err;
rp->end_points = kcalloc(rp->end_point_count,
sizeof(*rp->end_points), GFP_KERNEL);
if (!rp->end_points) {
rc = -ENOMEM;
goto err;
}
rc = get_rstmem_config(optee, use_case, &min_size, &rp->align,
rp->end_points, &rp->end_point_count);
if (rc)
goto err_kfree_eps;
}
rp->pool.ops = &rstmem_pool_ops_cma;
rp->optee = optee;
rp->page_count = min_size / PAGE_SIZE;
mutex_init(&rp->mutex);
return &rp->pool;
+err_kfree_eps:
kfree(rp->end_points);
+err:
kfree(rp);
return ERR_PTR(rc);
+}
2.43.0
Add support in the OP-TEE backend driver for dynamic restricted memory allocation using the SMC ABI.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
---
 drivers/tee/optee/smc_abi.c | 96 +++++++++++++++++++++++++++++++------
 1 file changed, 81 insertions(+), 15 deletions(-)
diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c
index a14ff0b7d3b3..aa574ee6e277 100644
--- a/drivers/tee/optee/smc_abi.c
+++ b/drivers/tee/optee/smc_abi.c
@@ -1001,6 +1001,69 @@ static int optee_smc_do_call_with_arg(struct tee_context *ctx,
 	return rc;
 }
 
+static int optee_smc_lend_rstmem(struct optee *optee, struct tee_shm *rstmem,
+				 u16 *end_points, unsigned int ep_count,
+				 u32 use_case)
+{
+	struct optee_shm_arg_entry *entry;
+	struct optee_msg_arg *msg_arg;
+	struct tee_shm *shm;
+	u_int offs;
+	int rc;
+
+	msg_arg = optee_get_msg_arg(optee->ctx, 2, &entry, &shm, &offs);
+	if (IS_ERR(msg_arg))
+		return PTR_ERR(msg_arg);
+
+	msg_arg->cmd = OPTEE_MSG_CMD_LEND_RSTMEM;
+	msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
+	msg_arg->params[0].u.value.a = use_case;
+	msg_arg->params[1].attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT;
+	msg_arg->params[1].u.tmem.buf_ptr = rstmem->paddr;
+	msg_arg->params[1].u.tmem.size = rstmem->size;
+	msg_arg->params[1].u.tmem.shm_ref = (u_long)rstmem;
+
+	rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false);
+	if (rc)
+		goto out;
+	if (msg_arg->ret != TEEC_SUCCESS) {
+		rc = -EINVAL;
+		goto out;
+	}
+	rstmem->sec_world_id = (u_long)rstmem;
+
+out:
+	optee_free_msg_arg(optee->ctx, entry, offs);
+	return rc;
+}
+
+static int optee_smc_reclaim_rstmem(struct optee *optee, struct tee_shm *rstmem)
+{
+	struct optee_shm_arg_entry *entry;
+	struct optee_msg_arg *msg_arg;
+	struct tee_shm *shm;
+	u_int offs;
+	int rc;
+
+	msg_arg = optee_get_msg_arg(optee->ctx, 1, &entry, &shm, &offs);
+	if (IS_ERR(msg_arg))
+		return PTR_ERR(msg_arg);
+
+	msg_arg->cmd = OPTEE_MSG_CMD_RECLAIM_RSTMEM;
+	msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT;
+	msg_arg->params[0].u.rmem.shm_ref = (u_long)rstmem;
+
+	rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false);
+	if (rc)
+		goto out;
+	if (msg_arg->ret != TEEC_SUCCESS)
+		rc = -EINVAL;
+
+out:
+	optee_free_msg_arg(optee->ctx, entry, offs);
+	return rc;
+}
+
 /*
  * 5. Asynchronous notification
  */
@@ -1252,6 +1315,8 @@ static const struct optee_ops optee_ops = {
 	.do_call_with_arg = optee_smc_do_call_with_arg,
 	.to_msg_param = optee_to_msg_param,
 	.from_msg_param = optee_from_msg_param,
+	.lend_rstmem = optee_smc_lend_rstmem,
+	.reclaim_rstmem = optee_smc_reclaim_rstmem,
 };
 
 static int enable_async_notif(optee_invoke_fn *invoke_fn)
@@ -1622,11 +1687,13 @@ static inline int optee_load_fw(struct platform_device *pdev,
 
 static int optee_sdp_pool_init(struct optee *optee)
 {
+	bool sdp = optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP;
+	bool dyn_sdp = optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_RSTMEM;
 	enum tee_dma_heap_id heap_id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY;
-	struct tee_rstmem_pool *pool;
-	int rc;
+	struct tee_rstmem_pool *pool = ERR_PTR(-EINVAL);
+	int rc = -EINVAL;
 
-	if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP) {
+	if (sdp) {
 		union {
 			struct arm_smccc_res smccc;
 			struct optee_smc_get_sdp_config_result result;
@@ -1634,25 +1701,24 @@ static int optee_sdp_pool_init(struct optee *optee)
 
 		optee->smc.invoke_fn(OPTEE_SMC_GET_SDP_CONFIG, 0, 0, 0, 0, 0,
 				     0, 0, &res.smccc);
-		if (res.result.status != OPTEE_SMC_RETURN_OK) {
-			pr_err("Secure Data Path service not available\n");
-			return 0;
-		}
+		if (res.result.status == OPTEE_SMC_RETURN_OK)
+			pool = tee_rstmem_static_pool_alloc(res.result.start,
+							    res.result.size);
+	}
 
-	pool = tee_rstmem_static_pool_alloc(res.result.start,
-					    res.result.size);
-	if (IS_ERR(pool))
-		return PTR_ERR(pool);
+	if (dyn_sdp && IS_ERR(pool))
+		pool = optee_rstmem_alloc_cma_pool(optee, heap_id);
 
+	if (!IS_ERR(pool)) {
 		rc = tee_device_register_dma_heap(optee->teedev, heap_id, pool);
 		if (rc)
-			goto err;
+			pool->ops->destroy_pool(pool);
 	}
 
+	if (rc && (sdp || dyn_sdp))
+		pr_err("Secure Data Path service not available\n");
+
 	return 0;
-err:
-	pool->ops->destroy_pool(pool);
-	return rc;
 }
static int optee_probe(struct platform_device *pdev)
On Wed, Mar 05, 2025 at 02:04:16PM +0100, Jens Wiklander wrote:
Add support in the OP-TEE backend driver for dynamic restricted memory allocation using the SMC ABI.
Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
 drivers/tee/optee/smc_abi.c | 96 +++++++++++++++++++++++++++++++------
 1 file changed, 81 insertions(+), 15 deletions(-)
diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c
index a14ff0b7d3b3..aa574ee6e277 100644
--- a/drivers/tee/optee/smc_abi.c
+++ b/drivers/tee/optee/smc_abi.c
@@ -1001,6 +1001,69 @@ static int optee_smc_do_call_with_arg(struct tee_context *ctx,
 	return rc;
 }
+
+static int optee_smc_lend_rstmem(struct optee *optee, struct tee_shm *rstmem,
u16 *end_points, unsigned int ep_count,
u32 use_case)
+{
- struct optee_shm_arg_entry *entry;
- struct optee_msg_arg *msg_arg;
- struct tee_shm *shm;
- u_int offs;
- int rc;
- msg_arg = optee_get_msg_arg(optee->ctx, 2, &entry, &shm, &offs);
- if (IS_ERR(msg_arg))
return PTR_ERR(msg_arg);
- msg_arg->cmd = OPTEE_MSG_CMD_LEND_RSTMEM;
- msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT;
- msg_arg->params[0].u.value.a = use_case;
- msg_arg->params[1].attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT;
- msg_arg->params[1].u.tmem.buf_ptr = rstmem->paddr;
- msg_arg->params[1].u.tmem.size = rstmem->size;
- msg_arg->params[1].u.tmem.shm_ref = (u_long)rstmem;
- rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false);
- if (rc)
goto out;
- if (msg_arg->ret != TEEC_SUCCESS) {
rc = -EINVAL;
goto out;
- }
- rstmem->sec_world_id = (u_long)rstmem;
+out:
- optee_free_msg_arg(optee->ctx, entry, offs);
- return rc;
+}
+static int optee_smc_reclaim_rstmem(struct optee *optee, struct tee_shm *rstmem)
+{
- struct optee_shm_arg_entry *entry;
- struct optee_msg_arg *msg_arg;
- struct tee_shm *shm;
- u_int offs;
- int rc;
- msg_arg = optee_get_msg_arg(optee->ctx, 1, &entry, &shm, &offs);
- if (IS_ERR(msg_arg))
return PTR_ERR(msg_arg);
- msg_arg->cmd = OPTEE_MSG_CMD_RECLAIM_RSTMEM;
- msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT;
- msg_arg->params[0].u.rmem.shm_ref = (u_long)rstmem;
- rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false);
- if (rc)
goto out;
- if (msg_arg->ret != TEEC_SUCCESS)
rc = -EINVAL;
+out:
- optee_free_msg_arg(optee->ctx, entry, offs);
- return rc;
+}
 /*
  * 5. Asynchronous notification
  */
@@ -1252,6 +1315,8 @@ static const struct optee_ops optee_ops = {
 	.do_call_with_arg = optee_smc_do_call_with_arg,
 	.to_msg_param = optee_to_msg_param,
 	.from_msg_param = optee_from_msg_param,
+	.lend_rstmem = optee_smc_lend_rstmem,
+	.reclaim_rstmem = optee_smc_reclaim_rstmem,
 };

 static int enable_async_notif(optee_invoke_fn *invoke_fn)
@@ -1622,11 +1687,13 @@ static inline int optee_load_fw(struct platform_device *pdev,

 static int optee_sdp_pool_init(struct optee *optee)
 {
+	bool sdp = optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP;
+	bool dyn_sdp = optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_RSTMEM;
 	enum tee_dma_heap_id heap_id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY;
-	struct tee_rstmem_pool *pool;
-	int rc;
+	struct tee_rstmem_pool *pool = ERR_PTR(-EINVAL);
+	int rc = -EINVAL;

-	if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP) {
+	if (sdp) {
 		union {
 			struct arm_smccc_res smccc;
 			struct optee_smc_get_sdp_config_result result;
@@ -1634,25 +1701,24 @@ static int optee_sdp_pool_init(struct optee *optee)

 		optee->smc.invoke_fn(OPTEE_SMC_GET_SDP_CONFIG, 0, 0, 0, 0, 0,
 				     0, 0, &res.smccc);
if (res.result.status != OPTEE_SMC_RETURN_OK) {
pr_err("Secure Data Path service not available\n");
return 0;
}
if (res.result.status == OPTEE_SMC_RETURN_OK)
pool = tee_rstmem_static_pool_alloc(res.result.start,
res.result.size);
- }
pool = tee_rstmem_static_pool_alloc(res.result.start,
res.result.size);
if (IS_ERR(pool))
return PTR_ERR(pool);
- if (dyn_sdp && IS_ERR(pool))
pool = optee_rstmem_alloc_cma_pool(optee, heap_id);
+	if (!IS_ERR(pool)) {
 		rc = tee_device_register_dma_heap(optee->teedev, heap_id, pool);
 		if (rc)
-			goto err;
+			pool->ops->destroy_pool(pool);
 	}
- if (rc && (sdp || dyn_sdp))
pr_err("Secure Data Path service not available\n");
Rather than an error message, we should just use pr_info() here.
-Sumit
- return 0;
-err:
- pool->ops->destroy_pool(pool);
- return rc;
 }

 static int optee_probe(struct platform_device *pdev)
--
2.43.0
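The reworked optee_sdp_pool_init() in the patch above boils down to a selection rule: try the static carveout when the SDP capability is present, fall back to a dynamic CMA pool when that failed and the dynamic capability is present, and register whichever succeeded. A userspace model of just that rule (the `DEMO_ERR_PTR`/`DEMO_IS_ERR` macros reimplement how the kernel encodes small negative errnos in pointers; all `demo_*` names are hypothetical):

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define DEMO_ERR_PTR(err)	((void *)(intptr_t)(err))
#define DEMO_IS_ERR(p)		((uintptr_t)(p) >= (uintptr_t)-4095)

static int demo_static_pool, demo_cma_pool;	/* dummy pool objects */

void *demo_pick_pool(bool have_static, bool have_dynamic)
{
	void *pool = DEMO_ERR_PTR(-EINVAL);

	/* stands in for tee_rstmem_static_pool_alloc() succeeding */
	if (have_static)
		pool = &demo_static_pool;
	/* fallback, as in optee_rstmem_alloc_cma_pool() */
	if (have_dynamic && DEMO_IS_ERR(pool))
		pool = &demo_cma_pool;
	return pool;	/* may still be an error: no heap gets registered */
}
```

Seeding `pool` with an error pointer is what lets both capability checks share one registration path at the end of the function.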
Hi,
On Wed, Mar 5, 2025 at 2:06 PM Jens Wiklander jens.wiklander@linaro.org wrote:
Hi,
This patch set allocates the restricted DMA-bufs from a DMA-heap instantiated from the TEE subsystem.
The TEE subsystem handles the DMA-buf allocations since it is the TEE (OP-TEE, AMD-TEE, TS-TEE, or perhaps a future QTEE) which sets up the restrictions for the memory used for the DMA-bufs.
The DMA-heap uses a restricted memory pool provided by the backend TEE driver, allowing it to choose how to allocate the restricted physical memory.
The allocated DMA-bufs must be imported with a new TEE_IOC_SHM_REGISTER_FD before they can be passed as arguments when requesting services from the secure world.
Three use-cases (Secure Video Playback, Trusted UI, and Secure Video Recording) have been identified so far to serve as examples of what can be expected. The use-cases have predefined DMA-heap names, "restricted,secure-video", "restricted,trusted-ui", and "restricted,secure-video-record". The backend driver registers restricted memory pools for the use-cases it supports.
When preparing a v7 of this patch set, I'll switch to "protected" instead of "restricted" based on Nicolas Dufresne's comment [1], unless someone objects.
[1] https://lore.kernel.org/lkml/32c29526416c07c37819aedabcbf1e562ee98bf2.camel@...
Cheers, Jens
Each use-case has its own restricted memory pool since different use-cases require isolation from different parts of the system. A restricted memory pool can be based on a static carveout instantiated while probing the TEE backend driver, or dynamically allocated from CMA and made restricted as needed by the TEE.
This can be tested on a RockPi 4B+ with the following steps:

repo init -u https://github.com/jenswi-linaro/manifest.git -m rockpi4.xml \
        -b prototype/sdp-v6
repo sync -j8
cd build
make toolchains -j$(nproc)
make all -j$(nproc)
# Copy ../out/rockpi4.img to an SD card and boot the RockPi from that
# Connect a monitor to the RockPi
# login and at the prompt:
gst-launch-1.0 videotestsrc ! \
        aesenc key=1f9423681beb9a79215820f6bda73d0f \
        iv=e9aa8e834d8d70b7e0d254ff670dd718 serialize-iv=true ! \
        aesdec key=1f9423681beb9a79215820f6bda73d0f ! \
        kmssink
The aesdec module has been hacked to use an OP-TEE TA to decrypt the stream into restricted DMA-bufs which are consumed by the kmssink.
The primitive QEMU tests from previous patch set can be tested on RockPi in the same way with: xtest --sdp-basic
The primitive tests are tested on QEMU with the following steps:

repo init -u https://github.com/jenswi-linaro/manifest.git -m qemu_v8.xml \
        -b prototype/sdp-v6
repo sync -j8
cd build
make toolchains -j$(nproc)
make SPMC_AT_EL=1 all -j$(nproc)
make SPMC_AT_EL=1 run-only
# login and at the prompt:
xtest --sdp-basic
The SPMC_AT_EL=1 parameter configures the build with FF-A and an SPMC at S-EL1 inside OP-TEE. The parameter can be changed to SPMC_AT_EL=n to test without FF-A, using the original SMC ABI instead. Please remember to do rm -rf ../trusted-firmware-a/build/qemu for TF-A to be rebuilt properly using the new configuration.
https://optee.readthedocs.io/en/latest/building/prerequisites.html list dependencies needed to build the above.
The tests are pretty basic, mostly checking that a Trusted Application in the secure world can access and manipulate the memory. There are also some negative tests for out of bounds buffers etc.
Thanks, Jens
Changes since V5:
- Removing "tee: add restricted memory allocation" and "tee: add TEE_IOC_RSTMEM_FD_INFO"
- Adding "tee: implement restricted DMA-heap", "tee: new ioctl to register a tee_shm from a dmabuf file descriptor", "tee: add tee_shm_alloc_cma_phys_mem()", "optee: pass parent device to tee_device_alloc()", and "tee: tee_device_alloc(): copy dma_mask from parent device"
- The two TEE driver OPs "rstmem_alloc()" and "rstmem_free()" are replaced with a struct tee_rstmem_pool abstraction.
- Replaced the TEE_IOC_RSTMEM_ALLOC user space API with the DMA-heap API
Changes since V4:
- Adding the patch "tee: add TEE_IOC_RSTMEM_FD_INFO" needed by the GStreamer demo
- Removing the dummy CPU access and mmap functions from the dma_buf_ops
- Fixing a compile error in "optee: FF-A: dynamic restricted memory allocation" reported by kernel test robot lkp@intel.com
Changes since V3:
- Make the use_case and flags field in struct tee_shm u32's instead of u16's
- Add more description for TEE_IOC_RSTMEM_ALLOC in the header file
- Import namespace DMA_BUF in module tee, reported by lkp@intel.com
- Added a note in the commit message for "optee: account for direction while converting parameters" why it's needed
- Factor out dynamic restricted memory allocation from "optee: support restricted memory allocation" into two new commits "optee: FF-A: dynamic restricted memory allocation" and "optee: smc abi: dynamic restricted memory allocation"
- Guard CMA usage with #ifdef CONFIG_CMA, effectively disabling dynamic restricted memory allocate if CMA isn't configured
Changes since the V2 RFC:
- Based on v6.12
- Replaced the flags for SVP and Trusted UID memory with a u32 field with unique id for each use case
- Added dynamic allocation of restricted memory pools
- Added OP-TEE ABI both with and without FF-A for dynamic restricted memory
- Added support for FF-A with FFA_LEND
Changes since the V1 RFC:
- Based on v6.11
- Complete rewrite, replacing the restricted heap with TEE_IOC_RSTMEM_ALLOC
Changes since Olivier's post [2]:
- Based on Yong Wu's post [1] where much of dma-buf handling is done in the generic restricted heap
- Simplifications and cleanup
- New commit message for "dma-buf: heaps: add Linaro restricted dmabuf heap support"
- Replaced the word "secure" with "restricted" where applicable
Etienne Carriere (1):
  tee: new ioctl to register a tee_shm from a dmabuf file descriptor

Jens Wiklander (9):
  tee: tee_device_alloc(): copy dma_mask from parent device
  optee: pass parent device to tee_device_alloc()
  optee: account for direction while converting parameters
  optee: sync secure world ABI headers
  tee: implement restricted DMA-heap
  tee: add tee_shm_alloc_cma_phys_mem()
  optee: support restricted memory allocation
  optee: FF-A: dynamic restricted memory allocation
  optee: smc abi: dynamic restricted memory allocation

 drivers/tee/Makefile              |   1 +
 drivers/tee/optee/Makefile        |   1 +
 drivers/tee/optee/call.c          |  10 +-
 drivers/tee/optee/core.c          |   1 +
 drivers/tee/optee/ffa_abi.c       | 194 +++++++++++-
 drivers/tee/optee/optee_ffa.h     |  27 +-
 drivers/tee/optee/optee_msg.h     |  65 ++++-
 drivers/tee/optee/optee_private.h |  55 +++-
 drivers/tee/optee/optee_smc.h     |  71 ++++-
 drivers/tee/optee/rpc.c           |  31 +-
 drivers/tee/optee/rstmem.c        | 329 +++++++++++++++++++++
 drivers/tee/optee/smc_abi.c       | 190 ++++++++++--
 drivers/tee/tee_core.c            | 147 +++++++---
 drivers/tee/tee_heap.c            | 470 ++++++++++++++++++++++++++++++
 drivers/tee/tee_private.h         |   7 +
 drivers/tee/tee_shm.c             | 199 ++++++++++++-
 include/linux/tee_core.h          |  67 +++++
 include/linux/tee_drv.h           |  10 +
 include/uapi/linux/tee.h          |  29 ++
 19 files changed, 1781 insertions(+), 123 deletions(-)
 create mode 100644 drivers/tee/optee/rstmem.c
 create mode 100644 drivers/tee/tee_heap.c
base-commit: 7eb172143d5508b4da468ed59ee857c6e5e01da6
2.43.0
op-tee@lists.trustedfirmware.org