Hi,
This patch set is based on top of Yong Wu's restricted heap patch set [1]. It's also a continuation of Olivier's "Add dma-buf secure-heap" patch set [2].

The Linaro restricted heap uses genalloc in the kernel to manage the heap carveout. This differs from the Mediatek restricted heap, which relies on the secure world to manage the carveout.

I've tried to address the comments on [2], but [1] introduces changes, so I'm afraid I've had to skip some comments.
This can be tested on QEMU with the following steps:

repo init -u https://github.com/jenswi-linaro/manifest.git -m qemu_v8.xml \
        -b prototype/sdp-v1
repo sync -j8
cd build
make toolchains -j4
make all -j$(nproc)
make run-only
# login and at the prompt:
xtest --sdp-basic
https://optee.readthedocs.io/en/latest/building/prerequisites.html lists the dependencies needed to build the above.
The tests are pretty basic, mostly checking that a Trusted Application in the secure world can access and manipulate the memory.
Cheers,
Jens
[1] https://lore.kernel.org/dri-devel/20240515112308.10171-1-yong.wu@mediatek.co...
[2] https://lore.kernel.org/lkml/20220805135330.970-1-olivier.masse@nxp.com/
Changes since Olivier's post [2]:
* Based on Yong Wu's post [1] where much of the dma-buf handling is done in
  the generic restricted heap
* Simplifications and cleanup
* New commit message for "dma-buf: heaps: add Linaro restricted dmabuf heap
  support"
* Replaced the word "secure" with "restricted" where applicable
Etienne Carriere (1):
  tee: new ioctl to register a tee_shm from a dmabuf file descriptor

Jens Wiklander (2):
  dma-buf: heaps: restricted_heap: add no_map attribute
  dma-buf: heaps: add Linaro restricted dmabuf heap support

Olivier Masse (1):
  dt-bindings: reserved-memory: add linaro,restricted-heap
 .../linaro,restricted-heap.yaml               |  56 ++++++
 drivers/dma-buf/heaps/Kconfig                 |  10 ++
 drivers/dma-buf/heaps/Makefile                |   1 +
 drivers/dma-buf/heaps/restricted_heap.c       |  17 +-
 drivers/dma-buf/heaps/restricted_heap.h       |   2 +
 .../dma-buf/heaps/restricted_heap_linaro.c    | 165 ++++++++++++++++++
 drivers/tee/tee_core.c                        |  38 ++++
 drivers/tee/tee_shm.c                         | 104 ++++++++++-
 include/linux/tee_drv.h                       |  11 ++
 include/uapi/linux/tee.h                      |  29 +++
 10 files changed, 426 insertions(+), 7 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml
 create mode 100644 drivers/dma-buf/heaps/restricted_heap_linaro.c
Add a no_map attribute to struct restricted_heap_attachment and struct restricted_heap to skip the call to dma_map_sgtable() if set. This avoids trying to map a dma-buf that doesn't refer to memory accessible by the kernel.
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 drivers/dma-buf/heaps/restricted_heap.c | 17 +++++++++++++----
 drivers/dma-buf/heaps/restricted_heap.h |  2 ++
 2 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c
index 8bc8a5e3f969..4bf28e3727ca 100644
--- a/drivers/dma-buf/heaps/restricted_heap.c
+++ b/drivers/dma-buf/heaps/restricted_heap.c
@@ -16,6 +16,7 @@
 struct restricted_heap_attachment {
 	struct sg_table *table;
 	struct device *dev;
+	bool no_map;
 };

 static int
@@ -54,6 +55,8 @@ restricted_heap_memory_free(struct restricted_heap *rheap, struct restricted_buf
 static int restricted_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
 {
 	struct restricted_buffer *restricted_buf = dmabuf->priv;
+	struct dma_heap *heap = restricted_buf->heap;
+	struct restricted_heap *rheap = dma_heap_get_drvdata(heap);
 	struct restricted_heap_attachment *a;
 	struct sg_table *table;

@@ -70,6 +73,7 @@ static int restricted_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachm
 	sg_dma_mark_restricted(table->sgl);
 	a->table = table;
 	a->dev = attachment->dev;
+	a->no_map = rheap->no_map;
 	attachment->priv = a;

 	return 0;
@@ -92,9 +96,12 @@ restricted_heap_map_dma_buf(struct dma_buf_attachment *attachment,
 	struct sg_table *table = a->table;
 	int ret;

-	ret = dma_map_sgtable(attachment->dev, table, direction, DMA_ATTR_SKIP_CPU_SYNC);
-	if (ret)
-		return ERR_PTR(ret);
+	if (!a->no_map) {
+		ret = dma_map_sgtable(attachment->dev, table, direction,
+				      DMA_ATTR_SKIP_CPU_SYNC);
+		if (ret)
+			return ERR_PTR(ret);
+	}

 	return table;
 }
@@ -106,7 +113,9 @@ restricted_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, struct sg_t

 	WARN_ON(a->table != table);

-	dma_unmap_sgtable(attachment->dev, table, direction, DMA_ATTR_SKIP_CPU_SYNC);
+	if (!a->no_map)
+		dma_unmap_sgtable(attachment->dev, table, direction,
+				  DMA_ATTR_SKIP_CPU_SYNC);
 }

 static int
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h
index 7dec4b8a471b..94cc0842f70d 100644
--- a/drivers/dma-buf/heaps/restricted_heap.h
+++ b/drivers/dma-buf/heaps/restricted_heap.h
@@ -27,6 +27,8 @@ struct restricted_heap {
 	unsigned long cma_paddr;
 	unsigned long cma_size;

+	bool no_map;
+
 	void *priv_data;
 };
On 30.08.24 09:03, Jens Wiklander wrote:
Add a no_map attribute to struct restricted_heap_attachment and struct restricted_heap to skip the call to dma_map_sgtable() if set. This avoids trying to map a dma-buf that doesn't refer to memory accessible by the kernel.
You seem to have a misunderstanding here: dma_map_sgtable() is called to map a table into the IOMMU, not into any kernel address space.
So please explain why you need that?
Regards, Christian.
On Fri, Aug 30, 2024 at 10:47 AM Christian König <christian.koenig@amd.com> wrote:
On 30.08.24 09:03, Jens Wiklander wrote:
Add a no_map attribute to struct restricted_heap_attachment and struct restricted_heap to skip the call to dma_map_sgtable() if set. This avoids trying to map a dma-buf that doesn't refer to memory accessible by the kernel.
You seem to have a misunderstanding here: dma_map_sgtable() is called to map a table into the IOMMU, not into any kernel address space.
So please explain why you need that?
You're right, I had misunderstood dma_map_sgtable(). There's no need for the no_map attribute, so I'll remove it.
Perhaps you have a suggestion on how to fix a problem when using dma_map_sgtable()?
Without it, I'll have to assign a pointer to teedev->dev.dma_mask when using the restricted heap with the OP-TEE driver or there will be a warning in __dma_map_sg_attrs() ending with a failure when trying to register the dma-buf fd. OP-TEE is probed with a platform device, and taking the dma_mask pointer from that device works. Is that a good approach or is there a better way of assigning dma_mask?
Thanks, Jens
On 05.09.24 08:56, Jens Wiklander wrote:
On Fri, Aug 30, 2024 at 10:47 AM Christian König <christian.koenig@amd.com> wrote:
On 30.08.24 09:03, Jens Wiklander wrote:
Add a no_map attribute to struct restricted_heap_attachment and struct restricted_heap to skip the call to dma_map_sgtable() if set. This avoids trying to map a dma-buf that doesn't refer to memory accessible by the kernel.
You seem to have a misunderstanding here: dma_map_sgtable() is called to map a table into the IOMMU, not into any kernel address space.
So please explain why you need that?
You're right, I had misunderstood dma_map_sgtable(). There's no need for the no_map attribute, so I'll remove it.
Perhaps you have a suggestion on how to fix a problem when using dma_map_sgtable()?
Without it, I'll have to assign a pointer to teedev->dev.dma_mask when using the restricted heap with the OP-TEE driver or there will be a warning in __dma_map_sg_attrs() ending with a failure when trying to register the dma-buf fd. OP-TEE is probed with a platform device, and taking the dma_mask pointer from that device works. Is that a good approach or is there a better way of assigning dma_mask?
Mhm, I don't know the full picture so I have to make some assumptions.
The teedev is just a virtual device which represents the restricted memory access paths of a real device?
If that is true then taking the dma-mask from the real device is most likely the right thing to do.
Regards, Christian.
From: Etienne Carriere <etienne.carriere@linaro.org>
Enable userspace to create a tee_shm object that refers to a dmabuf.
Userspace registers the dmabuf file descriptor in a tee_shm object. The registration is completed with a tee_shm file descriptor returned to userspace.
Userspace is free to close the dmabuf file descriptor now since all the resources are now held via the tee_shm object.
Closing the tee_shm file descriptor will release all resources used by the tee_shm object.
This change only supports dmabuf references that relate to physically contiguous memory buffers.

A new tee_shm flag, TEE_SHM_DMA_BUF, identifies tee_shm objects built from a registered dmabuf.
Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Signed-off-by: Olivier Masse <olivier.masse@nxp.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 drivers/tee/tee_core.c   |  38 ++++++++++++++
 drivers/tee/tee_shm.c    | 104 +++++++++++++++++++++++++++++++++++++--
 include/linux/tee_drv.h  |  11 +++++
 include/uapi/linux/tee.h |  29 +++++++++++
 4 files changed, 179 insertions(+), 3 deletions(-)
diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
index e59c20d74b36..3dfd5428d58c 100644
--- a/drivers/tee/tee_core.c
+++ b/drivers/tee/tee_core.c
@@ -356,6 +356,42 @@ tee_ioctl_shm_register(struct tee_context *ctx,
 	return ret;
 }

+static int tee_ioctl_shm_register_fd(struct tee_context *ctx,
+				     struct tee_ioctl_shm_register_fd_data __user *udata)
+{
+	struct tee_ioctl_shm_register_fd_data data;
+	struct tee_shm *shm;
+	long ret;
+
+	if (copy_from_user(&data, udata, sizeof(data)))
+		return -EFAULT;
+
+	/* Currently no input flags are supported */
+	if (data.flags)
+		return -EINVAL;
+
+	shm = tee_shm_register_fd(ctx, data.fd);
+	if (IS_ERR(shm))
+		return -EINVAL;
+
+	data.id = shm->id;
+	data.flags = shm->flags;
+	data.size = shm->size;
+
+	if (copy_to_user(udata, &data, sizeof(data)))
+		ret = -EFAULT;
+	else
+		ret = tee_shm_get_fd(shm);
+
+	/*
+	 * When user space closes the file descriptor the shared memory
+	 * should be freed or if tee_shm_get_fd() failed then it will
+	 * be freed immediately.
+	 */
+	tee_shm_put(shm);
+	return ret;
+}
+
 static int params_from_user(struct tee_context *ctx, struct tee_param *params,
 			    size_t num_params,
 			    struct tee_ioctl_param __user *uparams)
@@ -830,6 +866,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 		return tee_ioctl_shm_alloc(ctx, uarg);
 	case TEE_IOC_SHM_REGISTER:
 		return tee_ioctl_shm_register(ctx, uarg);
+	case TEE_IOC_SHM_REGISTER_FD:
+		return tee_ioctl_shm_register_fd(ctx, uarg);
 	case TEE_IOC_OPEN_SESSION:
 		return tee_ioctl_open_session(ctx, uarg);
 	case TEE_IOC_INVOKE:
diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
index 731d9028b67f..a1cb3c8b6423 100644
--- a/drivers/tee/tee_shm.c
+++ b/drivers/tee/tee_shm.c
@@ -4,6 +4,7 @@
  */
 #include <linux/anon_inodes.h>
 #include <linux/device.h>
+#include <linux/dma-buf.h>
 #include <linux/idr.h>
 #include <linux/mm.h>
 #include <linux/sched.h>
@@ -14,6 +15,14 @@
 #include <linux/highmem.h>
 #include "tee_private.h"

+/* extra references appended to shm object for registered shared memory */
+struct tee_shm_dmabuf_ref {
+	struct tee_shm shm;
+	struct dma_buf *dmabuf;
+	struct dma_buf_attachment *attach;
+	struct sg_table *sgt;
+};
+
 static void shm_put_kernel_pages(struct page **pages, size_t page_count)
 {
 	size_t n;
@@ -44,7 +53,16 @@ static void release_registered_pages(struct tee_shm *shm)

 static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)
 {
-	if (shm->flags & TEE_SHM_POOL) {
+	if (shm->flags & TEE_SHM_DMA_BUF) {
+		struct tee_shm_dmabuf_ref *ref;
+
+		ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
+		dma_buf_unmap_attachment(ref->attach, ref->sgt,
+					 DMA_BIDIRECTIONAL);
+
+		dma_buf_detach(ref->dmabuf, ref->attach);
+		dma_buf_put(ref->dmabuf);
+	} else if (shm->flags & TEE_SHM_POOL) {
 		teedev->pool->ops->free(teedev->pool, shm);
 	} else if (shm->flags & TEE_SHM_DYNAMIC) {
 		int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
@@ -56,7 +74,8 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)
 		release_registered_pages(shm);
 	}

-	teedev_ctx_put(shm->ctx);
+	if (shm->ctx)
+		teedev_ctx_put(shm->ctx);

 	kfree(shm);

@@ -168,7 +187,7 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size)
  * tee_client_invoke_func(). The memory allocated is later freed with a
  * call to tee_shm_free().
  *
- * @returns a pointer to 'struct tee_shm'
+ * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
  */
 struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
 {
@@ -178,6 +197,85 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
 }
 EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf);

+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd)
+{
+	struct tee_shm_dmabuf_ref *ref;
+	int rc;
+
+	if (!tee_device_get(ctx->teedev))
+		return ERR_PTR(-EINVAL);
+
+	teedev_ctx_get(ctx);
+
+	ref = kzalloc(sizeof(*ref), GFP_KERNEL);
+	if (!ref) {
+		rc = -ENOMEM;
+		goto err_put_tee;
+	}
+
+	refcount_set(&ref->shm.refcount, 1);
+	ref->shm.ctx = ctx;
+	ref->shm.id = -1;
+
+	ref->dmabuf = dma_buf_get(fd);
+	if (IS_ERR(ref->dmabuf)) {
+		rc = PTR_ERR(ref->dmabuf);
+		goto err_put_dmabuf;
+	}
+
+	ref->attach = dma_buf_attach(ref->dmabuf, &ref->shm.ctx->teedev->dev);
+	if (IS_ERR(ref->attach)) {
+		rc = PTR_ERR(ref->attach);
+		goto err_detach;
+	}
+
+	ref->sgt = dma_buf_map_attachment(ref->attach, DMA_BIDIRECTIONAL);
+	if (IS_ERR(ref->sgt)) {
+		rc = PTR_ERR(ref->sgt);
+		goto err_unmap_attachement;
+	}
+
+	if (sg_nents(ref->sgt->sgl) != 1) {
+		rc = PTR_ERR(ref->sgt->sgl);
+		goto err_unmap_attachement;
+	}
+
+	ref->shm.paddr = page_to_phys(sg_page(ref->sgt->sgl));
+	ref->shm.size = ref->sgt->sgl->length;
+	ref->shm.flags = TEE_SHM_DMA_BUF;
+
+	mutex_lock(&ref->shm.ctx->teedev->mutex);
+	ref->shm.id = idr_alloc(&ref->shm.ctx->teedev->idr, &ref->shm,
+				1, 0, GFP_KERNEL);
+	mutex_unlock(&ref->shm.ctx->teedev->mutex);
+	if (ref->shm.id < 0) {
+		rc = ref->shm.id;
+		goto err_idr_remove;
+	}
+
+	return &ref->shm;
+
+err_idr_remove:
+	mutex_lock(&ctx->teedev->mutex);
+	idr_remove(&ctx->teedev->idr, ref->shm.id);
+	mutex_unlock(&ctx->teedev->mutex);
+err_unmap_attachement:
+	dma_buf_unmap_attachment(ref->attach, ref->sgt, DMA_BIDIRECTIONAL);
+err_detach:
+	dma_buf_detach(ref->dmabuf, ref->attach);
+err_put_dmabuf:
+	dma_buf_put(ref->dmabuf);
+	kfree(ref);
+err_put_tee:
+	teedev_ctx_put(ctx);
+	tee_device_put(ctx->teedev);
+
+	return ERR_PTR(rc);
+}
+EXPORT_SYMBOL_GPL(tee_shm_register_fd);
+
 /**
  * tee_shm_alloc_priv_buf() - Allocate shared memory for a privately shared
  *			      kernel buffer
diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h
index 71632e3c5f18..6a1fee689007 100644
--- a/include/linux/tee_drv.h
+++ b/include/linux/tee_drv.h
@@ -25,6 +25,7 @@
 #define TEE_SHM_USER_MAPPED	BIT(1)  /* Memory mapped in user space */
 #define TEE_SHM_POOL		BIT(2)  /* Memory allocated from pool */
 #define TEE_SHM_PRIV		BIT(3)  /* Memory private to TEE driver */
+#define TEE_SHM_DMA_BUF		BIT(4)  /* Memory with dma-buf handle */

 struct device;
 struct tee_device;
@@ -275,6 +276,16 @@ void *tee_get_drvdata(struct tee_device *teedev);
 struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size);
 struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size);

+/**
+ * tee_shm_register_fd() - Register shared memory from file descriptor
+ *
+ * @ctx:	Context that allocates the shared memory
+ * @fd:		Shared memory file descriptor reference
+ *
+ * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
+ */
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd);
+
 struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx,
 					    void *addr, size_t length);

diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h
index 23e57164693c..77bc8ef24d3c 100644
--- a/include/uapi/linux/tee.h
+++ b/include/uapi/linux/tee.h
@@ -117,6 +117,35 @@ struct tee_ioctl_shm_alloc_data {
 #define TEE_IOC_SHM_ALLOC	_IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 1, \
 				      struct tee_ioctl_shm_alloc_data)

+/**
+ * struct tee_ioctl_shm_register_fd_data - Shared memory registering argument
+ * @fd:		[in] File descriptor identifying the shared memory
+ * @size:	[out] Size of shared memory to allocate
+ * @flags:	[in] Flags to/from allocation.
+ * @id:		[out] Identifier of the shared memory
+ *
+ * The flags field should currently be zero as input. Updated by the call
+ * with actual flags as defined by TEE_IOCTL_SHM_* above.
+ * This structure is used as argument for TEE_IOC_SHM_REGISTER_FD below.
+ */
+struct tee_ioctl_shm_register_fd_data {
+	__s64 fd;
+	__u64 size;
+	__u32 flags;
+	__s32 id;
+} __aligned(8);
+
+/**
+ * TEE_IOC_SHM_REGISTER_FD - register a shared memory from a file descriptor
+ *
+ * Returns a file descriptor on success or < 0 on failure
+ *
+ * The returned file descriptor refers to the shared memory object in kernel
+ * land. The shared memory is freed when the descriptor is closed.
+ */
+#define TEE_IOC_SHM_REGISTER_FD	_IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 8, \
+				      struct tee_ioctl_shm_register_fd_data)
+
 /**
  * struct tee_ioctl_buf_data - Variable sized buffer
  * @buf_ptr:	[in] A __user pointer to a buffer
On Fri, Aug 30, 2024 at 12:04 AM Jens Wiklander <jens.wiklander@linaro.org> wrote:
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd)
+{
+	struct tee_shm_dmabuf_ref *ref;
+	int rc;
+
+	if (!tee_device_get(ctx->teedev))
+		return ERR_PTR(-EINVAL);
+
+	teedev_ctx_get(ctx);
+
+	ref = kzalloc(sizeof(*ref), GFP_KERNEL);
+	if (!ref) {
+		rc = -ENOMEM;
+		goto err_put_tee;
+	}
+
+	refcount_set(&ref->shm.refcount, 1);
+	ref->shm.ctx = ctx;
+	ref->shm.id = -1;
+
+	ref->dmabuf = dma_buf_get(fd);
+	if (IS_ERR(ref->dmabuf)) {
+		rc = PTR_ERR(ref->dmabuf);
+		goto err_put_dmabuf;
+	}
Hi,
Most of the gotos in the errors paths from here on look offset by one to me. Attempting to put a dmabuf after failing to get, detaching after failing to attach, unmapping after failing to map, removing an IDR after failing to allocate one.
+	ref->attach = dma_buf_attach(ref->dmabuf, &ref->shm.ctx->teedev->dev);
+	if (IS_ERR(ref->attach)) {
+		rc = PTR_ERR(ref->attach);
+		goto err_detach;
+	}
+
+	ref->sgt = dma_buf_map_attachment(ref->attach, DMA_BIDIRECTIONAL);
+	if (IS_ERR(ref->sgt)) {
+		rc = PTR_ERR(ref->sgt);
+		goto err_unmap_attachement;
+	}
+
+	if (sg_nents(ref->sgt->sgl) != 1) {
+		rc = PTR_ERR(ref->sgt->sgl);
+		goto err_unmap_attachement;
+	}
+
+	ref->shm.paddr = page_to_phys(sg_page(ref->sgt->sgl));
+	ref->shm.size = ref->sgt->sgl->length;
+	ref->shm.flags = TEE_SHM_DMA_BUF;
+
+	mutex_lock(&ref->shm.ctx->teedev->mutex);
+	ref->shm.id = idr_alloc(&ref->shm.ctx->teedev->idr, &ref->shm,
+				1, 0, GFP_KERNEL);
+	mutex_unlock(&ref->shm.ctx->teedev->mutex);
+	if (ref->shm.id < 0) {
+		rc = ref->shm.id;
+		goto err_idr_remove;
+	}
+
+	return &ref->shm;
+
+err_idr_remove:
+	mutex_lock(&ctx->teedev->mutex);
+	idr_remove(&ctx->teedev->idr, ref->shm.id);
+	mutex_unlock(&ctx->teedev->mutex);
+err_unmap_attachement:
+	dma_buf_unmap_attachment(ref->attach, ref->sgt, DMA_BIDIRECTIONAL);
+err_detach:
+	dma_buf_detach(ref->dmabuf, ref->attach);
+err_put_dmabuf:
+	dma_buf_put(ref->dmabuf);
+	kfree(ref);
+err_put_tee:
+	teedev_ctx_put(ctx);
+	tee_device_put(ctx->teedev);
+
+	return ERR_PTR(rc);
+}
+EXPORT_SYMBOL_GPL(tee_shm_register_fd);
/**
- tee_shm_alloc_priv_buf() - Allocate shared memory for a privately shared
kernel buffer
diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h index 71632e3c5f18..6a1fee689007 100644 --- a/include/linux/tee_drv.h +++ b/include/linux/tee_drv.h @@ -25,6 +25,7 @@ #define TEE_SHM_USER_MAPPED BIT(1) /* Memory mapped in user space */ #define TEE_SHM_POOL BIT(2) /* Memory allocated from pool */ #define TEE_SHM_PRIV BIT(3) /* Memory private to TEE driver */ +#define TEE_SHM_DMA_BUF BIT(4) /* Memory with dma-buf handle */
struct device; struct tee_device; @@ -275,6 +276,16 @@ void *tee_get_drvdata(struct tee_device *teedev); struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size);
+/**
- tee_shm_register_fd() - Register shared memory from file descriptor
- @ctx: Context that allocates the shared memory
- @fd: Shared memory file descriptor reference
- @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
- */
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd);
struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx, void *addr, size_t length);
diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h
index 23e57164693c..77bc8ef24d3c 100644
--- a/include/uapi/linux/tee.h
+++ b/include/uapi/linux/tee.h
@@ -117,6 +117,35 @@ struct tee_ioctl_shm_alloc_data {
 #define TEE_IOC_SHM_ALLOC	_IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 1, \
				      struct tee_ioctl_shm_alloc_data)
+/**
+ * struct tee_ioctl_shm_register_fd_data - Shared memory registering argument
+ * @fd:		[in] File descriptor identifying the shared memory
+ * @size:	[out] Size of shared memory to allocate
+ * @flags:	[in] Flags to/from allocation.
+ * @id:		[out] Identifier of the shared memory
+ *
+ * The flags field should currently be zero as input. Updated by the call
+ * with actual flags as defined by TEE_IOCTL_SHM_* above.
+ * This structure is used as argument for TEE_IOC_SHM_REGISTER_FD below.
+ */
+struct tee_ioctl_shm_register_fd_data {
__s64 fd;
__u64 size;
__u32 flags;
__s32 id;
+} __aligned(8);
+/**
+ * TEE_IOC_SHM_REGISTER_FD - register a shared memory from a file descriptor
+ *
+ * Returns a file descriptor on success or < 0 on failure
+ *
+ * The returned file descriptor refers to the shared memory object in kernel
+ * land. The shared memory is freed when the descriptor is closed.
+ */
+#define TEE_IOC_SHM_REGISTER_FD	_IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 8, \
+				       struct tee_ioctl_shm_register_fd_data)
/**
 * struct tee_ioctl_buf_data - Variable sized buffer
 * @buf_ptr:	[in] A __user pointer to a buffer
-- 2.34.1
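As a sanity check on the uapi struct above, a small userspace sketch follows. It mirrors the `tee_ioctl_shm_register_fd_data` layout (in real code, include `<linux/tee.h>` instead of redefining it) and verifies that the `__aligned(8)` struct packs into 24 bytes with no padding holes — the layout the kernel expects across the ioctl boundary:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace mirror of the uapi struct from the patch, for illustration only.
 * Real code should include <linux/tee.h> rather than redefining this. */
struct tee_ioctl_shm_register_fd_data {
	int64_t fd;	/* [in] dmabuf file descriptor */
	uint64_t size;	/* [out] size of the registered memory */
	uint32_t flags;	/* [in/out] zero on input, TEE_IOCTL_SHM_* on output */
	int32_t id;	/* [out] identifier of the shared memory */
} __attribute__((aligned(8)));

/* 8 + 8 + 4 + 4 = 24 bytes, 8-byte aligned, no holes. */
static void check_layout(void)
{
	assert(sizeof(struct tee_ioctl_shm_register_fd_data) == 24);
	assert(offsetof(struct tee_ioctl_shm_register_fd_data, size) == 8);
	assert(offsetof(struct tee_ioctl_shm_register_fd_data, flags) == 16);
	assert(offsetof(struct tee_ioctl_shm_register_fd_data, id) == 20);
}
```

Driving the ioctl would then look roughly like `data.fd = dmabuf_fd; shm_fd = ioctl(tee_fd, TEE_IOC_SHM_REGISTER_FD, &data);` (with `tee_fd` an open TEE device, a hypothetical name here), after which `dmabuf_fd` can be closed since the tee_shm object holds its own reference.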
On Tue, Sep 3, 2024 at 7:50 PM T.J. Mercier tjmercier@google.com wrote:
On Fri, Aug 30, 2024 at 12:04 AM Jens Wiklander jens.wiklander@linaro.org wrote:
From: Etienne Carriere etienne.carriere@linaro.org
Enable userspace to create a tee_shm object that refers to a dmabuf reference.
Userspace registers the dmabuf file descriptor in a tee_shm object. The registration is completed with a tee_shm file descriptor returned to userspace.
Userspace is then free to close the dmabuf file descriptor, since all the resources are now held via the tee_shm object.
Closing the tee_shm file descriptor will release all resources used by the tee_shm object.
This change only supports dmabuf references that relate to physically contiguous memory buffers.
A new tee_shm flag, TEE_SHM_DMA_BUF, identifies tee_shm objects built from a registered dmabuf.
Signed-off-by: Etienne Carriere etienne.carriere@linaro.org Signed-off-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/tee/tee_core.c | 38 ++++++++++++++ drivers/tee/tee_shm.c | 104 +++++++++++++++++++++++++++++++++++++-- include/linux/tee_drv.h | 11 +++++ include/uapi/linux/tee.h | 29 +++++++++++ 4 files changed, 179 insertions(+), 3 deletions(-)
diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
index e59c20d74b36..3dfd5428d58c 100644
--- a/drivers/tee/tee_core.c
+++ b/drivers/tee/tee_core.c
@@ -356,6 +356,42 @@ tee_ioctl_shm_register(struct tee_context *ctx,
	return ret;
}
+static int tee_ioctl_shm_register_fd(struct tee_context *ctx,
struct tee_ioctl_shm_register_fd_data __user *udata)
+{
struct tee_ioctl_shm_register_fd_data data;
struct tee_shm *shm;
long ret;
if (copy_from_user(&data, udata, sizeof(data)))
return -EFAULT;
/* Currently no input flags are supported */
if (data.flags)
return -EINVAL;
shm = tee_shm_register_fd(ctx, data.fd);
if (IS_ERR(shm))
return -EINVAL;
data.id = shm->id;
data.flags = shm->flags;
data.size = shm->size;
if (copy_to_user(udata, &data, sizeof(data)))
ret = -EFAULT;
else
ret = tee_shm_get_fd(shm);
/*
* When user space closes the file descriptor the shared memory
* should be freed or if tee_shm_get_fd() failed then it will
* be freed immediately.
*/
tee_shm_put(shm);
return ret;
+}
static int params_from_user(struct tee_context *ctx, struct tee_param *params,
			    size_t num_params, struct tee_ioctl_param __user *uparams)
@@ -830,6 +866,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
return tee_ioctl_shm_alloc(ctx, uarg);
case TEE_IOC_SHM_REGISTER:
return tee_ioctl_shm_register(ctx, uarg);
case TEE_IOC_SHM_REGISTER_FD:
return tee_ioctl_shm_register_fd(ctx, uarg);
case TEE_IOC_OPEN_SESSION:
return tee_ioctl_open_session(ctx, uarg);
case TEE_IOC_INVOKE:
diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
index 731d9028b67f..a1cb3c8b6423 100644
--- a/drivers/tee/tee_shm.c
+++ b/drivers/tee/tee_shm.c
@@ -4,6 +4,7 @@
 */
 #include <linux/anon_inodes.h>
 #include <linux/device.h>
+#include <linux/dma-buf.h>
 #include <linux/idr.h>
 #include <linux/mm.h>
 #include <linux/sched.h>
@@ -14,6 +15,14 @@
 #include <linux/highmem.h>
 #include "tee_private.h"
+/* extra references appended to shm object for registered shared memory */
+struct tee_shm_dmabuf_ref {
struct tee_shm shm;
struct dma_buf *dmabuf;
struct dma_buf_attachment *attach;
struct sg_table *sgt;
+};
static void shm_put_kernel_pages(struct page **pages, size_t page_count) { size_t n; @@ -44,7 +53,16 @@ static void release_registered_pages(struct tee_shm *shm)
static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) {
if (shm->flags & TEE_SHM_POOL) {
if (shm->flags & TEE_SHM_DMA_BUF) {
struct tee_shm_dmabuf_ref *ref;
ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
dma_buf_unmap_attachment(ref->attach, ref->sgt,
DMA_BIDIRECTIONAL);
dma_buf_detach(ref->dmabuf, ref->attach);
dma_buf_put(ref->dmabuf);
} else if (shm->flags & TEE_SHM_POOL) {
teedev->pool->ops->free(teedev->pool, shm);
} else if (shm->flags & TEE_SHM_DYNAMIC) {
int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
@@ -56,7 +74,8 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) release_registered_pages(shm); }
teedev_ctx_put(shm->ctx);
if (shm->ctx)
teedev_ctx_put(shm->ctx);
kfree(shm);
@@ -168,7 +187,7 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size)
 * tee_client_invoke_func(). The memory allocated is later freed with a
 * call to tee_shm_free().
 *
- * @returns a pointer to 'struct tee_shm'
+ * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
 */
struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
{
@@ -178,6 +197,85 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size)
}
EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf);
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd)
+{
struct tee_shm_dmabuf_ref *ref;
int rc;
if (!tee_device_get(ctx->teedev))
return ERR_PTR(-EINVAL);
teedev_ctx_get(ctx);
ref = kzalloc(sizeof(*ref), GFP_KERNEL);
if (!ref) {
rc = -ENOMEM;
goto err_put_tee;
}
refcount_set(&ref->shm.refcount, 1);
ref->shm.ctx = ctx;
ref->shm.id = -1;
ref->dmabuf = dma_buf_get(fd);
if (IS_ERR(ref->dmabuf)) {
rc = PTR_ERR(ref->dmabuf);
goto err_put_dmabuf;
}
Hi,
Most of the gotos in the errors paths from here on look offset by one to me. Attempting to put a dmabuf after failing to get, detaching after failing to attach, unmapping after failing to map, removing an IDR after failing to allocate one.
You're right, I'll fix them.
Thanks, Jens
ref->attach = dma_buf_attach(ref->dmabuf, &ref->shm.ctx->teedev->dev);
if (IS_ERR(ref->attach)) {
rc = PTR_ERR(ref->attach);
goto err_detach;
}
ref->sgt = dma_buf_map_attachment(ref->attach, DMA_BIDIRECTIONAL);
if (IS_ERR(ref->sgt)) {
rc = PTR_ERR(ref->sgt);
goto err_unmap_attachement;
}
if (sg_nents(ref->sgt->sgl) != 1) {
rc = PTR_ERR(ref->sgt->sgl);
goto err_unmap_attachement;
}
ref->shm.paddr = page_to_phys(sg_page(ref->sgt->sgl));
ref->shm.size = ref->sgt->sgl->length;
ref->shm.flags = TEE_SHM_DMA_BUF;
mutex_lock(&ref->shm.ctx->teedev->mutex);
ref->shm.id = idr_alloc(&ref->shm.ctx->teedev->idr, &ref->shm,
1, 0, GFP_KERNEL);
mutex_unlock(&ref->shm.ctx->teedev->mutex);
if (ref->shm.id < 0) {
rc = ref->shm.id;
goto err_idr_remove;
}
return &ref->shm;
+err_idr_remove:
mutex_lock(&ctx->teedev->mutex);
idr_remove(&ctx->teedev->idr, ref->shm.id);
mutex_unlock(&ctx->teedev->mutex);
+err_unmap_attachement:
dma_buf_unmap_attachment(ref->attach, ref->sgt, DMA_BIDIRECTIONAL);
+err_detach:
dma_buf_detach(ref->dmabuf, ref->attach);
+err_put_dmabuf:
dma_buf_put(ref->dmabuf);
kfree(ref);
+err_put_tee:
teedev_ctx_put(ctx);
tee_device_put(ctx->teedev);
return ERR_PTR(rc);
+}
+EXPORT_SYMBOL_GPL(tee_shm_register_fd);
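As agreed in the review above, each error label should undo only the steps that have already succeeded. A userspace mock of that unwind pattern (stub functions and counters with hypothetical names — this is not the kernel code itself, just the label-ordering idea):

```c
#include <assert.h>

/* Counters track acquire/release pairs for each mock resource. */
static int gets, puts_cnt, attaches, detaches, maps;

static int do_get(int fail)    { if (fail) return -1; gets++;     return 0; }
static int do_attach(int fail) { if (fail) return -1; attaches++; return 0; }
static int do_map(int fail)    { if (fail) return -1; maps++;     return 0; }

/* Returns 0 on success; on failure, releases exactly what was acquired. */
static int register_mock(int fail_at)
{
	if (do_get(fail_at == 0))
		return -1;		/* nothing acquired yet, nothing to undo */
	if (do_attach(fail_at == 1))
		goto err_put;		/* undo the get only */
	if (do_map(fail_at == 2))
		goto err_detach;	/* undo the attach, then the get */
	return 0;			/* success: caller owns the resources */

err_detach:
	detaches++;
err_put:
	puts_cnt++;
	return -1;
}
```

The point is that the `goto` after a failed step jumps past the release of the resource that was never acquired, so every error path leaves acquire/release counts balanced.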
/**
- tee_shm_alloc_priv_buf() - Allocate shared memory for a privately shared
kernel buffer
diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h index 71632e3c5f18..6a1fee689007 100644 --- a/include/linux/tee_drv.h +++ b/include/linux/tee_drv.h @@ -25,6 +25,7 @@ #define TEE_SHM_USER_MAPPED BIT(1) /* Memory mapped in user space */ #define TEE_SHM_POOL BIT(2) /* Memory allocated from pool */ #define TEE_SHM_PRIV BIT(3) /* Memory private to TEE driver */ +#define TEE_SHM_DMA_BUF BIT(4) /* Memory with dma-buf handle */
struct device; struct tee_device; @@ -275,6 +276,16 @@ void *tee_get_drvdata(struct tee_device *teedev); struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size);
+/**
+ * tee_shm_register_fd() - Register shared memory from file descriptor
+ * @ctx: Context that allocates the shared memory
+ * @fd: Shared memory file descriptor reference
+ *
+ * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure
+ */
+struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd);
struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx, void *addr, size_t length);
diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h index 23e57164693c..77bc8ef24d3c 100644 --- a/include/uapi/linux/tee.h +++ b/include/uapi/linux/tee.h @@ -117,6 +117,35 @@ struct tee_ioctl_shm_alloc_data { #define TEE_IOC_SHM_ALLOC _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 1, \ struct tee_ioctl_shm_alloc_data)
+/**
+ * struct tee_ioctl_shm_register_fd_data - Shared memory registering argument
+ * @fd:		[in] File descriptor identifying the shared memory
+ * @size:	[out] Size of shared memory to allocate
+ * @flags:	[in] Flags to/from allocation.
+ * @id:		[out] Identifier of the shared memory
+ *
+ * The flags field should currently be zero as input. Updated by the call
+ * with actual flags as defined by TEE_IOCTL_SHM_* above.
+ * This structure is used as argument for TEE_IOC_SHM_REGISTER_FD below.
+ */
+struct tee_ioctl_shm_register_fd_data {
__s64 fd;
__u64 size;
__u32 flags;
__s32 id;
+} __aligned(8);
+/**
+ * TEE_IOC_SHM_REGISTER_FD - register a shared memory from a file descriptor
+ *
+ * Returns a file descriptor on success or < 0 on failure
+ *
+ * The returned file descriptor refers to the shared memory object in kernel
+ * land. The shared memory is freed when the descriptor is closed.
+ */
+#define TEE_IOC_SHM_REGISTER_FD	_IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 8, \
+				       struct tee_ioctl_shm_register_fd_data)
/**
 * struct tee_ioctl_buf_data - Variable sized buffer
 * @buf_ptr:	[in] A __user pointer to a buffer
-- 2.34.1
From: Olivier Masse olivier.masse@nxp.com
DMABUF reserved memory definition for OP-TEE secure data path feature.
Signed-off-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Jens Wiklander jens.wiklander@linaro.org --- .../linaro,restricted-heap.yaml | 56 +++++++++++++++++++ 1 file changed, 56 insertions(+) create mode 100644 Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml
diff --git a/Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml b/Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml new file mode 100644 index 000000000000..0ab87cf02775 --- /dev/null +++ b/Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml @@ -0,0 +1,56 @@ +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/reserved-memory/linaro,restricted-heap.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: Linaro Secure DMABUF Heap + +maintainers: + - Olivier Masse olivier.masse@nxp.com + +description: + Linaro OP-TEE firmware needs a reserved memory for the + Secure Data Path feature (aka SDP). + The purpose is to provide a restricted memory heap which allow + the normal world OS (REE) to allocate/free restricted buffers. + The TEE is reponsible for protecting the SDP memory buffers. + TEE Trusted Application can access restricted memory references + provided as parameters (DMABUF file descriptor). + +allOf: + - $ref: "reserved-memory.yaml" + +properties: + compatible: + const: linaro,restricted-heap + + reg: + description: + Region of memory reserved for OP-TEE SDP feature + + no-map: + $ref: /schemas/types.yaml#/definitions/flag + description: + Avoid creating a virtual mapping of the region as part of the OS' + standard mapping of system memory. + +unevaluatedProperties: false + +required: + - compatible + - reg + - no-map + +examples: + - | + reserved-memory { + #address-cells = <2>; + #size-cells = <2>; + + sdp@3e800000 { + compatible = "linaro,restricted-heap"; + no-map; + reg = <0 0x3E800000 0 0x00400000>; + }; + };
On Fri, Aug 30, 2024 at 09:03:50AM +0200, Jens Wiklander wrote:
From: Olivier Masse olivier.masse@nxp.com
DMABUF reserved memory definition for OP-TEE secure data path feature.
Signed-off-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
.../linaro,restricted-heap.yaml | 56 +++++++++++++++++++ 1 file changed, 56 insertions(+) create mode 100644 Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml
diff --git a/Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml b/Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml new file mode 100644 index 000000000000..0ab87cf02775 --- /dev/null +++ b/Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml @@ -0,0 +1,56 @@ +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/reserved-memory/linaro,restricted-heap.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml#
+title: Linaro Secure DMABUF Heap
+maintainers:
- Olivier Masse olivier.masse@nxp.com
+description:
+  Linaro OP-TEE firmware needs a reserved memory region for the
+  Secure Data Path feature (aka SDP).
+  The purpose is to provide a restricted memory heap which allows
+  the normal world OS (REE) to allocate/free restricted buffers.
+  The TEE is responsible for protecting the SDP memory buffers.
+  A TEE Trusted Application can access restricted memory references
+  provided as parameters (DMABUF file descriptors).
And what is the difference from regular reserved memory? Why it cannot be used?
+allOf:
- $ref: "reserved-memory.yaml"
It does not look like you tested the bindings, at least after quick look. Please run (see Documentation/devicetree/bindings/writing-schema.rst for instructions). Maybe you need to update your dtschema and yamllint.
+properties:
+  compatible:
+    const: linaro,restricted-heap
+
+  reg:
+    description:
+      Region of memory reserved for OP-TEE SDP feature
+
+  no-map:
+    $ref: /schemas/types.yaml#/definitions/flag
+    description:
+      Avoid creating a virtual mapping of the region as part of the OS'
+      standard mapping of system memory.
+unevaluatedProperties: false
This goes after "required:" block.
+required:
- compatible
- reg
- no-map
+examples:
+  - |
+    reserved-memory {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        sdp@3e800000 {
+            compatible = "linaro,restricted-heap";
+            no-map;
+            reg = <0 0x3E800000 0 0x00400000>;
lowercase hex
Best regards, Krzysztof
On Fri, Aug 30, 2024 at 10:20 AM Krzysztof Kozlowski krzk@kernel.org wrote:
On Fri, Aug 30, 2024 at 09:03:50AM +0200, Jens Wiklander wrote:
From: Olivier Masse olivier.masse@nxp.com
DMABUF reserved memory definition for OP-TEE secure data path feature.
Signed-off-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
.../linaro,restricted-heap.yaml | 56 +++++++++++++++++++ 1 file changed, 56 insertions(+) create mode 100644 Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml
diff --git a/Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml b/Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml new file mode 100644 index 000000000000..0ab87cf02775 --- /dev/null +++ b/Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml @@ -0,0 +1,56 @@ +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/reserved-memory/linaro,restricted-heap.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml#
+title: Linaro Secure DMABUF Heap
+maintainers:
- Olivier Masse olivier.masse@nxp.com
+description:
+  Linaro OP-TEE firmware needs a reserved memory region for the
+  Secure Data Path feature (aka SDP).
+  The purpose is to provide a restricted memory heap which allows
+  the normal world OS (REE) to allocate/free restricted buffers.
+  The TEE is responsible for protecting the SDP memory buffers.
+  A TEE Trusted Application can access restricted memory references
+  provided as parameters (DMABUF file descriptors).
And what is the difference from regular reserved memory? Why it cannot be used?
Good question. I need a compatible = "linaro,restricted-heap" to find it, but it appears that's permitted with regular reserved memory. Let's drop this patch. Thanks for pointing me in the right direction.
+allOf:
- $ref: "reserved-memory.yaml"
It does not look like you tested the bindings, at least after quick look. Please run (see Documentation/devicetree/bindings/writing-schema.rst for instructions). Maybe you need to update your dtschema and yamllint.
You're right, sorry.
+properties:
- compatible:
- const: linaro,restricted-heap
- reg:
- description:
Region of memory reserved for OP-TEE SDP feature
- no-map:
- $ref: /schemas/types.yaml#/definitions/flag
- description:
Avoid creating a virtual mapping of the region as part of the OS'
standard mapping of system memory.
+unevaluatedProperties: false
This goes after "required:" block.
OK
+required:
- compatible
- reg
- no-map
+examples:
- |
- reserved-memory {
- #address-cells = <2>;
- #size-cells = <2>;
- sdp@3e800000 {
compatible = "linaro,restricted-heap";
no-map;
reg = <0 0x3E800000 0 0x00400000>;
lowercase hex
OK
Thanks, Jens
Best regards, Krzysztof
Add a Linaro restricted heap using the linaro,restricted-heap bindings, implemented on top of the generic restricted heap.
The bindings define a range of restricted physical memory. The heap manages this address range using genalloc. The allocated dma-buf file descriptor can later be registered with the TEE subsystem for use via Trusted Applications in the secure world.
Co-developed-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Jens Wiklander jens.wiklander@linaro.org --- drivers/dma-buf/heaps/Kconfig | 10 ++ drivers/dma-buf/heaps/Makefile | 1 + .../dma-buf/heaps/restricted_heap_linaro.c | 165 ++++++++++++++++++ 3 files changed, 176 insertions(+) create mode 100644 drivers/dma-buf/heaps/restricted_heap_linaro.c
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig index 58903bc62ac8..82e2c5d09242 100644 --- a/drivers/dma-buf/heaps/Kconfig +++ b/drivers/dma-buf/heaps/Kconfig @@ -28,3 +28,13 @@ config DMABUF_HEAPS_RESTRICTED_MTK help Enable restricted dma-buf heaps for MediaTek platform. This heap is backed by TEE client interfaces. If in doubt, say N. + +config DMABUF_HEAPS_RESTRICTED_LINARO + bool "Linaro DMA-BUF Restricted Heap" + depends on DMABUF_HEAPS_RESTRICTED + help + Choose this option to enable the Linaro restricted dma-buf heap. + The restricted heap pools are defined according to the DT. Heaps + are allocated in the pools using gen allocater. + If in doubt, say N. + diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index 0028aa9d875f..66b2f67c47b5 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -2,4 +2,5 @@ obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED) += restricted_heap.o obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED_MTK) += restricted_heap_mtk.o +obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED_LINARO) += restricted_heap_linaro.o obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o diff --git a/drivers/dma-buf/heaps/restricted_heap_linaro.c b/drivers/dma-buf/heaps/restricted_heap_linaro.c new file mode 100644 index 000000000000..4b08ed514023 --- /dev/null +++ b/drivers/dma-buf/heaps/restricted_heap_linaro.c @@ -0,0 +1,165 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DMABUF secure heap exporter + * + * Copyright 2021 NXP. + * Copyright 2024 Linaro Limited. 
+ */ + +#define pr_fmt(fmt) "rheap_linaro: " fmt + +#include <linux/dma-buf.h> +#include <linux/err.h> +#include <linux/genalloc.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/of_fdt.h> +#include <linux/of_reserved_mem.h> +#include <linux/scatterlist.h> +#include <linux/slab.h> + +#include "restricted_heap.h" + +#define MAX_HEAP_COUNT 2 +#define HEAP_NAME_LEN 32 + +struct resmem_restricted { + phys_addr_t base; + phys_addr_t size; + + char name[HEAP_NAME_LEN]; + + bool no_map; +}; + +static struct resmem_restricted restricted_data[MAX_HEAP_COUNT] = {0}; +static unsigned int restricted_data_count; + +static int linaro_restricted_memory_allocate(struct restricted_heap *heap, + struct restricted_buffer *buf) +{ + struct gen_pool *pool = heap->priv_data; + unsigned long pa; + int ret; + + buf->size = ALIGN(buf->size, PAGE_SIZE); + pa = gen_pool_alloc(pool, buf->size); + if (!pa) + return -ENOMEM; + + ret = sg_alloc_table(&buf->sg_table, 1, GFP_KERNEL); + if (ret) { + gen_pool_free(pool, pa, buf->size); + return ret; + } + + sg_set_page(buf->sg_table.sgl, phys_to_page(pa), buf->size, 0); + + return 0; +} + +static void linaro_restricted_memory_free(struct restricted_heap *heap, + struct restricted_buffer *buf) +{ + struct gen_pool *pool = heap->priv_data; + struct scatterlist *sg; + unsigned int i; + + for_each_sg(buf->sg_table.sgl, sg, buf->sg_table.nents, i) + gen_pool_free(pool, page_to_phys(sg_page(sg)), sg->length); + sg_free_table(&buf->sg_table); +} + +static const struct restricted_heap_ops linaro_restricted_heap_ops = { + .alloc = linaro_restricted_memory_allocate, + .free = linaro_restricted_memory_free, +}; + +static int add_heap(struct resmem_restricted *mem) +{ + struct restricted_heap *heap; + struct gen_pool *pool; + int ret; + + if (mem->base == 0 || mem->size == 0) { + pr_err("restricted_data base or size is not correct\n"); + return -EINVAL; + } + + heap = kzalloc(sizeof(*heap), GFP_KERNEL); + if (!heap) + return -ENOMEM; + + 
pool = gen_pool_create(PAGE_SHIFT, -1); + if (!pool) { + ret = -ENOMEM; + goto err_free_heap; + } + + ret = gen_pool_add(pool, mem->base, mem->size, -1); + if (ret) + goto err_free_pool; + + heap->no_map = mem->no_map; + heap->priv_data = pool; + heap->name = mem->name; + heap->ops = &linaro_restricted_heap_ops; + + ret = restricted_heap_add(heap); + if (ret) + goto err_free_pool; + + return 0; + +err_free_pool: + gen_pool_destroy(pool); +err_free_heap: + kfree(heap); + + return ret; +} + +static int __init rmem_restricted_heap_setup(struct reserved_mem *rmem) +{ + size_t len = HEAP_NAME_LEN; + const char *s; + bool no_map; + + if (WARN_ONCE(restricted_data_count >= MAX_HEAP_COUNT, + "Cannot handle more than %u restricted heaps\n", + MAX_HEAP_COUNT)) + return -EINVAL; + + no_map = of_get_flat_dt_prop(rmem->fdt_node, "no-map", NULL); + s = strchr(rmem->name, '@'); + if (s) + len = umin(s - rmem->name + 1, len); + + restricted_data[restricted_data_count].base = rmem->base; + restricted_data[restricted_data_count].size = rmem->size; + restricted_data[restricted_data_count].no_map = no_map; + strscpy(restricted_data[restricted_data_count].name, rmem->name, len); + + restricted_data_count++; + return 0; +} + +RESERVEDMEM_OF_DECLARE(linaro_restricted_heap, "linaro,restricted-heap", + rmem_restricted_heap_setup); + +static int linaro_restricted_heap_init(void) +{ + unsigned int i; + int ret; + + for (i = 0; i < restricted_data_count; i++) { + ret = add_heap(&restricted_data[i]); + if (ret) + return ret; + } + return 0; +} + +module_init(linaro_restricted_heap_init); +MODULE_DESCRIPTION("Linaro Restricted Heap Driver"); +MODULE_LICENSE("GPL");
On Fri, Aug 30, 2024 at 12:04 AM Jens Wiklander jens.wiklander@linaro.org wrote:
Add a Linaro restricted heap using the linaro,restricted-heap bindings, implemented on top of the generic restricted heap.
The bindings define a range of restricted physical memory. The heap manages this address range using genalloc. The allocated dma-buf file descriptor can later be registered with the TEE subsystem for use via Trusted Applications in the secure world.
Co-developed-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/dma-buf/heaps/Kconfig | 10 ++ drivers/dma-buf/heaps/Makefile | 1 + .../dma-buf/heaps/restricted_heap_linaro.c | 165 ++++++++++++++++++ 3 files changed, 176 insertions(+) create mode 100644 drivers/dma-buf/heaps/restricted_heap_linaro.c
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig index 58903bc62ac8..82e2c5d09242 100644 --- a/drivers/dma-buf/heaps/Kconfig +++ b/drivers/dma-buf/heaps/Kconfig @@ -28,3 +28,13 @@ config DMABUF_HEAPS_RESTRICTED_MTK help Enable restricted dma-buf heaps for MediaTek platform. This heap is backed by TEE client interfaces. If in doubt, say N.
+config DMABUF_HEAPS_RESTRICTED_LINARO
bool "Linaro DMA-BUF Restricted Heap"
depends on DMABUF_HEAPS_RESTRICTED
help
Choose this option to enable the Linaro restricted dma-buf heap.
The restricted heap pools are defined according to the DT. Heaps
are allocated in the pools using the genalloc allocator.
If in doubt, say N.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index 0028aa9d875f..66b2f67c47b5 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -2,4 +2,5 @@ obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED) += restricted_heap.o obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED_MTK) += restricted_heap_mtk.o +obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED_LINARO) += restricted_heap_linaro.o obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o diff --git a/drivers/dma-buf/heaps/restricted_heap_linaro.c b/drivers/dma-buf/heaps/restricted_heap_linaro.c new file mode 100644 index 000000000000..4b08ed514023 --- /dev/null +++ b/drivers/dma-buf/heaps/restricted_heap_linaro.c @@ -0,0 +1,165 @@ +// SPDX-License-Identifier: GPL-2.0 +/*
+ * DMABUF secure heap exporter
+ *
+ * Copyright 2021 NXP.
+ * Copyright 2024 Linaro Limited.
+ */
+#define pr_fmt(fmt) "rheap_linaro: " fmt
+#include <linux/dma-buf.h> +#include <linux/err.h> +#include <linux/genalloc.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/of_fdt.h> +#include <linux/of_reserved_mem.h> +#include <linux/scatterlist.h> +#include <linux/slab.h>
+#include "restricted_heap.h"
+#define MAX_HEAP_COUNT 2
Are multiple supported because of what Cyrille mentioned here about permissions? https://lore.kernel.org/lkml/DBBPR04MB7514E006455AEA407041E4F788709@DBBPR04M...
So this is just some arbitrary limit? I'd prefer to have some sort of documentation about this.
+#define HEAP_NAME_LEN 32
+struct resmem_restricted {
phys_addr_t base;
phys_addr_t size;
char name[HEAP_NAME_LEN];
bool no_map;
+};
+static struct resmem_restricted restricted_data[MAX_HEAP_COUNT] = {0};
+static unsigned int restricted_data_count;
+static int linaro_restricted_memory_allocate(struct restricted_heap *heap,
struct restricted_buffer *buf)
+{
struct gen_pool *pool = heap->priv_data;
unsigned long pa;
int ret;
buf->size = ALIGN(buf->size, PAGE_SIZE);
pa = gen_pool_alloc(pool, buf->size);
if (!pa)
return -ENOMEM;
ret = sg_alloc_table(&buf->sg_table, 1, GFP_KERNEL);
if (ret) {
gen_pool_free(pool, pa, buf->size);
return ret;
}
sg_set_page(buf->sg_table.sgl, phys_to_page(pa), buf->size, 0);
return 0;
+}
+static void linaro_restricted_memory_free(struct restricted_heap *heap,
struct restricted_buffer *buf)
+{
struct gen_pool *pool = heap->priv_data;
struct scatterlist *sg;
unsigned int i;
for_each_sg(buf->sg_table.sgl, sg, buf->sg_table.nents, i)
gen_pool_free(pool, page_to_phys(sg_page(sg)), sg->length);
sg_free_table(&buf->sg_table);
+}
+static const struct restricted_heap_ops linaro_restricted_heap_ops = {
.alloc = linaro_restricted_memory_allocate,
.free = linaro_restricted_memory_free,
+};
+static int add_heap(struct resmem_restricted *mem)
+{
struct restricted_heap *heap;
struct gen_pool *pool;
int ret;
if (mem->base == 0 || mem->size == 0) {
pr_err("restricted_data base or size is not correct\n");
return -EINVAL;
}
heap = kzalloc(sizeof(*heap), GFP_KERNEL);
if (!heap)
return -ENOMEM;
pool = gen_pool_create(PAGE_SHIFT, -1);
if (!pool) {
ret = -ENOMEM;
goto err_free_heap;
}
ret = gen_pool_add(pool, mem->base, mem->size, -1);
if (ret)
goto err_free_pool;
heap->no_map = mem->no_map;
heap->priv_data = pool;
heap->name = mem->name;
heap->ops = &linaro_restricted_heap_ops;
ret = restricted_heap_add(heap);
if (ret)
goto err_free_pool;
return 0;
+err_free_pool:
gen_pool_destroy(pool);
+err_free_heap:
kfree(heap);
return ret;
+}
+static int __init rmem_restricted_heap_setup(struct reserved_mem *rmem)
+{
size_t len = HEAP_NAME_LEN;
const char *s;
bool no_map;
if (WARN_ONCE(restricted_data_count >= MAX_HEAP_COUNT,
"Cannot handle more than %u restricted heaps\n",
MAX_HEAP_COUNT))
return -EINVAL;
no_map = of_get_flat_dt_prop(rmem->fdt_node, "no-map", NULL);
s = strchr(rmem->name, '@');
if (s)
len = umin(s - rmem->name + 1, len);
restricted_data[restricted_data_count].base = rmem->base;
restricted_data[restricted_data_count].size = rmem->size;
restricted_data[restricted_data_count].no_map = no_map;
strscpy(restricted_data[restricted_data_count].name, rmem->name, len);
restricted_data_count++;
return 0;
+}
+RESERVEDMEM_OF_DECLARE(linaro_restricted_heap, "linaro,restricted-heap",
rmem_restricted_heap_setup);
+static int linaro_restricted_heap_init(void)
+{
unsigned int i;
int ret;
for (i = 0; i < restricted_data_count; i++) {
ret = add_heap(&restricted_data[i]);
if (ret)
return ret;
}
return 0;
+}
+module_init(linaro_restricted_heap_init);
+MODULE_DESCRIPTION("Linaro Restricted Heap Driver");
+MODULE_LICENSE("GPL");
2.34.1
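The name handling in rmem_restricted_heap_setup() quoted above trims the unit address from the reserved-memory node name before storing it, so `sdp@3e800000` becomes the heap name `sdp`. A quick userspace check of that trimming logic (plain C stand-ins for the kernel helpers; `copy_trimmed` is a hypothetical name playing the role of the `strchr`/`umin`/`strscpy` sequence):

```c
#include <assert.h>
#include <string.h>

/* Copy src into dst, keeping at most len-1 characters and always
 * NUL-terminating; if src contains '@', keep only the part before it,
 * mirroring the len = umin(s - rmem->name + 1, len) computation. */
static void copy_trimmed(char *dst, const char *src, size_t len)
{
	const char *at = strchr(src, '@');

	if (at) {
		/* characters before '@', plus one byte for the NUL */
		size_t keep = (size_t)(at - src) + 1;

		if (keep < len)
			len = keep;
	}
	strncpy(dst, src, len - 1);
	dst[len - 1] = '\0';
}
```

With a 32-byte buffer, `"sdp@3e800000"` trims to `"sdp"` while a name without a unit address is copied unchanged.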
On Tue, Sep 3, 2024 at 7:50 PM T.J. Mercier tjmercier@google.com wrote:
On Fri, Aug 30, 2024 at 12:04 AM Jens Wiklander jens.wiklander@linaro.org wrote:
Add a Linaro restricted heap using the linaro,restricted-heap bindings, implemented on top of the generic restricted heap.
The bindings define a range of restricted physical memory. The heap manages this address range using genalloc. The allocated dma-buf file descriptor can later be registered with the TEE subsystem for use via Trusted Applications in the secure world.
Co-developed-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Olivier Masse olivier.masse@nxp.com Signed-off-by: Jens Wiklander jens.wiklander@linaro.org
drivers/dma-buf/heaps/Kconfig | 10 ++ drivers/dma-buf/heaps/Makefile | 1 + .../dma-buf/heaps/restricted_heap_linaro.c | 165 ++++++++++++++++++ 3 files changed, 176 insertions(+) create mode 100644 drivers/dma-buf/heaps/restricted_heap_linaro.c
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig index 58903bc62ac8..82e2c5d09242 100644 --- a/drivers/dma-buf/heaps/Kconfig +++ b/drivers/dma-buf/heaps/Kconfig @@ -28,3 +28,13 @@ config DMABUF_HEAPS_RESTRICTED_MTK help Enable restricted dma-buf heaps for MediaTek platform. This heap is backed by TEE client interfaces. If in doubt, say N.
+config DMABUF_HEAPS_RESTRICTED_LINARO
bool "Linaro DMA-BUF Restricted Heap"
depends on DMABUF_HEAPS_RESTRICTED
help
Choose this option to enable the Linaro restricted dma-buf heap.
The restricted heap pools are defined according to the DT. Heaps
are allocated in the pools using the genalloc allocator.
If in doubt, say N.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 0028aa9d875f..66b2f67c47b5 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -2,4 +2,5 @@
 obj-$(CONFIG_DMABUF_HEAPS_CMA)			+= cma_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED)		+= restricted_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED_MTK)	+= restricted_heap_mtk.o
+obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED_LINARO)	+= restricted_heap_linaro.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)		+= system_heap.o
diff --git a/drivers/dma-buf/heaps/restricted_heap_linaro.c b/drivers/dma-buf/heaps/restricted_heap_linaro.c
new file mode 100644
index 000000000000..4b08ed514023
--- /dev/null
+++ b/drivers/dma-buf/heaps/restricted_heap_linaro.c
@@ -0,0 +1,165 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * DMABUF secure heap exporter
 *
 * Copyright 2021 NXP.
 * Copyright 2024 Linaro Limited.
 */
+#define pr_fmt(fmt) "rheap_linaro: " fmt
#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/genalloc.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_fdt.h>
#include <linux/of_reserved_mem.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
+#include "restricted_heap.h"
+#define MAX_HEAP_COUNT 2
Are multiple supported because of what Cyrille mentioned here about permissions? https://lore.kernel.org/lkml/DBBPR04MB7514E006455AEA407041E4F788709@DBBPR04M...
Yes, I kept that as is.
So this is just some arbitrary limit? I'd prefer to have some sort of documentation about this.
How about removing the limit and using dynamic allocation instead?
Thanks, Jens
+#define HEAP_NAME_LEN 32
+struct resmem_restricted {
phys_addr_t base;
phys_addr_t size;
char name[HEAP_NAME_LEN];
bool no_map;
+};
static struct resmem_restricted restricted_data[MAX_HEAP_COUNT] = {0};
static unsigned int restricted_data_count;
+static int linaro_restricted_memory_allocate(struct restricted_heap *heap,
struct restricted_buffer *buf)
+{
struct gen_pool *pool = heap->priv_data;
unsigned long pa;
int ret;
buf->size = ALIGN(buf->size, PAGE_SIZE);
pa = gen_pool_alloc(pool, buf->size);
if (!pa)
return -ENOMEM;
ret = sg_alloc_table(&buf->sg_table, 1, GFP_KERNEL);
if (ret) {
gen_pool_free(pool, pa, buf->size);
return ret;
}
sg_set_page(buf->sg_table.sgl, phys_to_page(pa), buf->size, 0);
return 0;
+}
+static void linaro_restricted_memory_free(struct restricted_heap *heap,
struct restricted_buffer *buf)
+{
struct gen_pool *pool = heap->priv_data;
struct scatterlist *sg;
unsigned int i;
for_each_sg(buf->sg_table.sgl, sg, buf->sg_table.nents, i)
gen_pool_free(pool, page_to_phys(sg_page(sg)), sg->length);
sg_free_table(&buf->sg_table);
+}
+static const struct restricted_heap_ops linaro_restricted_heap_ops = {
.alloc = linaro_restricted_memory_allocate,
.free = linaro_restricted_memory_free,
+};
static int add_heap(struct resmem_restricted *mem)
{
struct restricted_heap *heap;
struct gen_pool *pool;
int ret;
if (mem->base == 0 || mem->size == 0) {
pr_err("restricted_data base or size is not correct\n");
return -EINVAL;
}
heap = kzalloc(sizeof(*heap), GFP_KERNEL);
if (!heap)
return -ENOMEM;
pool = gen_pool_create(PAGE_SHIFT, -1);
if (!pool) {
ret = -ENOMEM;
goto err_free_heap;
}
ret = gen_pool_add(pool, mem->base, mem->size, -1);
if (ret)
goto err_free_pool;
heap->no_map = mem->no_map;
heap->priv_data = pool;
heap->name = mem->name;
heap->ops = &linaro_restricted_heap_ops;
ret = restricted_heap_add(heap);
if (ret)
goto err_free_pool;
return 0;
+err_free_pool:
gen_pool_destroy(pool);
+err_free_heap:
kfree(heap);
return ret;
+}
static int __init rmem_restricted_heap_setup(struct reserved_mem *rmem)
{
size_t len = HEAP_NAME_LEN;
const char *s;
bool no_map;
if (WARN_ONCE(restricted_data_count >= MAX_HEAP_COUNT,
"Cannot handle more than %u restricted heaps\n",
MAX_HEAP_COUNT))
return -EINVAL;
no_map = of_get_flat_dt_prop(rmem->fdt_node, "no-map", NULL);
s = strchr(rmem->name, '@');
if (s)
len = umin(s - rmem->name + 1, len);
restricted_data[restricted_data_count].base = rmem->base;
restricted_data[restricted_data_count].size = rmem->size;
restricted_data[restricted_data_count].no_map = no_map;
strscpy(restricted_data[restricted_data_count].name, rmem->name, len);
restricted_data_count++;
return 0;
+}
+RESERVEDMEM_OF_DECLARE(linaro_restricted_heap, "linaro,restricted-heap",
rmem_restricted_heap_setup);
static int linaro_restricted_heap_init(void)
{
unsigned int i;
int ret;
for (i = 0; i < restricted_data_count; i++) {
ret = add_heap(&restricted_data[i]);
if (ret)
return ret;
}
return 0;
+}
module_init(linaro_restricted_heap_init);
MODULE_DESCRIPTION("Linaro Restricted Heap Driver");
+MODULE_LICENSE("GPL");
2.34.1
On Wed, Sep 4, 2024 at 2:44 AM Jens Wiklander jens.wiklander@linaro.org wrote:
On Tue, Sep 3, 2024 at 7:50 PM T.J. Mercier tjmercier@google.com wrote:
On Fri, Aug 30, 2024 at 12:04 AM Jens Wiklander jens.wiklander@linaro.org wrote:
+#define MAX_HEAP_COUNT 2
Are multiple supported because of what Cyrille mentioned here about permissions? https://lore.kernel.org/lkml/DBBPR04MB7514E006455AEA407041E4F788709@DBBPR04M...
Yes, I kept that as is.
Ok thanks.
So this is just some arbitrary limit? I'd prefer to have some sort of documentation about this.
How about removing the limit and using dynamic allocation instead?
That works too!
Thanks, Jens
On Wed, Sep 4, 2024 at 11:42 PM T.J. Mercier tjmercier@google.com wrote:
On Wed, Sep 4, 2024 at 2:44 AM Jens Wiklander jens.wiklander@linaro.org wrote:
On Tue, Sep 3, 2024 at 7:50 PM T.J. Mercier tjmercier@google.com wrote:
On Fri, Aug 30, 2024 at 12:04 AM Jens Wiklander jens.wiklander@linaro.org wrote:
+#define MAX_HEAP_COUNT 2
Are multiple supported because of what Cyrille mentioned here about permissions? https://lore.kernel.org/lkml/DBBPR04MB7514E006455AEA407041E4F788709@DBBPR04M...
Yes, I kept that as is.
Ok thanks.
So this is just some arbitrary limit? I'd prefer to have some sort of documentation about this.
How about removing the limit and using dynamic allocation instead?
That works too!
It turns out that was easier said than done. The limit is hardcoded because dynamic memory allocation isn't available at that stage during boot. We have a short description of this heap in Kconfig. I'll add something about the limit there if that makes sense.
Thanks, Jens
On Mon, Sep 9, 2024 at 11:06 PM Jens Wiklander jens.wiklander@linaro.org wrote:
On Wed, Sep 4, 2024 at 11:42 PM T.J. Mercier tjmercier@google.com wrote:
On Wed, Sep 4, 2024 at 2:44 AM Jens Wiklander jens.wiklander@linaro.org wrote:
On Tue, Sep 3, 2024 at 7:50 PM T.J. Mercier tjmercier@google.com wrote:
On Fri, Aug 30, 2024 at 12:04 AM Jens Wiklander jens.wiklander@linaro.org wrote:
+#define MAX_HEAP_COUNT 2
Are multiple supported because of what Cyrille mentioned here about permissions? https://lore.kernel.org/lkml/DBBPR04MB7514E006455AEA407041E4F788709@DBBPR04M...
Yes, I kept that as is.
Ok thanks.
So this is just some arbitrary limit? I'd prefer to have some sort of documentation about this.
How about removing the limit and using dynamic allocation instead?
That works too!
It turns out that was easier said than done. The limit is hardcoded because dynamic memory allocation isn't available at that stage during boot. We have a short description of this heap in Kconfig. I'll add something about the limit there if that makes sense.
Thanks, Jens
Ah ok sounds good.
I noticed one other thing, linaro_restricted_heap_init and add_heap should probably have __init. Last week I sent a patch to add that for the CMA and system heaps.
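For reference, the suggested annotation is a one-word change per function; a sketch of what it might look like against the patch above (context elided):

```diff
-static int add_heap(struct resmem_restricted *mem)
+static int __init add_heap(struct resmem_restricted *mem)

-static int linaro_restricted_heap_init(void)
+static int __init linaro_restricted_heap_init(void)
```

Since linaro_restricted_heap_init() is only referenced via module_init() and add_heap() only from the init path, both can be discarded after boot when the heap is built in.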
On Tue, Sep 10, 2024 at 5:08 PM T.J. Mercier tjmercier@google.com wrote:
On Mon, Sep 9, 2024 at 11:06 PM Jens Wiklander jens.wiklander@linaro.org wrote:
On Wed, Sep 4, 2024 at 11:42 PM T.J. Mercier tjmercier@google.com wrote:
On Wed, Sep 4, 2024 at 2:44 AM Jens Wiklander jens.wiklander@linaro.org wrote:
On Tue, Sep 3, 2024 at 7:50 PM T.J. Mercier tjmercier@google.com wrote:
On Fri, Aug 30, 2024 at 12:04 AM Jens Wiklander jens.wiklander@linaro.org wrote:
+#define MAX_HEAP_COUNT 2
Are multiple supported because of what Cyrille mentioned here about permissions? https://lore.kernel.org/lkml/DBBPR04MB7514E006455AEA407041E4F788709@DBBPR04M...
Yes, I kept that as is.
Ok thanks.
So this is just some arbitrary limit? I'd prefer to have some sort of documentation about this.
How about removing the limit and using dynamic allocation instead?
That works too!
It turns out that was easier said than done. The limit is hardcoded because dynamic memory allocation isn't available at that stage during boot. We have a short description of this heap in Kconfig. I'll add something about the limit there if that makes sense.
Thanks, Jens
Ah ok sounds good.
I noticed one other thing, linaro_restricted_heap_init and add_heap should probably have __init. Last week I sent a patch to add that for the CMA and system heaps.
Thanks, I'll add it.
Cheers, Jens
op-tee@lists.trustedfirmware.org