Hi Danny,
Could you check the inline comments below?
On 3/31/2019 4:33 PM, Danny Shavit wrote: Hi David,
4.1. Regarding your proposal of a queue holding NSPE clients' requests: This was the original proposal we presented at the workshop. Eventually we decided to go with a different solution that Ashu proposed. I have developed this solution and it is the one described in the mailbox design document. Another option, instead of a queue, is to expand the proposal in the mailbox design document to contain more memory for (let's say) 4 concurrent requests. Anyway, I prefer the fixed shared memory structure design over the queue, as it is simpler in terms of maintenance and secure/non-secure synchronization. Ashu also noted it is simpler in terms of memory checks and validations.
* Regarding the "step by step" approach with optimizations coming towards the end - I totally agree.
We can discuss it further in the mailbox design document review.
5. I still don’t understand this. If the dispatcher function is the TF-M function that the mailbox should call to pass the client message, then the type parameter is not enough. It should have a mailbox struct parameter with the message parameters, including internal mailbox information.
[David] Yes and no. There might be a mailbox message structure defined by the mailbox document, or a structure defined in TF-M. The structure defined in TF-M can collect the necessary parameters according to the call type.
Actually, it might also depend on the mailbox implementation. If the mailbox implementation prefers to parse all the parameters together in a single function, it is more natural to put the parsed parameters into the TF-M structure and pass the TF-M structure to the dispatch function. If the mailbox implementation provides a dedicated parse function for each PSA client call type, the mailbox message structure can be passed into the dispatch function and the mailbox parse function can be invoked inside the PSA client call handlers.
I will add a `...` to the dispatcher function and client call handler parameter lists. It may help make the pseudo code more precise.
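Just to illustrate the two options in rough C pseudo code (all type and function names below are hypothetical placeholders, not the final TF-M or mailbox APIs):

```c
/*
 * Illustration only: all names are placeholders, not the final TF-M or
 * mailbox APIs.
 *
 * Option A: the mailbox layer parses every parameter itself and hands a
 *           TF-M-defined structure to the dispatcher.
 * Option B: the dispatcher receives the raw mailbox message and each PSA
 *           client call handler invokes a per-type mailbox parse function.
 */
#include <stdint.h>

enum mailbox_call_type_t {
    MAILBOX_PSA_FRAMEWORK_VERSION,
    MAILBOX_PSA_VERSION,
    MAILBOX_PSA_CONNECT,
    MAILBOX_PSA_CALL,
    MAILBOX_PSA_CLOSE,
};

/* Structure defined by the mailbox design document (Option B input). */
struct mailbox_msg_t {
    enum mailbox_call_type_t type;
    /* ... internal mailbox information and raw call parameters ... */
};

/* Structure defined in TF-M, filled by a single mailbox parse function
 * according to the call type (Option A input).                         */
struct tfm_client_call_params_t {
    enum mailbox_call_type_t type;
    /* ... parameters collected according to the call type ... */
};

/* Option A: dispatch on an already-parsed TF-M structure. */
int32_t tfm_rpc_dispatch_parsed(const struct tfm_client_call_params_t *params, ...);

/* Option B: dispatch on the call type; each handler calls the dedicated
 * mailbox parse routine for that specific PSA client call.              */
int32_t tfm_rpc_dispatch_raw(const struct mailbox_msg_t *msg, ...)
{
    switch (msg->type) {
    case MAILBOX_PSA_CONNECT:
        /* psa_connect() handler parses the message here */
        return 0;
    case MAILBOX_PSA_CALL:
        /* psa_call() handler parses the message here */
        return 0;
    /* ... remaining call types ... */
    default:
        return -1;
    }
}
```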
Thanks, Danny
-----Original Message----- From: David Hu (Arm Technology China) <David.Hu@arm.com> Sent: Friday, March 29, 2019 12:51 To: Danny Shavit <Danny.Shavit@arm.com>; tf-m@lists.trustedfirmware.org; Christopher Brand <chris.brand@cypress.com> Cc: nd <nd@arm.com> Subject: Re: [twin-cpu]Dual core communication design document for review
Hi Danny,
Please check the inline comments below
On 3/28/2019 6:48 PM, Danny Shavit wrote:
Hi David, Please see my comments below: *2.2.* I think this is misleading. *Inter-Processor Communication* is something provided by the platform. The *mailbox* mechanism is implemented using the *Inter-Processor Communication* supplied by the platform. So the TF-M non-secure interface library notifies TF-M of the PSA client call request via mailbox (and not via mailbox and IPC).
[David] Yes, it makes sense. Since the mailbox concept is not always bound up with IPC, I intended to convey that the mailbox is implemented on top of IPC. But as you pointed out, it is a little confusing. I will specify the relationship between mailbox and IPC in another section.
*2.3.* Again, this is misleading. I think we should mention once that the mailbox is based on the Inter-Processor Communication capabilities of the platform, and from that point on mention only mailbox (and not Inter-Processor Communication). Same for *2.5.*
[David] The Inter-Processor Communication interrupt and its interrupt handler are both platform specific. The IPC interrupt handler is implemented by the platform vendor. According to the design, the IPC interrupt handler invokes some general mailbox APIs to execute mailbox operations. In contrast to IPC, the mailbox is a general service supporting all the vendor platforms. Strictly speaking, the mailbox service doesn't own a physical interrupt; it is connected to the IPC interrupt.
When TF-M core/SPM invokes mailbox APIs, IPC is indeed transparent to the upper layer and we don't need to mention IPC. But when it comes to the IPC interrupt, it is more reasonable to explicitly use IPC terms instead of mailbox. Thus I specifically mention both of them and describe their flows in detail in section 4.1 and section 4.2 respectively.
*3.1.* Addition to the parameters list: minor version; input vector length; output vector length
[David] Yes. I will make the description more precise. Thx.
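For example, a rough sketch of the parameter set a mailbox message could carry for psa_connect()/psa_call() requests (purely illustrative names, not the final definitions):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical sketch only: the real definitions come from the PSA client
 * API headers and the mailbox design document.
 */

/* Stand-in for psa_invec/psa_outvec. */
struct mailbox_vec_t {
    void   *base;
    size_t  len;
};

/* Parameters a mailbox message could carry for a psa_connect() request. */
struct mailbox_psa_connect_params_t {
    uint32_t sid;               /* Service ID                        */
    uint32_t minor_version;     /* requested minor version           */
};

/* Parameters a mailbox message could carry for a psa_call() request. */
struct mailbox_psa_call_params_t {
    int32_t  handle;            /* handle returned by psa_connect()  */
    const struct mailbox_vec_t *in_vec;
    size_t   in_len;            /* number of input vectors           */
    struct mailbox_vec_t *out_vec;
    size_t   out_len;           /* number of output vectors          */
};
```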
*4.* I am not sure about the meaning of top / bottom half w.r.t. PendSV. I think it should be better explained in the document. As far as I see it, the PendSV comes only at the end of the mailbox interrupt handler.
[David] The top and bottom halves are described in detail in section 4.2 (PendSV handler).
*4.1.* The recommendation to implement time-consuming mailbox operations like message copying in the PendSV handler has a price on the NSPE side. If we go for the mailbox design proposed at the workshop, then NSPE clients are blocked until the message is fully copied. For now, it seems more reasonable to me to do the mailbox work in the mailbox handler, unblock the client, and then assert PendSV for further handling in the SPM. *Regarding 4 and 4.1, I think we need further discussion to make sure we both have the same understanding.*
[David] There are several advantages to executing time-consuming operations in PendSV rather than in the IPC handler. I'd like to list just two of them here. 1. PendSV can be given a low priority, so the PendSV handler will not block other exception handling. This enhances the real-time capabilities of the whole system. 2. It simplifies the IPC interrupt priority setting. Otherwise, it is quite difficult to configure the IPC interrupt priority across diverse usage scenarios with different applications, to trade off the real-time requirements of responding to mailbox events against other exceptions. It is a similar idea to the *top half and bottom half of interrupt handling* concept in the Linux kernel. In the current design, the IPC interrupt handler is the *top half* of mailbox handling and the PendSV handler is the *bottom half*. Personally, I see putting mailbox handling in PendSV as a highlight of our workshop design.
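A minimal sketch of the split, assuming a generic Cortex-M platform with CMSIS and placeholder mailbox/platform function names:

```c
/*
 * Sketch only: mailbox/platform function names are placeholders; SCB and
 * the __DSB()/__ISB() intrinsics come from the device's CMSIS header.
 */
#include "cmsis.h"                      /* device CMSIS header providing SCB */

void platform_ipc_clear_interrupt(void);      /* placeholder prototypes */
void tfm_mailbox_set_event_pending(void);
void tfm_mailbox_handle_pending_events(void);

/* Top half: the platform IPC interrupt handler only acknowledges the
 * interrupt, records the pending mailbox event and asserts PendSV.     */
void platform_ipc_irq_handler(void)
{
    platform_ipc_clear_interrupt();     /* platform-specific acknowledge */
    tfm_mailbox_set_event_pending();    /* lightweight bookkeeping only  */

    /* Defer the heavy lifting to PendSV, which runs at low priority. */
    SCB->ICSR = SCB_ICSR_PENDSVSET_Msk;
    __DSB();
    __ISB();
}

/* Bottom half: PendSV does the time-consuming work, e.g. copying the
 * mailbox message(s), parsing them and dispatching to TF-M SPM.        */
void PendSV_Handler(void)
{
    tfm_mailbox_handle_pending_events();
}
```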
According to my own understanding, it may be unnecessary to block the NSPE side while waiting for TF-M to copy message(s). A common mailbox design is:
- The NSPE side maintains a mailbox queue. Each slot in the queue holds a mailbox message and related mailbox objects.
- When an application requests a secure service, the mailbox service in NSPE finds an empty slot in the queue and prepares the message in that slot.
- NSPE notifies SPE of the new mailbox event.
- SPE copies the pending mailbox message(s) from the NSPE mailbox queue. NSPE is blocked only while SPE accesses the NSPE mailbox queue, to avoid conflicts.
- While SPE is handling the mailbox messages, NSPE can continue inserting more mailbox messages into the NSPE mailbox queue, until the queue is full.
As shown above, it is only necessary to block NSPE when it is in the critical section (SPE accessing the NSPE queue) or when the queue is full. Whether the queue blocks or not mainly depends on the applications and the queue depth. But this is just an example for your reference; we don't have to strictly follow it.
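A rough sketch of such an NSPE-side queue, just as a reference (fixed slot count, field and function names are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: slot count, field and function names are placeholders. */
#define NUM_MAILBOX_QUEUE_SLOTS   4       /* example queue depth */

/* One slot holds a single mailbox message plus its related objects. */
struct mailbox_slot_t {
    bool     is_used;                     /* slot occupied by a pending request  */
    uint32_t msg[8];                      /* placeholder for the mailbox message */
    /* ... per-request objects, e.g. the waiting NSPE task handle ... */
};

/* NSPE-side queue, placed in memory that SPE can access. */
static struct mailbox_slot_t ns_mailbox_queue[NUM_MAILBOX_QUEUE_SLOTS];

/* Find and reserve an empty slot for a new request; returns the slot index
 * or -1 if the queue is full and the client has to wait.
 * NOTE: real code must protect this against the SPE-side critical section
 * (SPE reading the queue) and against concurrent NSPE tasks.              */
static int32_t mailbox_acquire_slot(void)
{
    for (int32_t i = 0; i < NUM_MAILBOX_QUEUE_SLOTS; i++) {
        if (!ns_mailbox_queue[i].is_used) {
            ns_mailbox_queue[i].is_used = true;
            return i;
        }
    }
    return -1;    /* queue full: block the client until a slot frees up */
}
```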
Please note that the recommendation is just a recommendation, not a mandatory implementation requirement. As I mentioned in the next paragraph, the IPC interrupt handler can complete more mailbox operations in an actual mailbox implementation. From the perspective of mailbox development, it is unnecessary to make everything exactly as planned in one shot. It makes more sense to implement the whole mailbox handling step by step. At first, we can put all the handling in the IPC interrupt handler. Eventually, we can move parts of the mailbox handling into the PendSV handler to optimize the whole system.
*4.2.* I think we need further explanation here. What does it mean that the mailbox handling should be added to the PendSV handler, and how is it going to be done? I think it is better to be explicit and add real function names.
[David] The details are covered in section 4.2.1. After the mailbox design document is ready and the mailbox APIs are determined, we can refer to the explicit mailbox APIs here. No pressure, Danny! :P
*4.2.1.*
- Please note that the current mailbox design supports handling multiple mailbox requests in SPE, but only one request can be issued at a time (until the request parameters are fully copied). So copying multiple messages is not relevant.
[David] As shown in the common mailbox design above, a mailbox queue is maintained on the NSPE side. Thus it is unnecessary to block NSPE while waiting for TF-M to copy messages. When TF-M is trapped in exception handling with a higher priority than the IPC interrupt, it cannot respond to the mailbox request in time. Assuming that a mailbox queue is maintained in NSPE, it is possible, although it may be rare considering the performance of M-profile CPUs, that multiple NSPE mailbox messages are pending from diverse NSPE applications.
It is indeed unlikely to find multiple messages with the current mailbox design. But the dual core communication is trying to provide a general prototype to cooperate with potentially diverse mailbox implementations. Thus it is necessary to cover the case of multiple pending mailbox messages in the dual core communication design (a small sketch follows at the end of this reply).
But anyway, as you can see in the document, it is not a must. So please feel free to continue the mailbox development.
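For illustration, a small self-contained sketch of how the SPE side could copy every pending message in one pass (all names are placeholders):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only: slot layout and names are placeholders. */
#define NUM_MAILBOX_QUEUE_SLOTS   4

struct mailbox_slot_t {
    bool     is_pending;                  /* set by NSPE, cleared by SPE */
    uint32_t msg[8];                      /* placeholder message payload */
};

/* SPE-local copies of the NSPE messages. */
static uint32_t spe_msg_copy[NUM_MAILBOX_QUEUE_SLOTS][8];

/* Copy every pending message out of the shared NSPE queue. In real code
 * this runs inside the critical section that briefly blocks NSPE access. */
static uint32_t mailbox_copy_pending(struct mailbox_slot_t *ns_slots)
{
    uint32_t copied = 0;

    for (uint32_t i = 0; i < NUM_MAILBOX_QUEUE_SLOTS; i++) {
        if (ns_slots[i].is_pending) {
            memcpy(spe_msg_copy[i], ns_slots[i].msg, sizeof(ns_slots[i].msg));
            ns_slots[i].is_pending = false;
            copied++;
        }
    }
    return copied;    /* may be more than one if several requests are queued */
}
```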
- “Mailbox handling parses the mailbox message copied in SPE and fetch the *information of the PSA client call type, including the PSA client call type*.” – Please rephrase.
[David] Good catch. Thx a lot.
*4.3.1.* A compilation flag is mentioned. I assume you mean a twin-cpu-system flag. This is general and is maybe better mentioned first in another section rather than in the replying routine section.
[David] I cannot 100% assume that all the twin-CPU-specific functions will share the same flag. Besides, different functions may prefer different implementations. So I'd like to keep it flexible.
*5.* Between the mailbox handling and the PSA client call handlers, I am missing the TF-M function that the mailbox should call to pass the client message.
[David] It is a dispatcher function provided by TF-M core/SPM. Please refer to step 3 in section 4.2.1 and the dispatcher function pseudo code in chapter 5.
Thanks, Danny -----Original Message----- From: David Hu (Arm Technology China) <David.Hu@arm.com> Sent: Wednesday, March 27, 2019 06:41 To: tf-m@lists.trustedfirmware.org; Christopher Brand <chris.brand@cypress.com>; Danny Shavit <Danny.Shavit@arm.com> Cc: nd <nd@arm.com> Subject: [twin-cpu]Dual core communication design document for review Hi all, I’ve posted a design document for dual core communication in the twin cpu project. Could you please review it at https://developer.trustedfirmware.org/w/tf_m/design/twin-cpu/communication_p... Any comment is welcome. Thank you. Best regards, Hu Ziji