Hi Yuye,
Is this a violation of the FF-A spec, or is there any other problem?
I don't think it is a violation of the FF-A spec. This is an implementation-defined mechanism and hence is out of scope concerning the spec. However, the current approach taken by the Hafnium SPMC is most likely due to historical legacy reasons. If you were to implement the new approach (which would mimic the vCPU boot order on secondary CPUs, similar to the primary CPU), it could have some ramifications on the current NWd driver for downstream projects. I will let Olivier and Raghu comment on it.
Thanks, Madhu
From: 梅建强(禹夜) meijianqiang.mjq@alibaba-inc.com Sent: Wednesday, July 19, 2023 3:09 AM To: Olivier Deprez Olivier.Deprez@arm.com; Madhukar Pappireddy Madhukar.Pappireddy@arm.com Cc: hafnium hafnium@lists.trustedfirmware.org; 高海源(码源) haiyuan.ghy@alibaba-inc.com; 王一蒙(北见) wym389994@alibaba-inc.com; 黄明(连一) hm281385@alibaba-inc.com Subject: [Hafnium] multiple SP secondary boot
Hi, Madhu, Olivier,
I am glad that I have the same understanding as both of you about the description in FF-A 1.1 section 18.3.2.1.1.
Other points to discuss:
For SPs beyond the first SP, it is up to the normal world driver to invoke FFA_RUN for each secondary vCPU.
OK, I basically understand what you said. But I did not use this interface. I need to make my current implementation clear for booting secondary vCPUs on SPs beyond the first SP. I connect all vCPUs with the same index across all SPs. If there is no vCPU for the current index, I just skip to the vCPU with the same index on the next SP. So for each index, we can build a vCPU boot list. Later, when the NWd boots, upon invocation of PSCI_CPU_ON, each secondary vCPU of the first SP gets initialized. Then Hafnium starts the next secondary vCPU on the SPs beyond the first SP according to the boot list. At present, we have implemented it and tested it without problems. Is this a violation of the FF-A spec, or is there any other problem?
Note I don't believe the current optee linux driver supports the above. In which case the scenario you describe (multiple optee instances) might not work out of the box.
Agree.
Also note there must be other issues, like how to handle the secure storage when multiple optee instances exist.
Could you describe it in more detail?
Thanks for your support.
Regards, Yuye.
------------------------------------------------------------------ From: Olivier Deprez <Olivier.Deprez@arm.com> Sent: Monday, July 17, 2023 22:29 To: Madhukar Pappireddy <Madhukar.Pappireddy@arm.com>; 梅建强(禹夜) <meijianqiang.mjq@alibaba-inc.com> Cc: hafnium <hafnium@lists.trustedfirmware.org>; 高海源(码源) <haiyuan.ghy@alibaba-inc.com>; 王一蒙(北见) <wym389994@alibaba-inc.com>; 黄明(连一) <hm281385@alibaba-inc.com> Subject: Re: [Hafnium] multiple SP secondary boot
Hi,
I'm adding to Madhu's reply: At TF-A/secure boot time, before the NWd is initialized, the first vCPU of each SP is resumed in sequence by the SPMC. This gives each SP the opportunity to define a secondary boot address for the later resuming of secondary vCPUs. Later, when the NWd boots, upon invocation of PSCI_CPU_ON, each secondary vCPU of the first SP gets initialized. For SPs beyond the first SP, it is up to the normal world driver to invoke FFA_RUN for each secondary vCPU.
Note I don't believe the current optee linux driver supports the above, in which case the scenario you describe (multiple optee instances) might not work out of the box. Also note there must be other issues, like how to handle the secure storage when multiple optee instances exist. This might work, though, if you decide to strip down the optee instances to a very minimal set of features that do not use the same resources. We might help further if you provide a bit more detail about the intended setup/scenario.
Regards, Olivier.
________________________________ From: Madhukar Pappireddy <Madhukar.Pappireddy@arm.com> Sent: 17 July 2023 16:19 To: 梅建强(禹夜) <meijianqiang.mjq@alibaba-inc.com>; Olivier Deprez <Olivier.Deprez@arm.com> Cc: hafnium <hafnium@lists.trustedfirmware.org>; 高海源(码源) <haiyuan.ghy@alibaba-inc.com>; 王一蒙(北见) <wym389994@alibaba-inc.com>; 黄明(连一) <hm281385@alibaba-inc.com> Subject: RE: [Hafnium] multiple SP secondary boot
Hi Yuye,
I think the following sentence in the spec is slightly confusing: "For each SP and the SPMC, the Framework assumes that the same entry point address is used for initializing any execution context during a secondary cold boot." I believe what it means is that the entry point address for any secondary execution context of a particular SP is the same. Note that the entry point of secondary execution contexts belonging to another SP is different from the one previously registered. I can confirm that Hafnium adheres to the above statement.
Please note that the cold boot flow is different for secondary cores compared to the primary core. On the primary core, the SPMC initializes the execution contexts of all SPs (refer to [1]). However, execution contexts of non-primary SPs on secondary cores need CPU cycles to be allocated by the NWd scheduler through the FFA_RUN interface (refer to [2]).
Hopefully, I was able to answer your questions. [1] https://git.trustedfirmware.org/hafnium/hafnium.git/tree/src/arch/aarch64/pl... [2] https://git.trustedfirmware.org/hafnium/hafnium.git/tree/src/api.c#n1047
Thanks, Madhu
-----Original Message----- From: 梅建强(禹夜) via Hafnium <hafnium@lists.trustedfirmware.org> Sent: Saturday, July 15, 2023 4:54 AM To: Olivier Deprez <Olivier.Deprez@arm.com> Cc: hafnium <hafnium@lists.trustedfirmware.org>; 高海源(码源) <haiyuan.ghy@alibaba-inc.com>; 王一蒙(北见) <wym389994@alibaba-inc.com>; 黄明(连一) <hm281385@alibaba-inc.com> Subject: [Hafnium] multiple SP secondary boot
Hi Olivier,

There's a problem I'd like to discuss with you. FF-A 1.1 section 18.3.2.1.1 describes the following:

"For each SP and the SPMC, the Framework assumes that the same entry point address is used for initializing any execution context during a secondary cold boot."

This does not seem to make sense for the entry point addresses of multiple SPs. In addition, I do not see support for the secondary boot of multiple SPs in the current implementation of Hafnium. If I have multiple optee instances of MP type, how should I design the cold secondary boot flow for multiple SPs? My idea is to build a vCPU list with the same index between VMs, just like the primary boot flow, as shown in the code below:

void vcpu_update_boot(struct vcpu *vcpu)
{
	struct vcpu *current = NULL;
	struct vcpu *previous = NULL;

	if (boot_vcpu == NULL) {
		boot_vcpu = vcpu;
		return;
	}

	current = boot_vcpu;
	while (current != NULL &&
	       current->vm->boot_order <= vcpu->vm->boot_order) {
		previous = current;
		current = current->next_boot;
	}

	if (previous != NULL) {
		previous->next_boot = vcpu;
	} else {
		boot_vcpu = vcpu;
	}

	vcpu->next_boot = current;
}

Does anyone else have some good suggestions? Thanks for the support.

Regards, Yuye.

--
Hafnium mailing list -- hafnium@lists.trustedfirmware.org
To unsubscribe send an email to hafnium-leave@lists.trustedfirmware.org