Hi Olivier,

There's a problem I'd like to discuss with you. In section 18.3.2.1.1 of the FF-A v1.1 spec, it is described as follows:

"For each SP and the SPMC, the Framework assumes that the same entry point address is used for initializing any execution context during a secondary cold boot."

This does not seem to make sense for the entry point addresses of multiple SPs. In addition, I do not see support for secondary boot of multiple SPs in the current implementation of Hafnium. If I have multiple optee instances of MP type, how should I design the cold secondary boot flow for multiple SPs?

My idea is to build a vCPU list with the same index across VMs, just like the primary boot flow, as shown in the code below:

void vcpu_update_boot(struct vcpu *vcpu)
{
	struct vcpu *current = NULL;
	struct vcpu *previous = NULL;

	/* Empty list: this vCPU becomes the head. */
	if (boot_vcpu == NULL) {
		boot_vcpu = vcpu;
		return;
	}

	/* Walk the list until a vCPU with a higher boot_order is found. */
	current = boot_vcpu;
	while (current != NULL &&
	       current->vm->boot_order <= vcpu->vm->boot_order) {
		previous = current;
		current = current->next_boot;
	}

	/* Link the new vCPU into the list in boot order. */
	if (previous != NULL) {
		previous->next_boot = vcpu;
	} else {
		boot_vcpu = vcpu;
	}
	vcpu->next_boot = current;
}

Does anyone else have some good suggestions? Thanks for the support.

Regards,
Yuye.
Hi Yuye,
I think the following sentence in the spec is slightly confusing: "For each SP and the SPMC, the Framework assumes that the same entry point address is used for initializing any execution context during a secondary cold boot." I believe what it means is that the entry point address for any secondary execution context of a particular SP is the same. Note that the entry point of secondary execution contexts belonging to another SP is different from the one previously registered. I can confirm that Hafnium adheres to this interpretation.
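To make the per-SP registration concrete, below is a minimal sketch of how an SP could register its own secondary entry point from its first execution context during primary cold boot. The smc() helper and the sp_secondary_entry symbol are assumptions, not Hafnium or optee code; the SMC64 function ID is from FF-A v1.1:

#include <stdint.h>

/* FFA_SECONDARY_EP_REGISTER, SMC64 convention (FF-A v1.1). */
#define FFA_SECONDARY_EP_REGISTER_64 0xC4000087ULL

/* Assumed SMC conduit helper provided by the SP's runtime. */
uint64_t smc(uint64_t fid, uint64_t a1, uint64_t a2, uint64_t a3);

/* Entry point shared by all secondary execution contexts of THIS SP. */
extern void sp_secondary_entry(void);

/*
 * Each SP registers its own address once, during primary cold boot.
 * The SPMC records one entry point per SP, which is how I read the
 * spec sentence quoted above.
 */
void register_secondary_ep(void)
{
	smc(FFA_SECONDARY_EP_REGISTER_64, (uint64_t)sp_secondary_entry,
	    0, 0);
}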
Please note that the cold boot flow is different for secondary cores compared to the primary core. On the primary core, the SPMC initializes the execution contexts of all SPs (refer to [1]). However, execution contexts of non-primary SPs on secondary cores need CPU cycles to be allocated by the NWd scheduler through the FFA_RUN interface (refer to [2]).
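As a rough illustration of the NWd side, here is a hedged sketch of the FFA_RUN call (helper names are hypothetical; per FF-A v1.1, w1 carries the endpoint ID in bits [31:16] and the vCPU index in bits [15:0]):

#include <stdint.h>

/* FFA_RUN, SMC32 convention (FF-A v1.1). */
#define FFA_RUN 0x8400006DULL

/* Assumed SMC conduit helper on the NWd side. */
uint64_t smc(uint64_t fid, uint64_t a1, uint64_t a2, uint64_t a3);

/* Donate CPU cycles to one execution context of one SP. */
static inline uint64_t ffa_run(uint16_t sp_id, uint16_t vcpu_idx)
{
	return smc(FFA_RUN, ((uint64_t)sp_id << 16) | vcpu_idx, 0, 0);
}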
Hopefully, I was able to answer your questions.

[1] https://git.trustedfirmware.org/hafnium/hafnium.git/tree/src/arch/aarch64/pl...
[2] https://git.trustedfirmware.org/hafnium/hafnium.git/tree/src/api.c#n1047
Thanks, Madhu
Hi,
I'm adding to Madhu's reply: at TF-A/secure boot time, before the NWd is initialized, the first vCPU of each SP is resumed in sequence by the SPMC. This gives each SP the opportunity to register a secondary boot address for later resuming of its secondary vCPUs. Later, when the NWd boots, upon each invocation of PSCI_CPU_ON, the corresponding secondary vCPU of the first SP gets initialized. For SPs beyond the first, it is up to the normal world driver to invoke FFA_RUN for each secondary vCPU.
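For illustration only, that normal world driver responsibility could look like the following CPU-online hook. This is a sketch, not existing driver code: ffa_run() is the wrapper from the earlier sketch, and cpu_to_vcpu_index(), sp_ids[] and num_sps are assumed driver state:

/*
 * Hypothetical hook run when a core comes online, after PSCI_CPU_ON
 * has already initialized the first SP's vCPU on that core: donate
 * cycles to every remaining MP SP's vCPU so it runs its cold boot
 * path at the secondary entry point it registered earlier.
 */
static int spmc_cpu_online(unsigned int cpu)
{
	uint16_t vcpu_idx = cpu_to_vcpu_index(cpu); /* assumed mapping */
	size_t i;

	for (i = 1; i < num_sps; i++)
		ffa_run(sp_ids[i], vcpu_idx);

	return 0;
}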
Note I don't believe the current optee linux driver supports the above, in which case the scenario you describe (multiple optee instances) might not work out of the box. Also note there must be other issues, such as how to handle the secure storage when multiple optee instances exist. This might still work if you strip the optee instances down to a very minimal set of features that do not use the same resources. We might help further if you provide a bit more detail about the intended setup/scenario.
Regards, Olivier.