Hi,
This is to inform you that the Hafnium documentation is being migrated to its own ReadTheDocs project (from the TF-A RTD project):
https://hafnium.readthedocs.io
This primarily concerns the SPM design doc and threat model.
Hafnium documentation changes should target the hafnium tree moving forward:
https://git.trustedfirmware.org/hafnium/hafnium.git/tree/docs
The remaining work over the coming days is:
- creating a docs watcher automatically triggering RTD builds.
- polishing a few leftovers originating from TF-A in the migrated documents.
- cleanly removing the now duplicated docs from the TF-A tree.
If you have any questions, feel free to reach out to the team.
Regards,
Olivier.
Hi Madhu, Olivier,
I am glad that I have the same understanding as both of you about the description in FF-A 1.1 section 18.3.2.1.1.
Other points to discuss:
>For SPs beyond the first SP, it is up to the normal world driver to invoke FFA_RUN for each secondary vCPU.
OK, I basically understand what you said, but I did not use this interface.
Let me clarify my current implementation for booting secondary vCPUs on SPs beyond the first SP.
I link together all vCPUs that share the same index across all SPs. If an SP has no vCPU for the current index, it is skipped and the vCPU with the same index on the next SP is linked instead.
So for each index, we can build a vCPU boot list.
Later, when the NWd boots, upon invocation of PSCI_CPU_ON, each secondary vCPU of the first SP gets initialized.
Then Hafnium starts the next secondary vCPU on the SPs beyond the first SP according to the boot list.
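Roughly, the chaining step looks like the sketch below (the helper names are illustrative only, not real Hafnium symbols):
/*
 * Illustrative sketch only: once the secondary vCPU of one SP has
 * finished its cold boot on this physical core, resume the vCPU with
 * the same index on the next SP in the per-index boot list.
 * vcpu_set_secondary_entry() and spmc_resume() are hypothetical helpers.
 */
static void boot_next_secondary(struct vcpu *current)
{
	struct vcpu *next = current->next_boot; /* same vCPU index, next SP */

	if (next == NULL) {
		return; /* last SP on this core: control returns to the NWd */
	}

	vcpu_set_secondary_entry(next); /* PC = the SP's registered secondary EP */
	spmc_resume(next);              /* hand the physical core to that vCPU */
}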
At present, we have implemented it and tested it without problems.
Is this a violation of the FF-A spec, or is there any other problem?
>Note I don't believe the current optee linux driver supports the above.
>In which case the scenario you describe (multiple optee instances) might not work out of the box.
Agreed.
>Also note there must be other issues like how to handle the secure storage when multiple optee instances exist.
Could you describe it in more detail?
Thanks for your support.
Regards,
Yuye.
------------------------------------------------------------------
From: Olivier Deprez <Olivier.Deprez(a)arm.com>
Sent: Monday, 17 July 2023 22:29
To: Madhukar Pappireddy <Madhukar.Pappireddy(a)arm.com>; 梅建强(禹夜) <meijianqiang.mjq(a)alibaba-inc.com>
Cc: hafnium <hafnium(a)lists.trustedfirmware.org>; 高海源(码源) <haiyuan.ghy(a)alibaba-inc.com>; 王一蒙(北见) <wym389994(a)alibaba-inc.com>; 黄明(连一) <hm281385(a)alibaba-inc.com>
Subject: Re: [Hafnium] multiple SP secondary boot
Hi,
I'm adding to Madhu's reply:
At TF-A/secure boot time, before the NWd is initialized, the first vCPU of each SP is resumed in sequence by the SPMC. This gives each SP the opportunity to define a secondary boot address for the later resuming of its secondary vCPUs.
Later, when the NWd boots, upon invocation of PSCI_CPU_ON, each secondary vCPU of the first SP gets initialized.
For SPs beyond the first SP, it is up to the normal world driver to invoke FFA_RUN for each secondary vCPU.
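For illustration, a minimal sketch of such a driver-side call (a Linux-kernel-style SMCCC helper is assumed; FFA_RUN packs the target endpoint ID and vCPU index into w1):
#include <linux/arm-smccc.h>
#include <linux/types.h>

/* FFA_RUN function ID (SMC32 convention) as defined by FF-A v1.1. */
#define FFA_RUN_32 0x8400006dU

/*
 * Sketch only: ask the SPMC to run execution context 'vcpu_index' of the
 * SP with endpoint ID 'sp_id'. w1 = (endpoint ID << 16) | vCPU index.
 * Return value and error handling are omitted for brevity.
 */
static void ffa_run_secondary(u16 sp_id, u16 vcpu_index)
{
	struct arm_smccc_res res;

	arm_smccc_smc(FFA_RUN_32, ((u32)sp_id << 16) | vcpu_index,
		      0, 0, 0, 0, 0, 0, &res);
}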
Note I don't believe the current optee linux driver supports the above. In which case the scenario you describe (multiple optee instances) might not work out of the box.
Also note there must be other issues like how to handle the secure storage when multiple optee instances exist.
This might work though if you decide to strip down the optee instances to a very minimal set of features that do not use the same resources.
We might help further if you provide a bit more detail about the intended setup/scenario.
Regards,
Olivier.
From: Madhukar Pappireddy <Madhukar.Pappireddy(a)arm.com>
Sent: 17 July 2023 16:19
To: 梅建强(禹夜) <meijianqiang.mjq(a)alibaba-inc.com>; Olivier Deprez <Olivier.Deprez(a)arm.com>
Cc: hafnium <hafnium(a)lists.trustedfirmware.org>; 高海源(码源) <haiyuan.ghy(a)alibaba-inc.com>; 王一蒙(北见) <wym389994(a)alibaba-inc.com>; 黄明(连一) <hm281385(a)alibaba-inc.com>
Subject: RE: [Hafnium] multiple SP secondary boot
Hi Yuye,
I think the following sentence in the spec is slightly confusing.
" For each SP and the SPMC, the Framework assumes that the same entry point address is used for initializing any execution context during a secondary cold boot."
I believe what it means is that the entry point address for any secondary execution context of a particular SP is the same. Note that the entry point of secondary execution contexts belonging to another SP is different from the one previously registered. I can confirm that Hafnium adheres to the above statement.
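(As an aside, the registration itself is done by each SP from its first execution context, roughly as sketched below; the smc64() wrapper and symbol names are assumptions, only the function ID comes from FF-A v1.1.)
#include <stdint.h>

/* FFA_SECONDARY_EP_REGISTER function ID (SMC64 convention), FF-A v1.1. */
#define FFA_SECONDARY_EP_REGISTER_64 0xC4000087U

extern void secondary_ep(void); /* this SP's secondary cold boot entry point */

/* Assumed SMC conduit provided by the SP's runtime. */
extern uint64_t smc64(uint64_t func_id, uint64_t arg1, uint64_t arg2,
		      uint64_t arg3);

void register_secondary_ep(void)
{
	/* x1 = entry point address used by all of this SP's secondary vCPUs. */
	smc64(FFA_SECONDARY_EP_REGISTER_64, (uint64_t)(uintptr_t)secondary_ep,
	      0, 0);
}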
Please note that the cold boot flow is different for secondary cores compared to the primary core. On the primary core, the SPMC initializes the execution contexts of all SPs (refer to [1]). However, execution contexts of non-primary SPs on secondary cores need CPU cycles to be allocated by the NWd scheduler through the FFA_RUN interface (refer to [2]).
Hopefully, I was able to answer your questions.
[1] https://git.trustedfirmware.org/hafnium/hafnium.git/tree/src/arch/aarch64/p…
[2] https://git.trustedfirmware.org/hafnium/hafnium.git/tree/src/api.c#n1047
Thanks,
Madhu
-----Original Message-----
From: 梅建强(禹夜) via Hafnium <hafnium(a)lists.trustedfirmware.org>
Sent: Saturday, July 15, 2023 4:54 AM
To: Olivier Deprez <Olivier.Deprez(a)arm.com>
Cc: hafnium <hafnium(a)lists.trustedfirmware.org>; 高海源(码源) <haiyuan.ghy(a)alibaba-inc.com>; 王一蒙(北见) <wym389994(a)alibaba-inc.com>; 黄明(连一) <hm281385(a)alibaba-inc.com>
Subject: [Hafnium] multiple SP secondary boot
Hi, Olivier,
There's a problem I'd like to discuss with you.
In FF-A 1.1 section 18.3.2.1.1, it is described as follows:
For each SP and the SPMC, the Framework assumes that the same entry point address is used for initializing any execution context during a secondary cold boot.
This does not seem to make sense for the entry point addresses of multiple SPs.
In addition, I do not see support for the secondary boot of multiple SPs in the current implementation of Hafnium.
If I have multiple optee instances of MP type, how should I design the cold secondary boot flow for multiple SPs?
My idea is to build a list of vCPUs with the same index across VMs, just like the primary boot flow, as shown in the code below:
void vcpu_update_boot(struct vcpu *vcpu)
{
	struct vcpu *current = NULL;
	struct vcpu *previous = NULL;

	if (boot_vcpu == NULL) {
		boot_vcpu = vcpu;
		return;
	}

	current = boot_vcpu;
	while (current != NULL &&
	       current->vm->boot_order <= vcpu->vm->boot_order) {
		previous = current;
		current = current->next_boot;
	}

	if (previous != NULL) {
		previous->next_boot = vcpu;
	} else {
		boot_vcpu = vcpu;
	}

	vcpu->next_boot = current;
}
Anyone else have some good suggestions?
Thanks for the support.
Regards,
Yuye.
Hi all,
I'd like to ask some questions about how the SMMU S-2 translation works.
My questions are as follows:
1. When the SMMU performs Stage-2 translation (I mean, fetches the translation table and walks it), is it constrained by the GPC on the CPU MMU, the GPC on the SMMU, or both?
For example, assume I configure the PAS of the non-secure SMMU S-2 translation table as "secure/realm/root PAS" in the CPU MMU GPC, but as "non-secure PAS" in the SMMU GPC (e.g., the SMMU for the TestEngine on FVP); will the SMMU successfully perform the S-2 translation?
Or, in another example, assume I configure the PAS of the non-secure SMMU S-2 translation table as "non-secure PAS" in the CPU MMU GPC, but as "secure/realm/root PAS" in the SMMU GPC; will the SMMU successfully perform the S-2 translation?
2. Currently I use Hafnium to configure the SMMU. Due to memory limitations, I want to place the SMMU Stage-2 table in DRAM2 (starting from 0x8_8000_0000, but this region is not mapped in Hafnium). Since I think the SMMU S-2 translation is influenced by the EL2 S-1 translation (is that right?), can I turn off the EL2 S-1 translation in Hafnium to avoid this problem? If so, how can I do it?
Sincerely,
WANG Chenxu
Hi, Olivier,
There's a problem I'd like to discuss with you.
In FF-A 1.1 section 18.3.2.1.1, it is described as follows:
For each SP and the SPMC, the Framework assumes that the same entry point address is used for initializing any execution context during a secondary cold boot.
This does not seem to make sense for the entry point addresses of multiple SPs.
In addition, I do not see support for the secondary boot of multiple SPs in the current implementation of Hafnium.
If I have multiple optee instances of MP type, how should I design the cold secondary boot flow for multiple SPs?
My idea is to build a list of vCPUs with the same index across VMs, just like the primary boot flow, as shown in the code below:
void vcpu_update_boot(struct vcpu *vcpu)
{
	struct vcpu *current = NULL;
	struct vcpu *previous = NULL;

	/* Empty list: the new vCPU becomes the head. */
	if (boot_vcpu == NULL) {
		boot_vcpu = vcpu;
		return;
	}

	/* Walk the list until a vCPU with a higher boot_order is found. */
	current = boot_vcpu;
	while (current != NULL &&
	       current->vm->boot_order <= vcpu->vm->boot_order) {
		previous = current;
		current = current->next_boot;
	}

	/* Insert the new vCPU between 'previous' and 'current'. */
	if (previous != NULL) {
		previous->next_boot = vcpu;
	} else {
		boot_vcpu = vcpu;
	}

	vcpu->next_boot = current;
}
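For context, a sketch of how such a list could be populated for one secondary vCPU index (illustrative only; with a single global boot_vcpu head as written above, a separate head per index, e.g. an array sized MAX_CPUS, would be needed to keep the per-index lists apart):
/*
 * Illustrative only, not actual loader code: link the vCPUs that share
 * one secondary index across all partitions. The helper names follow
 * Hafnium's vm.h (vm_get_count, vm_find_index, vm_get_vcpu), but the
 * iteration itself is an assumption.
 */
void boot_list_build_for_index(uint16_t index)
{
	uint16_t i;

	for (i = 0; i < vm_get_count(); i++) {
		struct vm *vm = vm_find_index(i);

		if (index < vm->vcpu_count) {
			vcpu_update_boot(vm_get_vcpu(vm, index));
		}
	}
}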
Anyone else have some good suggestions?
Thanks for the support.
Regards,
Yuye.