Hi,
I have a question about hafnium executing PSCI_CPU_ON.
So what I observe is:
if I define a single SP, with 2 physical CPUs and 2 vCPUs defined in the SPMC manifest for the hypervisor and the secure partition, respectively,
and more than 2 CPUs are running in the non-secure world, then precisely at the moment Linux brings up the 3rd CPU with PSCI_CPU_ON, the EL3 PSCI call is forwarded to Hafnium running in secure EL2.
This is where it fails: since Hafnium doesn't find the 3rd CPU defined in the manifest, it hits the "Dead stop here if no more cpu" loop in cpu_entry:
.global cpu_entry
cpu_entry:
#if SECURE_WORLD == 1
/* Get number of cpus gathered from DT. */
adrp x3, cpu_count
add x3, x3, :lo12:cpu_count
ldr w3, [x3]
/* Prevent number of CPUs to be higher than supported by platform. */
cmp w3, #MAX_CPUS
bhi .
/* x0 points to first cpu in cpus array. */
adrp x0, cpus
add x0, x0, :lo12:cpus
/* Get current core affinity. */
get_core_affinity x1, x2
/* Dead stop here if no more cpu. */
0: cbz w3, 0b
sub w3, w3, #1
/* Get cpu id pointed to by x0 in cpu array. */
ldr x2, [x0, CPU_ID]
If I define this 3rd physical CPU in the Hafnium manifest, Hafnium proceeds past the code above. But it then hits plat_psci_cpu_resume and eventually aborts on the following check:
CHECK(vcpu_index < vm->vcpu_count);
cpu_init:
/* Call into C code, x0 holds the CPU pointer. */
bl cpu_main
/* Run the vCPU returned by cpu_main. */
bl vcpu_restore_all_and_run
It looks like Hafnium expects the SPMC manifest to define at least as many physical PEs as are running on the non-secure side, and it also expects the same number of vCPUs to be defined in the manifest.
My understanding, however, is that the number of PEs running on the non-secure side could be greater than the number of PEs/vCPUs on the secure side.
But from Hafnium's code flow, I don't think it supports that.
Regards,
Oza.
Hi all,
I'm testing FF-A notifications with OP-TEE and Hafnium. I'm using
interrupts from the secure UART as a trigger to set a notification for
the normal world. Sometimes when testing I run into:
VERBOSE: Secure virtual interrupt not yet serviced by SP 8001.
FFA_MSG_SEND_DIRECT_RESP interrupted
Hafnium then returns an FFA_ERROR (code -5, FFA_INTERRUPTED) as a response to the FFA_MSG_SEND_DIRECT_RESP that OP-TEE was just exiting with. After some digging in the code I found a comment at the top of plat_ffa_is_direct_response_interrupted():
https://git.trustedfirmware.org/hafnium/hafnium.git/tree/src/arch/aarch64/p…
/*
* A secure interrupt might trigger while the target SP is currently
* running to send a direct response. SPMC would then inject virtual
* interrupt to vCPU of target SP and resume it.
* However, it is possible that the S-EL1 SP could have its interrupts
* masked and hence might not handle the virtual interrupt before
* sending direct response message. In such a scenario, SPMC must
* return an error with code FFA_INTERRUPTED to inform the S-EL1 SP of
* a pending interrupt and allow it to be handled before sending the
* direct response.
*/
The specification doesn't mention this as a valid error code for
FFA_MSG_SEND_DIRECT_RESP. Is this something we can expect to be added
to the specification or at least something OP-TEE has to be prepared
to handle regardless?
As far as I can tell there's no way of guaranteeing that Hafnium will
not return this error for FFA_MSG_SEND_DIRECT_RESP. Even if we were
able to execute the smc instruction with secure interrupts unmasked,
what if the interrupt is raised just after the smc instruction has
been trapped in Hafnium?
It is a bit inconvenient, as it means saving the registers passed to the smc instruction so that it can be restarted with the same arguments. It seems we may need to redesign the exit procedure.
It would be nice to have an example of how an S-EL1 SP is supposed to exit with FFA_MSG_SEND_DIRECT_RESP.
Thoughts?
Thanks,
Jens
Hi,
As of today, Hafnium is built with a single make command that builds the Hafnium binaries (e.g. the SPMC) and tests for all platforms at once.
This is an overhead for deployments or integrations downstream that are only interested in building a single-platform SPMC image and are not interested in running the Hafnium tests.
We have a proposal to help scale to those deployments where only an SPMC binary for a given platform is required. On the benefits side, this greatly decreases the build time for such deployments and limits build dependencies (e.g. those within the test framework).
There are certainly more complex solutions we thought about (like segregating individual project git trees beyond 'reference'). However, the proposed change remains simple and keeps the build system backwards compatible with existing deployments.
Essentially, this introduces a PLATFORM command line build option for building the SPMC for one given platform (omitting tests). If required, it can be combined with the existing PROJECT variable.
Example usages:
* make [PROJECT=reference] : builds all platforms from the 'reference' project, including tests (same as today)
* make PLATFORM=secure_aem_v8a_fvp_vhe : builds the SPMC targeting FVP from the 'reference' project (omitting tests); similarly for Arm solutions, make PLATFORM=secure_tc or make PLATFORM=secure_rd_fremont
* make PROJECT=project1 : builds all platforms from the alternate project 'project1'
* make PROJECT=project2 PLATFORM=platform1 : builds a single 'platform1' Hafnium binary from project 'project2'
See the changes:
https://review.trustedfirmware.org/c/hafnium/hafnium/+/24822
https://review.trustedfirmware.org/c/hafnium/project/reference/+/24821/
Let us know if the approach sounds reasonable.
Regards,
Olivier.
Hi,
https://hafnium.readthedocs.io/en/latest/design/ffa-manifest-binding.html
has a nice description of the manifest bindings. The intention is
obviously to match the fields in the FF-A specification. However, the
"pages-count" field in the bindings is called "page count" in the
specification. Any chance of fixing this so the two fields have the
same name?
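For context, this is roughly where the field appears in a partition manifest (a minimal, hypothetical memory-regions node following that binding document; the node name and values are made up):

```dts
memory-regions {
	compatible = "arm,ffa-manifest-memory-regions";

	test-memory {
		description = "test-memory";
		pages-count = <4>;	/* called "page count" in the FF-A spec */
		base-address = <0x00000000 0x7100000>;
		attributes = <0x3>;	/* read-write */
	};
};
```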
Thanks,
Jens
Hi All,
Note: you may have received another instance of this note, but when I attempted to send to all TF mailing lists simultaneously it seemed to fail, so I am sending to each one at a time. Sorry about that. :/
We've created a Discord Server for real time chats/sharing. This solution
comes at no cost to the project, is set up with channels for each project,
includes a #general channel, and supports direct 1-1 chats between members,
all with the goal of improving collaboration between trustedfirmware.org
developers.
We encourage all to join! :) Instructions for joining can be found on the TF.org FAQ page <https://www.trustedfirmware.org/faq/>.
See you all there and please don't hesitate to reach out if you have any
questions!
Don Harbin
TrustedFirmware Community Manager
don.harbin(a)linaro.org
Hi,
This is to inform you that the Hafnium documentation is being migrated to its own ReadTheDocs project (from the TF-A RTD project):
https://hafnium.readthedocs.io
This primarily concerns the SPM design doc and threat model.
Hafnium documentation changes should target the hafnium tree moving forward:
https://git.trustedfirmware.org/hafnium/hafnium.git/tree/docs
The remaining work over the coming days is:
- creating a docs watcher to automatically trigger RTD builds.
- polishing a few leftovers originating from TF-A in the migrated documents.
- cleanly removing the now-duplicated docs from the TF-A tree.
If you have any questions, feel free to reach out to the team.
Regards,
Olivier.