Hi All,
The next release of the Firmware-A bundle of projects, tagged v2.12, has an expected code freeze date of Nov 8th, 2024.
Refer to the Release Cadence section of the TF-A documentation (https://trustedfirmware-a.readthedocs.io/en/latest/about/release-informatio…).
Closing out the release takes around 6-10 working days after the code freeze.
Preparation tasks for the v2.12 release start now.
We want to ensure that planned feature patches for the release are submitted in good time for the review process to conclude.
As a kind recommendation, and to share CI resources fairly, please launch CI jobs with care, e.g.:
- For simple platform or docs changes, or one-liners, use the Allow-CI+1 label (no need for a full Allow-CI+2 run).
- For large patch stacks, use Allow-CI+2 at the top of the patch stack (and, if required, a few individual Allow-CI+1 labels in the middle of the stack).
- Carefully analyze results and fix the change if required, before launching new jobs on the same change.
- If, after issuing an Allow-CI+1 or Allow-CI+2 label, a Build start notice is not added as a Gerrit comment on the patch right away, please be patient: under heavy load CI jobs can be queued, and in extreme conditions it can be over an hour before the Build start notice is issued. Issuing another Allow-CI+1 or Allow-CI+2 label will just result in an additional job being queued.
--
Thanks,
Govindraj R
+ other MLs
________________________________
From: Olivier Deprez
Sent: 30 October 2024 11:41
To: tf-a@lists.trustedfirmware.org <tf-a@lists.trustedfirmware.org>
Subject: TF-A Tech Forum regular call
Dear TF-A ML members,
As mentioned on https://www.trustedfirmware.org/meetings/tf-a-technical-forum/, trustedfirmware.org hosts regular technical calls on Thursdays. The page mentions TF-A, although in practice a number of Cortex-A projects beyond TF-A have been discussed (refer to the prior recordings on that page).
Unfortunately this slot hasn't been very active recently.
With this email I'd like to emphasize that this forum is open to the community (beyond trustedfirmware.org members) and that you are welcome to propose topics. Presentations/slides are not strictly necessary; we can also host informal discussions or Q&A sessions. If you think of a topic, please reach out to me and I'll be happy to accommodate it.
Thanks for your contributions in advance!
Regards,
Olivier.
Hi,
I have a couple of questions about SMP support in the Linux FF-A driver.
We'd like to run multiple SPs that use secondary cores.
The first SP can initialize all of its secondary vCPUs when Linux invokes CPU_ON for all secondary cores.
But the other SP doesn't initialize its secondary vCPUs.
If my understanding is correct,
they will be initialized when FFA_RUN is invoked on each of those vCPUs [1],
and those FFA_RUN calls are issued by the normal-world driver in Linux [2].
But I can't find any implementation of this in the latest upstream Linux (v6.10).
Actually, I found ffa_cpu_ops in the Linux code [3], but couldn't find the place where it is used.
I think that ffa_run has to be called in the driver to initialize secondary vCPUs of secondary VMs.
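For clarity, this is a minimal sketch of the kind of call I would expect the driver to make (the FFA_RUN function ID and the w1 encoding are taken from the FF-A spec; the helper name and surrounding code are purely illustrative, not from the upstream driver):

#include <linux/arm-smccc.h>
#include <linux/types.h>

#define FFA_RUN_32    0x8400006DU   /* FF-A FFA_RUN function ID */
#define FFA_ERROR_32  0x84000060U   /* FF-A FFA_ERROR function ID */

/* Hypothetical helper: ask the SPMC to run vCPU 'vcpu_id' of partition
 * 'sp_id'. Per the FF-A spec, w1 carries the target: endpoint ID in
 * bits[31:16] and vCPU ID in bits[15:0]. */
static int ffa_run_vcpu(u16 sp_id, u16 vcpu_id)
{
        struct arm_smccc_res res;

        arm_smccc_1_1_smc(FFA_RUN_32, ((u32)sp_id << 16) | vcpu_id,
                          0, 0, 0, 0, 0, 0, &res);

        /* On failure the SPMC returns FFA_ERROR with the code in w2. */
        if (res.a0 == FFA_ERROR_32)
                return (int)res.a2;

        return 0;
}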
Am I missing something in the code?
Or is there no implementation in upstream yet?
I'd like to know the current status of secondary VM SMP support in Linux.
If the Linux driver doesn't support it yet,
do you have any plans to implement and merge it?
Thanks,
Sungbae Yoo.
[1] https://hafnium.readthedocs.io/en/latest/secure-partition-manager/secure-pa…
[2] https://lists.trustedfirmware.org/archives/list/hafnium@lists.trustedfirmwa…
[3] https://elixir.bootlin.com/linux/v6.11-rc5/source/drivers/firmware/arm_ffa/…
Hi,
Could you add the patch below to gicv3_helpers.h?
--- a/src/arch/aarch64/plat/interrupts/gicv3_helpers.h
+++ b/src/arch/aarch64/plat/interrupts/gicv3_helpers.h
@@ -203,6 +203,16 @@ static inline void gicd_wait_for_pending_write(uintptr_t gicd_base)
}
}
+static inline void gicd_set_ctlr(uintptr_t base,
+ unsigned int bitmap,
+ unsigned int rwp)
+{
+ gicd_write_ctlr(base, gicd_read_ctlr(base) | bitmap);
+ if (rwp != 0U) {
+ gicd_wait_for_pending_write(base);
+ }
+}
+
static inline uint32_t gicd_read_pidr2(uintptr_t base)
{
return io_read32(IO32_C(base + GICD_PIDR2_GICV3));
The existing gicd_write_ctlr() just writes GICD_CTLR, but per the GIC specification, after a write to GICD_CTLR software needs to wait until the RWP bit becomes zero.
Therefore, I think the gicd_set_ctlr() function from TF-A is a good choice.
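For reference, here is an illustrative usage sketch of how the new helper could be called from platform GIC init code (the bit positions are architectural, from the GICD_CTLR layout when accessed from Secure state with two security states supported; the macro and function names below are placeholders, not Hafnium's):

#include <stdint.h>

/* Placeholder names; architectural bit positions in GICD_CTLR
 * (Secure view): EnableGrp1S = bit[2], ARE_S = bit[4]. */
#define CTLR_ENABLE_G1S_BIT (1U << 2)
#define CTLR_ARE_S_BIT      (1U << 4)

/* gicd_set_ctlr() is the helper added by the patch above
 * (declared in gicv3_helpers.h once applied). */
static void plat_gicd_enable_g1s(uintptr_t gicd_base)
{
        /* Read-modify-write GICD_CTLR, then poll until RWP clears. */
        gicd_set_ctlr(gicd_base, CTLR_ENABLE_G1S_BIT | CTLR_ARE_S_BIT, 1U);
}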
I tried to push this patch to https://review.trustedfirmware.org/, but I encountered the error below.
error: RPC failed; HTTP 403 curl 22 The requested URL returned error: 403
Hi All,
We are pleased to announce the formal release of the Trusted Firmware-A version 2.10 bundle of project deliverables.
This includes Trusted Firmware-A, Trusted Firmware-A Tests, Hafnium, RMM and TF-A OpenCI Scripts/Jobs 2.10 releases involving the tagging of multiple repositories.
These went live on 22nd November 2023.
Please find references to tags and change logs at the end of this email.
Many thanks to the community for the active engagement in delivering this release!
Notable Features of the Version 2.10 Release are as follows:
TF-A/EL3 Root World
* New Features:
* Firmware handoff library support
* Improvements to BL31 runtime exception handling
* Context management refactoring for RME/4 worlds
* Gelas, Nevis & Travis CPU support
* Armv8.9 features enabled (FEAT_HAFT, RPRFM, LRCPC3, MTE_PERM)
TF-A Boot BL1/BL2
* New Features
* Trusted Boot support for ECDSA (Elliptic Curve Digital Signature Algorithm)
* Migrated to PSA Crypto APIs
* Improved the GUID Partition Table (GPT) parser.
* Various security improvements and threat model updates for Arm CCA
* Signer ID extraction implementation
Hafnium/SEL2 SPM
* New Features:
* FF-A v1.2: FFA_YIELD with time-out; communication with EL3 SPMD logical SPs (LSPs); memory sharing updates.
* Memory region relative base address field support in SP manifests.
* Interrupt re-configuration hypervisor calls.
* Memory management: S2 PT NS/S IPA split
* SMCCCv1.2+ compliance fixes.
* Feature parity test improvements, EL3 SPMC and Hafnium (S-EL2 SPMC)
TF-RMM/REL2
* New Feature/Support
* Fenimore v1.0 EAC5 aligned implementation.
* TFTF Enhancements for RME testing
* Initial CBMC support
* NS SME support in RMM
* BTI support for RMM
Errata
* Errata implemented (1x Cortex-X2/Matterhorn-ELP, 1x Cortex-A710/Matterhorn, 1x Neoverse N2/Perseus, 2x Neoverse V2/Demeter, Cortex-X3/Makalu-ELP, Cortex-A510/Klein)
* Fixed some minor versioning defects in a few errata that apply to follow-up revisions of the CPUs (Neoverse V1, Cortex-X2, Cortex-A710)
TF-A Tests
* Core
* Added errata management firmware interface tests.
* Added firmware handoff tests.
* Introduced RAS KFH support test.
* SPM/FF-A
* Support for the SMCCC v1.2 extended GP register set.
* Test SMCCC compliance at the non-secure physical instance.
* Test secure eSPI interrupt handling.
* Test FF-A v1.2 FFA_PARTITION_INFO_GET_REGS interface.
* RMM
* Added FPU/SVE/SME tests
* Added multiple REC single CPU tests.
* Added PAuth support in Realms tests.
* Added PMU tests.
Platform Support
* New platforms added:
* Aspeed AST2700, NXP IMX93, Intel Agilex5, Nuvoton NPCM845x, QTI MDM9607, MSM8909, MSM8939, ST STM32MP2
Release tags across repositories:
https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/tag/?h=v2.10
https://git.trustedfirmware.org/TF-A/tf-a-tests.git/tag/?h=v2.10
https://git.trustedfirmware.org/ci/tf-a-ci-scripts.git/tag/?h=v2.10
https://git.trustedfirmware.org/ci/tf-a-job-configs.git/tag/?h=v2.10
https://git.trustedfirmware.org/hafnium/hafnium.git/tag/?h=v2.10
https://git.trustedfirmware.org/ci/hafnium-ci-scripts.git/tag/?h=v2.10
https://git.trustedfirmware.org/ci/hafnium-job-configs.git/tag/?h=v2.10
https://git.trustedfirmware.org/TF-RMM/tf-rmm.git/tag/?h=tf-rmm-v0.4.0
Change logs:
https://trustedfirmware-a.readthedocs.io/en/v2.10/change-log.html#id1
https://trustedfirmware-a-tests.readthedocs.io/en/v2.10/change-log.html#ver…
https://hafnium.readthedocs.io/en/latest/change-log.html#v2-10
https://tf-rmm.readthedocs.io/en/tf-rmm-v0.4.0/about/change-log.html#v0-4-0
Regards,
Olivier.
Hi All,
The next release of the Firmware-A bundle of projects, tagged v2.10, has an expected code freeze date of Nov 7th, 2023.
Refer to the Release Cadence section from TF-A documentation (https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/tree/docs/about…).
Closing out the release takes around 6-10 working days after the code freeze.
Preparation tasks for the v2.10 release should start in the coming month.
We want to ensure that planned feature patches for the release are submitted in good time for the review process to conclude. As a kind recommendation, and to share CI resources fairly, please launch CI jobs with care, e.g.:
- For simple platform or docs changes, or one-liners, use the Allow-CI+1 label (no need for a full Allow-CI+2 run).
- For large patch stacks, use Allow-CI+2 at the top of the patch stack (and, if required, a few individual Allow-CI+1 labels in the middle of the stack).
- Carefully analyze results and fix the change if required, before launching new jobs on the same change.
- If, after issuing an Allow-CI+1 or Allow-CI+2 label, a Build start notice is not added as a Gerrit comment on the patch right away, please be patient: under heavy load CI jobs can be queued, and in extreme conditions it can be over an hour before the Build start notice is issued. Issuing another Allow-CI+1 or Allow-CI+2 label will just result in an additional job being queued.
Thanks & Regards,
Olivier.
Hi,
I have a question about Hafnium executing PSCI_CPU_ON.
What I observe is the following:
I define a single SP, with 2 physical CPUs and 2 vCPUs declared in the SPMC manifest (for the hypervisor) and in the secure partition manifest, respectively.
When more than 2 CPUs are running in the non-secure world, precisely at the moment when Linux brings up the 3rd CPU with PSCI_CPU_ON, the EL3 PSCI call is forwarded to Hafnium running in Secure EL2.
And this is where it hits:
since Hafnium doesn't find the 3rd CPU defined in the manifest, it dead-stops in the loop marked "Dead stop here if no more cpu" in cpu_entry:
.global cpu_entry
cpu_entry:
#if SECURE_WORLD == 1
/* Get number of cpus gathered from DT. */
adrp x3, cpu_count
add x3, x3, :lo12:cpu_count
ldr w3, [x3]
/* Prevent number of CPUs to be higher than supported by platform. */
cmp w3, #MAX_CPUS
bhi .
/* x0 points to first cpu in cpus array. */
adrp x0, cpus
add x0, x0, :lo12:cpus
/* Get current core affinity. */
get_core_affinity x1, x2
/* Dead stop here if no more cpu. */
0: cbz w3, 0b
sub w3, w3, #1
/* Get cpu id pointed to by x0 in cpu array. */
ldr x2, [x0, CPU_ID]
If I define this 3rd physical CPU in the Hafnium manifest, then Hafnium proceeds further from the code above. But it then hits plat_psci_cpu_resume and eventually aborts on the following check:
CHECK(vcpu_index < vm->vcpu_count);
cpu_init:
/* Call into C code, x0 holds the CPU pointer. */
bl cpu_main
/* Run the vCPU returned by cpu_main. */
bl vcpu_restore_all_and_run
It looks like Hafnium expects at least as many physical PEs as are running on the non-secure side to be defined in the SPMC manifest file, and it also expects the same number of vCPUs to be defined in the SP manifest file.
But my understanding is that the number of PEs running on the non-secure side could be greater than the number of PEs/vCPUs on the secure side.
From Hafnium's code flow, though, I don't think it supports that.
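For what it's worth, here is a tiny, purely illustrative model of the relation that CHECK enforces (the names are mine, not Hafnium's): as far as I can tell, for an MP SP each vCPU is bound 1:1 to a physical core, so powering on a core whose index is >= the SP's execution-ctx-count trips the assertion, which matches the abort above.

#include <assert.h>
#include <stdint.h>

/* Illustrative model only, not Hafnium code. */
struct sp_cfg {
        uint16_t vcpu_count; /* execution-ctx-count from the SP manifest */
};

static void on_psci_cpu_on(const struct sp_cfg *sp, uint16_t core_index)
{
        /* For an MP SP, vCPU n is bound to physical core n, so the
         * vCPU index equals the index of the core being powered on. */
        uint16_t vcpu_index = core_index;

        /* Mirrors CHECK(vcpu_index < vm->vcpu_count): bringing up a
         * 3rd core while the SP declares only 2 vCPUs fails here. */
        assert(vcpu_index < sp->vcpu_count);
}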
Regards,
Oza.