Hi Everyone
We are planning to change how TF-RMM clones and updates its submodule dependencies. The usual practice is to pass the `--recursive` option when cloning the project with git. This works well when the submodules themselves do not have dependencies. However, some dependent repositories, such as libspdm, have further dependencies (e.g. openssl, cmocka) which are not used in the RMM context. Hence, specifying the `--recursive` option is not the ideal solution, especially when RMM is deployed in continuous integration setups. This issue was previously worked around in RMM by fetching libspdm within the build context, but that was also not ideal, as it kept libspdm outside the git submodule framework and the git fetch was performed every time the project was rebuilt.
To solve this, we are proposing to move the management of the submodules into the build system and away from the user. Specifically, during the configuration phase of the project, CMake will issue `git submodule update --init --depth 1`.
This means that the user will no longer be responsible for syncing the submodules; the build system will take care of it. This also ties in with the build system's patching mechanism, as a particular SHA can be guaranteed to be checked out before a patch is applied.
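To illustrate the idea, here is a minimal sketch of what such a configure-time hook could look like in CMake. This is an assumption for illustration only (the exact mechanism and variable names may differ); please refer to the patch linked below for the actual change.

# Minimal sketch (assumption, not the actual patch): run a shallow,
# non-recursive submodule update at CMake configure time so that users
# never have to sync submodules manually.
find_package(Git REQUIRED)

execute_process(
    COMMAND ${GIT_EXECUTABLE} submodule update --init --depth 1
    WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
    RESULT_VARIABLE submodule_result)

if(NOT submodule_result EQUAL 0)
    message(FATAL_ERROR "git submodule update failed: ${submodule_result}")
endif()

Keeping the update non-recursive and shallow (`--depth 1`) avoids fetching the unused nested dependencies and full histories, which is the main concern for CI.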
The patch can be found here: https://review.trustedfirmware.org/c/TF-RMM/tf-rmm/+/33512
Any rebase of the project which updates the submodules will now be transparently applied without the user having to update the submodules manually.
We expect TF-RMM to gain more dependent submodules in the future, so it is better to script this within the build system. This change should not break any existing CI systems, as it is backward compatible, but it may become a little inefficient if the `--recursive` option is still specified, since unnecessary git repositories will be fetched.
Please let us know if you have any comments.
Best Regards
Soby Mathew
Hi All,
In preparation for the Firmware-A v2.12 bundle release, the following TF-A/TF-A-Tests/Hafnium/RMM/CI project tags have been applied:
https://git.trustedfirmware.org/TF-A/trusted-firmware-a/+/refs/tags/v2.12-r…
https://git.trustedfirmware.org/tf-a-tests/+/refs/tags/v2.12-rc0
https://git.trustedfirmware.org/ci/tf-a-ci-scripts/+/refs/tags/v2.12-rc0
https://git.trustedfirmware.org/ci/tf-a-job-configs/+/refs/tags/v2.12-rc0
https://git.trustedfirmware.org/hafnium/hafnium.git/+/refs/tags/v2.12-rc0
https://git.trustedfirmware.org/ci/hafnium-ci-scripts.git/+/refs/tags/v2.12…
https://git.trustedfirmware.org/ci/hafnium-job-configs.git/+/refs/tags/v2.1…
https://git.trustedfirmware.org/TF-RMM/tf-rmm/+/refs/tags/tf-rmm-v0.6.0-rc0
Trees are frozen but still accepting security or bug fixes until the release close-down, which happens at the end of next week (hopefully!).
For partners, it would help if tests were run against these trees on downstream platforms, so that any issues are spotted before the final tagging.
--
Thanks,
Govindraj R
________________________________
From: Govindraj Raja via TF-A-Tests <tf-a-tests(a)lists.trustedfirmware.org>
Sent: Monday, October 14, 2024 20:18
To: Joanna Farley via TF-A <tf-a(a)lists.trustedfirmware.org>; tf-a-tests(a)lists.trustedfirmware.org <tf-a-tests(a)lists.trustedfirmware.org>
Cc: hafnium(a)lists.trustedfirmware.org <hafnium(a)lists.trustedfirmware.org>; tf-rmm(a)lists.trustedfirmware.org <tf-rmm(a)lists.trustedfirmware.org>; trusted-services(a)lists.trustedfirmware.org <trusted-services(a)lists.trustedfirmware.org>
Subject: [Tf-a-tests] Firmware-A v2.12 release code freeze notification
Hi All,
The next release of the Firmware-A bundle of projects, tagged v2.12, has an expected code freeze date of Nov 8th, 2024.
Refer to the release cadence section of the TF-A documentation (https://trustedfirmware-a.readthedocs.io/en/latest/about/release-informatio…).
Closing out the release takes around 6-10 working days after the code freeze.
v2.12 release preparation tasks start from now.
We want to ensure that planned feature patches for the release are submitted in good time for the review process to conclude.
As a kind recommendation, and as a matter of sharing CI resources, please launch CI jobs with care, e.g.:
- For simple platform changes, docs changes, or one-liners, use the Allow-CI+1 label (no need for a full Allow-CI+2 run).
- For large patch stacks, use Allow-CI+2 at the top of the patch stack (and, if required, a few individual Allow-CI+1 labels in the middle of the patch stack).
- Carefully analyze the results and fix the change if required before launching new jobs on the same change.
- If, after issuing an Allow-CI+1 or Allow-CI+2 label, a Build Start notice is not added as a Gerrit comment on the patch right away, please be patient: under heavy load CI jobs can be queued, and in extreme conditions it can be over an hour before the Build Start notice is issued. Issuing another Allow-CI+1 or Allow-CI+2 label will just result in an additional job being queued.
--
Thanks,
Govindraj R
--
TF-A-Tests mailing list -- tf-a-tests(a)lists.trustedfirmware.org
To unsubscribe send an email to tf-a-tests-leave(a)lists.trustedfirmware.org
________________________________
From: Olivier Deprez
Sent: 30 October 2024 11:41
To: tf-a(a)lists.trustedfirmware.org <tf-a(a)lists.trustedfirmware.org>
Subject: TF-A Tech Forum regular call
Dear TF-A ML members,
As mentioned in https://www.trustedfirmware.org/meetings/tf-a-technical-forum/, trustedfirmware.org hosts regular technical calls on Thursdays. The page mentions TF-A, although in practice a number of Cortex-A projects beyond TF-A are discussed (refer to the prior recordings on that page).
Unfortunately this slot hasn't been very active recently.
With this email I would like to emphasize that this forum is open to the community (beyond trustedfirmware.org members) and that you are welcome to propose topics. Presentations/slides are not strictly necessary; we can also host informal discussions or Q&A sessions. If you think of a topic, please reach out to me and I'll be happy to accommodate it.
Thanks in advance for your contributions!
Regards,
Olivier.
Hi,
Please have a look at virtio-mem, which provides memory hotplug for VMs. It
is available in Linux, QEMU, cloud-hypervisor and libvirt:
https://libvirt.org/kbase/memorydevices.html#virtio-mem-model
Another reason to use virtio-mem rather than ACPI memory hotplug is to
keep complexity out of ACPI tables, in order to simplify remote
attestation which requires measuring or verifying the firmware tables.
For example, running QEMU with the following allows hotplugging 4G of memory into the VM:
-m 512M,maxmem=1T
-object memory-backend-ram,id=mem0,size=4G
-device virtio-mem-pci,id=vm0,memdev=mem0,node=0
Then at runtime the QEMU monitor can plug and unplug memory:
(qemu) qom-get vm0 size
(qemu) qom-set vm0 requested-size 1G
(qemu) qom-set vm0 requested-size 0
This works for a Realm VM, with a small change to the Linux guest:
https://jpbrucker.net/git/linux/commit/?h=cca/v4-hotplug&id=6b8768385fa464a…
(I'm not sure it's correct yet but may be worth adding to the initial guest
support.)
The host adds memory to the guest with
RMI_GRANULE_DELEGATE+RMI_DATA_CREATE_UNKNOWN, and removes it with
RMI_DATA_DESTROY+RMI_GRANULE_UNDELEGATE which ensures that the pages are
wiped before being returned to the host.
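For illustration, a rough host-side sketch of that sequence in C is below. The rmi_* wrapper names and signatures are hypothetical placeholders standing in for the corresponding RMI SMCs; the real KVM/RMM code differs in naming and detail.

/*
 * Hypothetical sketch of the add/remove flow described above.
 * The rmi_* placeholders stand in for SMCs issued to the RMM.
 */
#include <stdint.h>

typedef uint64_t phys_addr_t;

/* Placeholder wrappers: a real host would perform an SMC to the RMM here. */
static int rmi_granule_delegate(phys_addr_t pa) { (void)pa; return 0; }
static int rmi_data_create_unknown(phys_addr_t rd, phys_addr_t pa, uint64_t ipa)
{ (void)rd; (void)pa; (void)ipa; return 0; }
static int rmi_data_destroy(phys_addr_t rd, uint64_t ipa) { (void)rd; (void)ipa; return 0; }
static int rmi_granule_undelegate(phys_addr_t pa) { (void)pa; return 0; }

/* Hot-add one granule of memory to the Realm at guest address 'ipa'. */
int realm_add_granule(phys_addr_t rd, phys_addr_t pa, uint64_t ipa)
{
    int ret = rmi_granule_delegate(pa);          /* hand the page to the Realm world */

    if (ret)
        return ret;
    return rmi_data_create_unknown(rd, pa, ipa); /* map it as unmeasured DATA */
}

/* Hot-remove it again; this sequence ensures the page is wiped before
 * it is returned to the host. */
int realm_remove_granule(phys_addr_t rd, phys_addr_t pa, uint64_t ipa)
{
    int ret = rmi_data_destroy(rd, ipa);         /* destroy the DATA granule */

    if (ret)
        return ret;
    return rmi_granule_undelegate(pa);           /* give the wiped page back to the host */
}

The ordering matters: a page must be delegated before DATA can be created in it, and the DATA granule must be destroyed before the page can be undelegated and reused by the host.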
Virtual device hotplug works out of the box for a Realm VM. The VM needs
to have root ports allowing hotplug (see
https://www.libvirt.org/pci-hotplug.html#aarch64-architecture ), and the
guest kernel must have PCIe hotplug enabled. For example this adds a root
port in QEMU:
-device pcie-root-port,chassis=1,id=pcie.1,bus=pcie.0
Then in the monitor add a virtio-net device:
(qemu) netdev_add user,id=net1
(qemu) device_add virtio-net-pci,netdev=net1,bus=pcie.1,id=hp0
[ 300.003234] pcieport 0000:00:03.0: pciehp: Slot(0): Button press: will power on in 5 sec
...
[ 300.798772] virtio-pci 0000:01:00.0: enabling device (0000 -> 0002)
# lspci
01:00.0 Ethernet controller: Red Hat, Inc. Virtio 1.0 network device (rev 01)
And remove it:
(qemu) device_del hp0
The security model of hotplug is equivalent to regular PCI support in a
Realm: the guest should only interact with devices whose driver has been
hardened against untrusted hosts, and with devices authenticated via
CMA-SPDM.
Thanks,
Jean
Cloud vendors hope that cloud servers have hot-plug capabilities for CPU, memory, and devices. In confidential virtual machine scenarios, the measurement values will change after hot-plug, and rmi_data_create needs to be called to dynamically update the device tree information. Could you please share CCA's plan for the hot-plug capability? And, under the security model of confidential virtual machines, should hot-plug be supported for confidential virtual machines?
Hi all,
I am working with the FVP (Base RevC AEM) and the Arm reference solution (https://gitlab.arm.com/arm-reference-solutions/arm-reference-solutions-docs…). I want to measure the overhead of a target ML workload in a realm VM compared to a normal-world VM. Both VMs are created with this command:
nice -n -20 taskset -c 1 lkvm run --realm -c 1 -m 350 -k /root/VM_image/Image -i /root/VM_image/VM-fs.cpio --irqchip=gicv3
The target workload code and data are included in VM-fs.cpio. I also use GenericTrace to measure the number of instructions executed by core 1 (taskset -c 1 pins the VM process to core 1), and ToggleMTIPlugin to enable/disable tracing at particular points (at the beginning and end of the target workload inside the VM). What I am experiencing is that the numbers for the normal-world VM are very stable (271 million instructions), but the numbers for the realm VM vary widely between runs (from 314 million to 463 million, and even 7671 million!). I do all measurements in the same FVP run: I create a NW VM, run the target workload and destroy the VM, then create a realm VM, run the target workload and destroy it, repeat these steps several times, and then terminate the FVP. I guess something in the path from the realm to the hypervisor (either the RMM or the secure monitor) makes the numbers unstable. Have you ever seen such a problem, and have you found a way to reliably measure the number of instructions for realm workloads?
Thanks,
Sina
Hi All,
We are pleased to announce the formal release of Trusted Firmware-A version 2.10 bundle of project deliverables.
This includes Trusted Firmware-A, Trusted Firmware-A Tests, Hafnium, RMM and TF-A OpenCI Scripts/Jobs 2.10 releases involving the tagging of multiple repositories.
These went live on 22nd November 2023.
Please find references to tags and change logs at the end of this email.
Many thanks to the community for the active engagement in delivering this release!
Notable Features of the Version 2.10 Release are as follows:
TF-A/EL3 Root World
* New Features:
* Firmware handoff library support
* Improvements to BL31 runtime exception handling
* Context management refactoring for RME/4 worlds
* Gelas, Nevis & Travis CPUs support
* V8.9 features enabled (FEAT_HAFT, RPRFM, LRCPC3, MTE_PERM)
TF-A Boot BL1/BL2
* New Features
* Trusted Boot support for ECDSA (Elliptic Curve Digital Signature Algorithm)
* Migrated to PSA crypto APIs
* Improved the GUID Partition Table (GPT) parser
* Various security improvements and threat model updates for Arm CCA
* Signer ID extraction implementation
Hafnium/SEL2 SPM
* New Features:
* FF-A v1.2: FFA_YIELD with time-out; EL3 SPMDs LSPs communication; memory sharing updates.
* Memory region relative base address field support in SP manifests.
* Interrupt re-configuration hypervisor calls.
* Memory management: S2 PT NS/S IPA split
* SMCCCv1.2+ compliance fixes.
* Feature parity test improvements, EL3 SPMC and Hafnium (S-EL2 SPMC)
TF-RMM/REL2
* New Feature/Support
* Fenimore v1.0 EAC5 aligned implementation.
* TFTF Enhancements for RME testing
* Initial CBMC support
* NS SME support in RMM
* BTI support for RMM
Errata
* Errata implemented (1x Cortex-X2/Matterhorn-ELP, 1x Cortex-A710/Matterhorn, 1x Neoverse N2/Perseus, 2x Neoverse V2/Demeter, Makalu-ELP/Cortex-X3, Klein/Cortex-A510)
* Fixed some minor version-related defects in a few errata that apply to follow-up revisions of the CPUs (Neoverse V1, Cortex-X2, Cortex-A710)
TF-A Tests
* Core
* Added errata management firmware interface tests.
* Added firmware handoff tests.
* Introduced RAS KFH support test.
* SPM/FF-A
* Support SMCCCv1.2 extended GP registers set.
* Test SMCCC compliance at the non-secure physical instance.
* Test secure eSPI interrupt handling.
* Test FF-A v1.2 FFA_PARTITION_INFO_GET_REGS interface.
* RMM
* Added FPU/SVE/SME tests
* Added multiple REC single CPU tests.
* Added PAuth support in Realms tests.
* Added PMU tests.
Platform Support
* New platforms added:
* Aspeed AST2700, NXP IMX93, Intel Agilex5, Nuvoton NPCM845x, QTI MDM9607, MSM8909, MSM8939, ST STM32MP2
Release tags across repositories:
https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/tag/?h=v2.10
https://git.trustedfirmware.org/TF-A/tf-a-tests.git/tag/?h=v2.10
https://git.trustedfirmware.org/ci/tf-a-ci-scripts.git/tag/?h=v2.10
https://git.trustedfirmware.org/ci/tf-a-job-configs.git/tag/?h=v2.10
https://git.trustedfirmware.org/hafnium/hafnium.git/tag/?h=v2.10
https://git.trustedfirmware.org/ci/hafnium-ci-scripts.git/tag/?h=v2.10
https://git.trustedfirmware.org/ci/hafnium-job-configs.git/tag/?h=v2.10
https://git.trustedfirmware.org/TF-RMM/tf-rmm.git/tag/?h=tf-rmm-v0.4.0
Change logs:
https://trustedfirmware-a.readthedocs.io/en/v2.10/change-log.html#id1
https://trustedfirmware-a-tests.readthedocs.io/en/v2.10/change-log.html#ver…
https://hafnium.readthedocs.io/en/latest/change-log.html#v2-10
https://tf-rmm.readthedocs.io/en/tf-rmm-v0.4.0/about/change-log.html#v0-4-0
Regards,
Olivier.
Hi All,
The next release of the Firmware-A bundle of projects, tagged v2.10, has an expected code freeze date of Nov 7th, 2023.
Refer to the Release Cadence section of the TF-A documentation (https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/tree/docs/about…).
Closing out the release takes around 6-10 working days after the code freeze.
Preparation tasks for the v2.10 release should start in the coming month.
We want to ensure that planned feature patches for the release are submitted in good time for the review process to conclude. As a kind recommendation, and as a matter of sharing CI resources, please launch CI jobs with care, e.g.:
- For simple platform changes, docs changes, or one-liners, use the Allow-CI+1 label (no need for a full Allow-CI+2 run).
- For large patch stacks, use Allow-CI+2 at the top of the patch stack (and, if required, a few individual Allow-CI+1 labels in the middle of the patch stack).
- Carefully analyze the results and fix the change if required before launching new jobs on the same change.
- If, after issuing an Allow-CI+1 or Allow-CI+2 label, a Build Start notice is not added as a Gerrit comment on the patch right away, please be patient: under heavy load CI jobs can be queued, and in extreme conditions it can be over an hour before the Build Start notice is issued. Issuing another Allow-CI+1 or Allow-CI+2 label will just result in an additional job being queued.
Thanks & Regards,
Olivier.