Hello all,
We currently have a patch under review [1] which will break the existing dependency between the 'run-coverage' and 'run-unittests' rules in the RMM build system.
This means that once the patch is merged, 'run-coverage' will no longer build and run the RMM unittests. If run in isolation, it will generate an empty coverage report by default.
This allows us to get coverage analysis for specific tests rather than for all the unittests, which makes writing unittests for new modules easier, as we can see the current coverage for those modules without it being tainted by the rest of the unittests.
Please note that in order to run coverage analysis on the whole existing set of unittests (as 'run-coverage' previously did), 'run-unittests' must be invoked first.
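As an illustration, assuming a CMake build tree already configured for unittests (the build directory path below is hypothetical; the target names are those from this message), the full-suite workflow would look like:

```shell
# Build and run the complete unittest suite first,
# so that coverage data exists for all modules.
cmake --build path/to/build -- run-unittests

# Then generate the coverage report over everything that has run.
# Invoked on its own, this would produce an empty report.
cmake --build path/to/build -- run-coverage
```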
The patch, which is currently open for discussion and review, includes instructions on how to run 'run-coverage' to get different types of analysis.
Thanks,
Javier
[1]: https://review.trustedfirmware.org/c/TF-RMM/tf-rmm/+/23039
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
Hello everyone,
I would like to let you know that there is a first draft of the Threat Model for the TF-RMM ready for review. You can check it out and leave your comments here: https://review.trustedfirmware.org/c/TF-RMM/tf-rmm/+/20477
Best regards,
Javier
Hello,
QEMU 8.1 added support for FEAT_RME. It is experimental, enabled with
'-cpu max,x-rme=on', and requires fixes that will be available in QEMU
8.2. I'm working on adding support to TF-A and TF-RMM.
I just submitted some TF-A patches for review (topic qemu-rme) that enable
the feature for the virt platform, and intend to send the necessary
changes for TF-RMM within a week or so. Without an RMM, the Test Realm
Payload is included in the FIP for light testing.
After the TF-A and TF-RMM changes, I'd like to add support for the SBSA
platform as well, which should be a relatively small change once the
common QEMU support is merged.
Thanks,
Jean
---
Building TF-A for QEMU with RME support:
make -j CROSS_COMPILE=aarch64-linux-gnu- PLAT=qemu DEBUG=1 \
    RMM=path/to/rmm/build/Debug/rmm.img ENABLE_RME=1 \
    BL33=path/to/QEMU_EFI.fd QEMU_USE_GIC_DRIVER=QEMU_GICV3 \
    all fip
dd if=tf-a/build/qemu/debug/bl1.bin of=flash.bin bs=4096 conv=notrunc
dd if=tf-a/build/qemu/debug/fip.bin of=flash.bin seek=64 bs=4096 conv=notrunc
Running QEMU, for example:
qemu-system-aarch64 -cpu max,x-rme=on,sme=off -m 3G -smp 8 \
    -M virt,gic-version=3,virtualization=on,secure=on,acpi=off \
    -bios flash.bin \
    -kernel linux/arch/arm64/boot/Image \
    -initrd path/to/initrd \
    -append console=ttyAMA0 \
    -nographic
...
[ 0.825891] kvm [1]: Using prototype RMM support (version 66.0)
SMC_RMM_FEATURES 0 > RMI_SUCCESS 33403e30
Hi All,
Note you may have received another instance of this note; when I
attempted to send to all TF mailing lists simultaneously it seemed to
fail, so I am sending to each one at a time. Sorry about that. :/
We've created a Discord Server for real time chats/sharing. This solution
comes at no cost to the project, is set up with channels for each project,
includes a #general channel, and supports direct 1-1 chats between members,
all with the goal of improving collaboration between trustedfirmware.org
developers.
We encourage all to join! :) Instructions for joining can be found on
the TF.org FAQ page <https://www.trustedfirmware.org/faq/>.
See you all there and please don't hesitate to reach out if you have any
questions!
Don Harbin
TrustedFirmware Community Manager
don.harbin(a)linaro.org
Hi Everyone,
A new Discord channel has been created for TF-RMM under the TrustedFirmware umbrella. If you would like to join the channel, please use the invite link: https://discord.gg/ay5gSXnGg4
Looking forward to discussions on the channel.
Best Regards
Soby Mathew
Hi Everyone,
This is a heads-up about a planned TF-RMM alignment to the RMM EAC2 specification. The required changes have been merged to a branch: https://git.trustedfirmware.org/TF-RMM/tf-rmm.git/log/?h=topics/rmm-eac2 . Once integration testing with the kernel components has completed successfully, we expect to merge this branch back to the `main` branch. Details about suitable kernel/kvmtool/kvm-unit-tests branches will be published later.
The planned delivery timelines can be found in the GitHub project plan at this link: https://github.com/orgs/TF-RMM/projects/2/views/1
Best Regards
Soby Mathew
Hi all,
We are introducing support for FEAT_LPA2 into TF-RMM (patches are available here <https://review.trustedfirmware.org/q/topic:%22lpa2-support%22+(status:open%…>). Once the patches are merged, FEAT_LPA2 will be a mandatory feature for TF-RMM to work. This means that the FVP will need to have this feature enabled when running. To do so, the following changes are needed on the command line:
- -C cluster0.PA_SIZE=48
+ -C cluster0.PA_SIZE=52
+ -C cluster0.has_large_va=2
+ -C cluster0.has_52bit_address_with_4k=2
This needs to be applied to all the clusters on the model.
In addition to that, "arch_version" on the model needs to be set to "8.7" or higher.
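Putting the options above together, a two-cluster invocation might look like the sketch below. The model binary name and the exact parameter path for "arch_version" are assumptions for illustration; please check your model's --list-params output for the correct names.

```shell
# Hypothetical FVP command line with FEAT_LPA2 enabled on both clusters.
# Binary name and the arch_version parameter path are assumptions.
FVP_Base_RevC-2xAEMvA \
    -C cluster0.arch_version=8.7 \
    -C cluster0.PA_SIZE=52 \
    -C cluster0.has_large_va=2 \
    -C cluster0.has_52bit_address_with_4k=2 \
    -C cluster1.arch_version=8.7 \
    -C cluster1.PA_SIZE=52 \
    -C cluster1.has_large_va=2 \
    -C cluster1.has_52bit_address_with_4k=2
```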
The changes can be applied at any point from now on, as at the moment TF-RMM is able to work both with and without FEAT_LPA2. Once the patches are merged, though, TF-RMM will fail during boot if FEAT_LPA2 is not available.
Thank you very much.
Best regards,
Javier